[LinchPin] Help on using linchpin as a project library rather than a package

Clint Savage herlo at redhat.com
Wed Dec 20 15:46:03 UTC 2017


Chris,

I'm glad to hear you were able to get everything working with dependencies.
If you have further issues with other providers, please let me know.

I did want to check in with you on the API itself. I figure you are using
v1.2.1 (or what's at the head of the develop branch), correct? If so, I
wanted to encourage you to check out the latest code in the release1.5
branch.

If you decide to take a look, you'll notice a few things moved around. This
was mostly to make things match up with how other Python libraries are
packaged. The main reason for this change was to simplify the LinchpinAPI
class. Specifically, the calls to lp_up, lp_destroy, etc. have been moved
out of the LinchpinAPI class and into the LinchpinCli class. The purpose
of this was to make it so you can just pass a dictionary to the
`do_action` method[1]. Everything else pretty much stays the same.
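To make that concrete, here is a rough sketch of the dictionary you might pass. The key names and the `action` parameter are my assumptions from reading the code, so check the `do_action` method linked below [1] for the real contract:

```python
# Hypothetical provision data for do_action(); the exact keys LinchPin
# expects may differ -- treat this as a sketch, not the real schema.
provision_data = {
    "openstack-target": {
        "topology": "openstack.yml",
        "layout": "openstack-layout.yml",
    },
}

# With a configured LinchpinAPI instance (call shape assumed):
#   api.do_action(provision_data, action="up")
```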

The LinchPin API now works more like a standard Ansible API caller.
Essentially, each resource_group has a resource_group_type, and this type is
matched up with a playbook found in the 'linchpin/provision/' directory.
For instance, using your openstack example, the topology identifies a
resource_group_type of 'openstack'. From that, the API looks up the
linchpin/provision/openstack.yml playbook, and executes it. This provides
more flexibility for adding more roles quickly. The topology format has
been updated a little, and is a bit more strict, which provides us with a
way to validate the 'resource_definitions' and pass them to the playbook.
In turn, a developer can quickly create a playbook, schema, and role for a
new provider with much less effort than before.
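As an illustration of that lookup (my own sketch, not LinchPin's actual code), the mapping from a resource_group_type to its playbook is essentially a path join:

```python
import os

def playbook_for(resource_group_type, base="linchpin/provision"):
    """Resolve a resource_group_type to its provisioning playbook path.

    Illustrative only -- LinchPin's real resolution code may differ.
    """
    return os.path.join(base, "{0}.yml".format(resource_group_type))

# A topology declaring a resource_group_type of 'openstack' resolves to
# the linchpin/provision/openstack.yml playbook.
```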

If you want to continue as before, however, you can still instantiate the
LinchpinCli class, set it up, and then call lp_up/lp_destroy, passing the
PinFile and targets as before. Everything should be backward compatible;
the LinchpinAPI class handles converting old topologies to the new
format.
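That call pattern looks roughly like the following. FakeLinchpinCli here is a stand-in of my own to show the shape of the calls only; the real LinchpinCli setup and method signatures may differ:

```python
# Stand-in class illustrating only the call pattern; the real
# LinchpinCli constructor and method signatures may differ.
class FakeLinchpinCli(object):
    def lp_up(self, pinfile, targets=()):
        # Provision each requested target from the PinFile.
        return dict((t, "provisioned") for t in targets)

    def lp_destroy(self, pinfile, targets=()):
        # Tear down each requested target.
        return dict((t, "destroyed") for t in targets)

cli = FakeLinchpinCli()
up_results = cli.lp_up("PinFile", targets=["openstack-target"])
down_results = cli.lp_destroy("PinFile", targets=["openstack-target"])
```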

Anyway, I'm still writing documentation for the release1.5 branch, which can
be seen here: http://herlo-linchpin.readthedocs.io/en/docs1.5/index.html

Now that I've said all of this, I hope it helps. If not, it is pretty safe
to ignore.

Cheers,

herlo

1-
https://github.com/CentOS-PaaS-SIG/linchpin/blob/release1.5/linchpin/__init__.py#L380

On Wed, Dec 20, 2017 at 7:29 AM, Chris Beer <cbeer at redhat.com> wrote:

>
>
> On Wed, Dec 20, 2017 at 9:08 AM, Greg Hellings <ghelling at redhat.com>
> wrote:
>
>>
>>
>> On Dec 20, 2017 07:58, "Chris Beer" <cbeer at redhat.com> wrote:
>>
>> Clint,
>>
>> Actually, I already have it working. So, at least for running against
>> OpenStack, there are no 'gotchas' to worry about.
>>
>> My code uses the LinchPin Python API to provision and teardown nodes. It
>> runs in an OpenShift container (CentOS7) as a dynamic Jenkins slave. I
>> installed all the dependencies (found in the LinchPin repo) when I created
>> my container without actually installing LinchPin. Then my test code pulls
>> the LinchPin repo and adds the directory to my PYTHONPATH.
>>
>>
>> You could also do a "pip install ." to install it from your local
>> checkout. That would even take care of any added dependencies since the
>> last time you updated the image.
>>
>> --Greg
>>
>>
>> Greg,
>
> Actually, I can't. This code is running in a container, which means that
> most of it is a read-only file system. I'm not setting up a virtual
> environment (the container is the virtual environment), so trying to
> install any packages after the container is up and running would fail.
>
> Also, realize that this is just for development. I needed to be able to
> hack up the LinchPin code to get it working (a pull request has already
> been submitted) and add debugging so I could determine why my
> configuration was failing, which I did. Once I have the rest of the
> system working properly, I'll back out my hack and install LinchPin when
> I create the container.
>
> In case anyone is curious - the reason I did this was to save time during
> development. When I started, I would just change the LinchPin code and
> rebuild the container. But rebuilding the container takes ~15 minutes, and
> I have to rebuild it each time I make a change. By pulling the LinchPin
> repo as part of the test instead, I could make a change to the LinchPin
> code and test it in less than 20 seconds. Also, Ansible tries to write a
> retry file to the roles directory when a failure occurs; when LinchPin is
> installed as a Python package, that directory sits on a read-only file
> system, so every failure produces extra write errors that muddy the logs.
> When I pull the repo as part of my test, the code is in a read-write file
> system on the container and those extra errors go away.
>
> --
>
> Christopher Beer
>
> Principal Software Engineer
>
> Red Hat System Design and Engineering Organization
> <https://www.redhat.com>
>
> Westford, Massachusetts, USA
> <https://red.ht/sig>
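The checkout-on-PYTHONPATH approach Chris describes can be sketched like this; the checkout path is a placeholder for wherever the test pulls the repo:

```python
import os
import sys

# Sketch of the workflow above: put a writable git checkout of LinchPin
# on the import path instead of pip-installing the package. The path
# here is a placeholder.
checkout = os.path.expanduser("~/src/linchpin")

# Equivalent to `export PYTHONPATH=~/src/linchpin:$PYTHONPATH` before
# the test run; the checkout then shadows any installed copy.
if checkout not in sys.path:
    sys.path.insert(0, checkout)

# import linchpin  # would now resolve to the writable checkout
```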
>