[rdo-list] Packstack refactor and future ideas

Tom Buskey tom at buskey.name
Wed Jun 22 15:18:08 UTC 2016


Having a tiny, multi-node, non-HA cloud has been extremely useful for
developing anything that needs an OpenStack cloud to talk to.

We don't care if the cloud dies: our VM images and recipes are stored
elsewhere.  We rebuild clouds as needed, and if needed we migrate to another
working cloud while we rebuild.

We need to test with more than one compute node, but not five.  For example,
migration can't be tested on a single node!

Packstack is ideal for this.  A 2-node cloud where both nodes do compute
lets us run 20-30 VMs for testing.  We also need multiple clouds.

Our "production" clouds need a minimum of 3 nodes, which means 50% more cost
while the controller sits mostly idle.  Going from 1 node to 5 (7?) for a
"real production" cloud is a big leap.

We don't need HA, and the load on the controller is light enough that it can
also handle compute.
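
For example, a minimal sketch (the IPs are placeholders): starting from a
stock answer file, the only edit a 2-node all-compute cloud needs is listing
the controller itself among the compute hosts:

    packstack --gen-answer-file=two-node.txt
    # let the controller (192.168.0.10) double as a compute node
    sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.0.10,192.168.0.11/' \
        two-node.txt
    packstack --answer-file=two-node.txt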

On Mon, Jun 20, 2016 at 10:11 AM, Ivan Chavero <ichavero at redhat.com> wrote:

>
>
> ----- Original Message -----
> > From: "Boris Derzhavets" <bderzhavets at hotmail.com>
> > To: "Javier Pena" <javier.pena at redhat.com>
> > Cc: "alan pevec" <alan.pevec at redhat.com>, "rdo-list" <rdo-list at redhat.com>
> > Sent: Monday, June 20, 2016 8:35:52 AM
> > Subject: Re: [rdo-list] Packstack refactor and future ideas
> >
> > From: Javier Pena <javier.pena at redhat.com>
> > Sent: Monday, June 20, 2016 7:44 AM
> > To: Boris Derzhavets
> > Cc: rdo-list; alan pevec
> > Subject: Re: [rdo-list] Packstack refactor and future ideas
> > ----- Original Message -----
> >
> > > From: rdo-list-bounces at redhat.com <rdo-list-bounces at redhat.com>
> > > on behalf of Javier Pena <javier.pena at redhat.com>
> > > Sent: Friday, June 17, 2016 10:45 AM
> > > To: rdo-list
> > > Cc: alan pevec
> > > Subject: Re: [rdo-list] Packstack refactor and future ideas
> >
> > > ----- Original Message -----
> > > > > We could take an easier way and assume we only have 3 roles, as in
> > > > > the current refactored code: controller, network, compute. The logic
> > > > > would then be:
> > > > > - By default we install everything, so all in one
> > > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of
> > > > > CONFIG_NETWORK_HOSTS, we apply the network manifest
> > > > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS
> > > > >
> > > > > Of course, the last two options would assume a first server is
> > > > > installed as controller.
> > > > >
> > > > > This would allow us to reuse the same answer file on all runs (one
> > > > > per host as you proposed), eliminate the ssh code as we are always
> > > > > running locally, and make some assumptions in the python code, like
> > > > > expecting OPM to be deployed and such. A contributed ansible wrapper
> > > > > to automate the runs would be straightforward to create.
> > > > >
> > > > > What do you think? Would it be worth the effort?
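
(Sketched in shell, that per-host dispatch might look roughly like the
following; apply_manifest is a hypothetical placeholder and MY_IP stands
for the local host's address from the shared answer file:)

    if [ "$MY_IP" = "$CONFIG_CONTROLLER_HOST" ]; then
        apply_manifest controller   # default: install everything, all-in-one
    else
        # a non-controller host may carry the network and compute roles at once
        case ",$CONFIG_NETWORK_HOSTS," in *",$MY_IP,"*) apply_manifest network ;; esac
        case ",$CONFIG_COMPUTE_HOSTS," in *",$MY_IP,"*) apply_manifest compute ;; esac
    fi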
> > > >
> > > > +2 I like that proposal a lot! An ansible wrapper is then just an
> > > > example playbook in docs but could be done w/o ansible as well,
> > > > manually or using some other remote execution tooling of user's
> > > > choice.
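
(For instance, a bare-bones wrapper without ansible could just copy the same
answer file to every host and run packstack locally there; the host list and
path below are made up:)

    for host in 192.168.0.10 192.168.0.11 192.168.0.12; do
        scp answers.txt "$host":/root/answers.txt
        ssh "$host" packstack --answer-file=/root/answers.txt
    done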
> > > >
> > > Now that the phase 1 refactor is under review and passing CI, I think
> > > it's time to come to a conclusion on this.
> > > This option looks like the best compromise between keeping it simple and
> > > dropping the fewest possible features. So unless someone has a better
> > > idea, I'll work on that as soon as the current review is merged.
> > >
> > > Would it be possible to have:
> > >
> > > - By default we install everything, so all in one
> > > - If our host is not CONFIG_CONTROLLER_HOST but is part of
> > > CONFIG_NETWORK_HOSTS, we apply the network manifest
> > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS
> > > - If our host is not CONFIG_CONTROLLER_HOST but is part of
> > > CONFIG_STORAGE_HOSTS, we apply the storage manifest
> > >
> > > Just one more role. May we have 4 roles?
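
(Following the earlier sketch, the fourth role would just be one more branch,
assuming the answer file still carries a CONFIG_STORAGE_HOSTS key:)

    case ",$CONFIG_STORAGE_HOSTS," in *",$MY_IP,"*) apply_manifest storage ;; esac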
> >
> > This is a tricky one. There used to be support for separate
> > CONFIG_STORAGE_HOSTS, but I think it has been removed (or at least not
> > tested for quite a long time).
>
> This option is still there; it is set as "unsupported". I think it might be
> a good idea to keep it.
>
> What do you guys think?
>
>
> > However, this feature currently works for RDO Mitaka (and it works for
> > Liberty as well).
> > It's even possible to add a storage node via packstack, taking care of the
> > glance and swift proxy keystone endpoints manually.
> > For small production deployments, like several (5-10) Haswell Xeon boxes
> > with no HA requirements from the customer's side, the ability to split the
> > node hosting storage, specifically Swift (AIO) instances or Cinder
> > iSCSI/LVM back ends, off from the controller is an extremely critical
> > feature. What I am writing is based on several projects committed in South
> > American countries.
> > I have had no complaints from site support staff about configurations
> > deployed via Packstack.
> > Dropping this feature (unsupported, but working stably) would surely turn
> > Packstack into an almost useless toy.
> > As things stand, I can only play with TripleO QuickStart, because the
> > upstream docs (the Mitaka trunk instruction set) for instack-virt-setup
> > don't allow `openstack undercloud install` to complete, which makes this
> > howto:
> >
> > https://remote-lab.net/rdo-manager-ha-openstack-deployment
> >
> > non-reproducible. I have nothing against the turn to TripleO, but the
> > absence of high-quality Red Hat manuals for TripleO bare metal / TripleO
> > instack-virt-setup will affect the RDO community in a widespread way. I
> > mean, first of all, countries like Chile, Brazil, China, etc.
> >
> > Thank you.
> > Boris.
> >
> > This would need to be a follow-up review, if it is finally decided to do so.
> >
> > Regards,
> > Javier
> >
> > > Thanks
> > > Boris.
> >
> > > Regards,
> > > Javier
> >
> > > > Alan
> > > >
> >
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>