From rbowen at redhat.com Mon Aug 1 14:33:40 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 1 Aug 2016 10:33:40 -0400
Subject: [rdo-list] ask.openstack.org unanswered 'RDO' questions
Message-ID: <5d540d90-f768-532b-0825-a9be82748697@redhat.com>

40 unanswered questions:

You are not authorized : identity:create_service
https://ask.openstack.org/en/question/94973/you-are-not-authorized-identitycreate_service/
Tags: devstack#mitaka, mitaka-openstack, mitaka

RDO - is there any fedora package newer than puppet-4.2.1-3.fc24.noarch.rpm
https://ask.openstack.org/en/question/94969/rdo-is-there-any-fedora-package-newer-than-puppet-421-3fc24noarchrpm/
Tags: rdo, puppet, install-openstack

OpenStack RDO mysqld 100% cpu
https://ask.openstack.org/en/question/94961/openstack-rdo-mysqld-100-cpu/
Tags: openstack, mysqld, cpu

Failed to set RDO repo on host-packstact-centOS-7
https://ask.openstack.org/en/question/94828/failed-to-set-rdo-repo-on-host-packstact-centos-7/
Tags: openstack-packstack, centos7, rdo

how to deploy haskell-distributed in RDO?
https://ask.openstack.org/en/question/94785/how-to-deploy-haskell-distributed-in-rdo/
Tags: rdo

How to set quota for domain and have it shared with all the projects/tenants in domain
https://ask.openstack.org/en/question/94105/how-to-set-quota-for-domain-and-have-it-shared-with-all-the-projectstenants-in-domain/
Tags: domainquotadriver

rdo tripleO liberty undercloud install failing
https://ask.openstack.org/en/question/94023/rdo-tripleo-liberty-undercloud-install-failing/
Tags: rdo, rdo-manager, liberty, undercloud, instack

Add new compute node for TripleO deployment in virtual environment
https://ask.openstack.org/en/question/93703/add-new-compute-node-for-tripleo-deployment-in-virtual-environment/
Tags: compute, tripleo, liberty, virtual, baremetal

Unable to start Ceilometer services
https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/
Tags: ceilometer, ceilometer-api

Adding hard drive space to RDO installation
https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/
Tags: cinder, openstack, space, add

AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack
https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/
Tags: openstack, networking, aws

ceilometer: I've installed openstack mitaka. but swift stops working when i configured the pipeline and ceilometer filter
https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/
Tags: ceilometer, openstack-swift, mitaka

Fail on installing the controller on Cent OS 7
https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/
Tags: installation, centos7, controller

the error of service entity and API endpoints
https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/
Tags: service, entity, and, api, endpoints

Running delorean fails: Git won't fetch sources
https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/
Tags: delorean, rdo

Liberty RDO: stack resource topology icons are pink
https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/
Tags: stack, resource, topology, dashboard

Build of instance aborted: Block Device Mapping is Invalid.
https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/
Tags: cinder, lvm, centos7

No handlers could be found for logger "oslo_config.cfg" while syncing the glance database
https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/
Tags: liberty, glance, install-openstack

how to use chef auto manage openstack in RDO?
https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/
Tags: chef, rdo

Separate Cinder storage traffic from management
https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/
Tags: cinder, separate, nic, iscsi

Openstack installation fails using packstack, failure is in installation of openstack-nova-compute. Error: Dependency Package[nova-compute] has failures
https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/
Tags: novacompute, rdo, packstack, dependency, failure

CentOS OpenStack - compute node can't talk
https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/
Tags: rdo

How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ?
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/
Tags: rdo, liberty, swift, ha

VM and container can't download anything from internet
https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/
Tags: rdo, neutron, network, connectivity

Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/
https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/
Tags: keyboard, map, keymap, vncproxy, novnc

OpenStack-Docker driver failed
https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/
Tags: docker, openstack, liberty

Sahara SSHException: Error reading SSH protocol banner
https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/
Tags: sahara, icehouse, ssh, vanila

Error Sahara create cluster: 'Error attach volume to instance
https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/
Tags: sahara, attach-volume, vanila, icehouse

Creating Sahara cluster: Error attach volume to instance
https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/
Tags: sahara, attach-volume, hadoop, icehouse, vanilla

-- 
Rich Bowen - rbowen at redhat.com
RDO Community Liaison
http://rdoproject.org
@RDOCommunity

From hguemar at fedoraproject.org Mon Aug 1 15:00:03 2016
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 1 Aug 2016 15:00:03 +0000 (UTC)
Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting
Message-ID: <20160801150003.D103460A4003@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
RDO meeting on 2016-08-03 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about:
RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting)

Every Wednesday on #rdo on Freenode IRC

Source: https://apps.fedoraproject.org/calendar/meeting/2017/
From abregman at redhat.com Mon Aug 1 15:21:28 2016
From: abregman at redhat.com (Arie Bregman)
Date: Mon, 1 Aug 2016 18:21:28 +0300
Subject: [rdo-list] Multiple tools for deploying and testing TripleO
Message-ID: 

Hi,

I would like to start a discussion on the overlap between the tools we
have for deploying and testing TripleO (RDO & RHOSP) in CI.

Several months ago, we worked on one common framework for deploying and
testing OpenStack (RDO & RHOSP) in CI. I think you can say it didn't
work out well, which eventually led each group to focus on developing
other existing/new tools.

What we have right now for deploying and testing
--------------------------------------------------------

=== Component CI, Gating ===
I'll start with the projects we created, I think that's only fair :)

* Ansible-OVB[1] - Provisioning a TripleO Heat stack, using the OVB project.

* Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release.

* Octario[3] - Testing using RPMs (pep8, unit, functional, tempest,
csit) + Patching RPMs with submitted code.

=== Automation, QE ===
* InfraRed[4] - Provision, install and test. Pluggable and modular;
allows you to create your own provisioner, installer and tester.

As far as I know, the group is now working on a different structure of
one main project and three sub-projects (provision, install and test).

=== RDO ===
I didn't use the RDO tools, so I apologize if I got something wrong:

* About ~25 micro, independent Ansible roles[5]. You can either choose
to use one of them or several together. They are used for provisioning,
installing and testing TripleO.

* Tripleo-quickstart[6] - uses the micro roles for deploying TripleO
and testing it.

As I said, I didn't use the tools, so feel free to add more information
you think is relevant.

=== More? ===
I hope not. Let us know if you are familiar with more tools.

Conclusion
--------------
So as you can see, there are several projects that eventually overlap
in many areas. Each group is basically doing the same tasks (provision
resources, build/import overcloud images, run tempest, collect logs,
etc.)

Personally, I think it's a waste of resources. For each task there are
at least two people from different groups who work on exactly the same
task. The most recent example I can give is OVB. As far as I know, both
groups are working on implementing it in their set of tools right now.

On the other hand, you can always claim: "we already tried to work on
the same framework, we failed to do it successfully" - right, but maybe
with better ground rules we can manage it. We would definitely benefit
a lot from doing that.

What's next?
----------------
So first of all, I would like to hear from you whether you think we can
collaborate once again or whether it is actually better to keep things
as they are now.

If you agree that collaboration here makes sense, maybe you have ideas
on how we can do it better this time.

I think that setting up a meeting to discuss the right architecture for
the project(s) and decide on a good review/gating process would be a
good start.

Please let me know what you think, and keep in mind that this is not
about which tool is better! As you can see, I didn't mention the time
it takes for each tool to deploy and test, and also not the full
feature list it supports.
If possible, we should keep it about collaborating and not choosing the
best tool. Our solution could be the combination of two or more tools
eventually (tripleo-red, infra-quickstart? :D )

"You may say I'm a dreamer, but I'm not the only one.
I hope some day you'll join us and the infra will be as one" :) [1] https://github.com/redhat-openstack/ansible-ovb [2] https://github.com/redhat-openstack/ansible-rhosp [3] https://github.com/redhat-openstack/octario [4] https://github.com/rhosqeauto/InfraRed [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role [6] https://github.com/openstack/tripleo-quickstart From whayutin at redhat.com Mon Aug 1 16:35:31 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 1 Aug 2016 12:35:31 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: Message-ID: On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman wrote: > Hi, > > I would like to start a discussion on the overlap between tools we > have for deploying and testing TripleO (RDO & RHOSP) in CI. > > Several months ago, we worked on one common framework for deploying > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > didn't work out well, which eventually led each group to focus on > developing other existing/new tools. > > What we have right now for deploying and testing > -------------------------------------------------------- > === Component CI, Gating === > I'll start with the projects we created, I think that's only fair :) > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB project. > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release. > > * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, > csit) + Patching RPMs with submitted code. > > === Automation, QE === > * InfraRed[4] - provision install and test. Pluggable and modular, > allows you to create your own provisioner, installer and tester. > > As far as I know, the groups is working now on different structure of > one main project and three sub projects (provision, install and test). > > === RDO === > I didn't use RDO tools, so I apologize if I got something wrong: > > * About ~25 micro independent Ansible roles[5]. You can either choose > to use one of them or several together. They are used for > provisioning, installing and testing Tripleo. > > * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo > and test it. > > As I said, I didn't use the tools, so feel free to add more > information you think is relevant. > > === More? === > I hope not. Let us know if are familiar with more tools. > > Conclusion > -------------- > So as you can see, there are several projects that eventually overlap > in many areas. Each group is basically using the same tasks (provision > resources, build/import overcloud images, run tempest, collect logs, > etc.) > > Personally, I think it's a waste of resources. For each task there is > at least two people from different groups who work on exactly the same > task. The most recent example I can give is OVB. As far as I know, > both groups are working on implementing it in their set of tools right > now. > > On the other hand, you can always claim: "we already tried to work on > the same framework, we failed to do it successfully" - right, but > maybe with better ground rules we can manage it. We would defiantly > benefit a lot from doing that. > > What's next? > ---------------- > So first of all, I would like to hear from you if you think that we > can collaborate once again or is it actually better to keep it as it > is now. > +1 on collaboration, with that being said I can't support forcing groups to use one tool or another. Forcing the issue only builds resentment across teams. 
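For readers who have not used the RDO tooling referenced above: tripleo-quickstart drives a virtual TripleO deployment from a single wrapper script built on the Ansible micro-roles. A minimal sketch only, assuming a CentOS 7 virthost reachable as root over SSH; the host name below is a placeholder and option names vary between releases, so check the project README rather than copying this verbatim:

    # Sketch: point the quickstart wrapper at a libvirt-capable virthost.
    git clone https://github.com/openstack/tripleo-quickstart
    cd tripleo-quickstart
    export VIRTHOST=virthost.example.com    # hypothetical host name
    # Provisions an undercloud VM on $VIRTHOST using the Ansible
    # micro-roles; follow the script's output for the overcloud steps.
    bash quickstart.sh --release mitaka $VIRTHOST

The same micro-roles can also be consumed individually from their own repositories, which is the distinction drawn in the list above.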
> > If you agree that collaboration here makes sense, maybe you have ideas > on how we can do it better this time. > > I think that setting up a meeting to discuss the right architecture > for the project(s) and decide on good review/gating process, would be > a good start. > Not sure why upstream tripleo is left out of this discussion. Ideally if possible we all need to be using upstream CI tools where we can, and if you can't use upstream tools do your best to move the tool(s) upstream. > > Please let me know what do you think and keep in mind that this is not > about which tool is better!. As you can see I didn't mention the time > it takes for each tool to deploy and test, and also not the full > feature list it supports. > If possible, we should keep it about collaborating and not choosing > the best tool. Our solution could be the combination of two or more > tools eventually (tripleo-red, infra-quickstart? :D ) > > "You may say I'm a dreamer, but I'm not the only one. I hope some day > you'll join us and the infra will be as one" :) > Thanks Arie! > > [1] https://github.com/redhat-openstack/ansible-ovb > [2] https://github.com/redhat-openstack/ansible-rhosp > [3] https://github.com/redhat-openstack/octario > [4] https://github.com/rhosqeauto/InfraRed > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role > [6] https://github.com/openstack/tripleo-quickstart > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Mon Aug 1 16:39:26 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Mon, 1 Aug 2016 18:39:26 +0200 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: Message-ID: <8873d52c-e0ad-c870-d528-73a9b6f1fc53@redhat.com> On 01/08/2016 17:21, Arie Bregman wrote: > Hi, > > I would like to start a discussion on the overlap between tools we > have for deploying and testing TripleO (RDO & RHOSP) in CI. > > Several months ago, we worked on one common framework for deploying > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > didn't work out well, which eventually led each group to focus on > developing other existing/new tools. > > What we have right now for deploying and testing > -------------------------------------------------------- > === Component CI, Gating === > I'll start with the projects we created, I think that's only fair :) > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB project. > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release. > > * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, > csit) + Patching RPMs with submitted code. > > === Automation, QE === > * InfraRed[4] - provision install and test. Pluggable and modular, > allows you to create your own provisioner, installer and tester. > > As far as I know, the groups is working now on different structure of > one main project and three sub projects (provision, install and test). > > === RDO === > I didn't use RDO tools, so I apologize if I got something wrong: > > * About ~25 micro independent Ansible roles[5]. You can either choose > to use one of them or several together. They are used for > provisioning, installing and testing Tripleo. > > * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo > and test it. > > As I said, I didn't use the tools, so feel free to add more > information you think is relevant. > > === More? === > I hope not. Let us know if are familiar with more tools. 
First of all, Arie, thanks for starting this discussion.

Just as an addition: at the moment we (at HA-CI) are using this [1] to
test OSP downstream (but it supports upstream). For the RDO stream we
are using quickstart with success, after developing a specific role to
deploy on baremetal [2].

I understand your point and I agree with it all down the line;
unfortunately I don't have answers, but I can share my experience. The
main problem I found was that it was very difficult to find a starting
point for a newbie. I'm talking for myself of course, but for me it was
quicker to deploy the scripts in [1] (over 8 months ago) than to choose
a suitable tool from the table. I know that this may sound like a cat
biting its own tail, but I think that without a common entry point it
will always be difficult for anyone to start contributing and
collaborating on the existing problems.

I mean, if today someone wants to start playing with tripleo/director
to deploy something useful (say, for example, a high availability
undercloud), what will be the official suggestion for getting a
deployed environment? Does he have to use Octario? But what if he needs
provisioning? Does he have to use quickstart? But then what if he needs
to use baremetal?

I don't want to add entropy to the discussion, I'm just saying that
finding something that covers everything is more than complicated at
this point. For sure it does not justify creating a new tool for each
need, but in some way it explains it. Maybe it would be useful to have
some kind of matrix representing needs vs. tools, to be used as a
starting point for newbies and as a way to find common points and see
what can be done to unify the tools.

These are my two cents, sorry for being so long.

[1] https://github.com/rscarazz/tripleo-director-installer
[2] https://github.com/redhat-openstack/ansible-role-tripleo-baremetal-undercloud

-- 
Raoul Scarazzini
rasca at redhat.com

> Conclusion
> --------------
> So as you can see, there are several projects that eventually overlap
> in many areas. Each group is basically using the same tasks (provision
> resources, build/import overcloud images, run tempest, collect logs,
> etc.)
>
> Personally, I think it's a waste of resources. For each task there is
> at least two people from different groups who work on exactly the same
> task. The most recent example I can give is OVB. As far as I know,
> both groups are working on implementing it in their set of tools right
> now.
>
> On the other hand, you can always claim: "we already tried to work on
> the same framework, we failed to do it successfully" - right, but
> maybe with better ground rules we can manage it. We would defiantly
> benefit a lot from doing that.
>
> What's next?
> ----------------
> So first of all, I would like to hear from you if you think that we
> can collaborate once again or is it actually better to keep it as it
> is now.
>
> If you agree that collaboration here makes sense, maybe you have ideas
> on how we can do it better this time.
>
> I think that setting up a meeting to discuss the right architecture
> for the project(s) and decide on good review/gating process, would be
> a good start.
>
> Please let me know what do you think and keep in mind that this is not
> about which tool is better!. As you can see I didn't mention the time
> it takes for each tool to deploy and test, and also not the full
> feature list it supports.
> If possible, we should keep it about collaborating and not choosing
> the best tool.
> Our solution could be the combination of two or more
> tools eventually (tripleo-red, infra-quickstart? :D )
>
> "You may say I'm a dreamer, but I'm not the only one. I hope some day
> you'll join us and the infra will be as one" :)
>
> [1] https://github.com/redhat-openstack/ansible-ovb
> [2] https://github.com/redhat-openstack/ansible-rhosp
> [3] https://github.com/redhat-openstack/octario
> [4] https://github.com/rhosqeauto/InfraRed
> [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role
> [6] https://github.com/openstack/tripleo-quickstart
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From dms at redhat.com Mon Aug 1 17:07:02 2016
From: dms at redhat.com (David Moreau Simard)
Date: Mon, 1 Aug 2016 13:07:02 -0400
Subject: [rdo-list] Multiple tools for deploying and testing TripleO
In-Reply-To: 
References: 
Message-ID: 

The vast majority of RDO's CI relies on using upstream
installation/deployment projects in order to test the installation of
RDO packages in different ways and configurations.

Unless I'm mistaken, TripleO Quickstart was originally created as a
means to "easily" install TripleO in different topologies without
requiring a massive amount of hardware. This project allows us to test
TripleO in virtual deployments on just one server instead of, say, 6.

There's also WeIRDO [1], which was left out of your list. WeIRDO is
super simple and simply aims to run upstream gate jobs (such as
puppet-openstack-integration [2][3] and packstack [4][5]) outside of
the gate. It'll install the dependencies that are expected to be there
(i.e., usually set up by the openstack-infra gate preparation jobs),
set up the trunk repositories we're interested in testing, and the rest
is handled by the upstream project's testing framework.

The WeIRDO project is /very/ low maintenance and brings an exceptional
amount of coverage and value. This coverage is important because RDO
provides OpenStack packages or projects that are not necessarily used
by TripleO, and the reality is that not everyone deploying OpenStack on
CentOS with RDO will be using TripleO.

Anyway, sorry for sidetracking, but back to the topic: thanks for
opening the discussion.

What honestly perplexes me about the situation of CI in RDO and OSP,
especially around TripleO/Director, is the amount of work that is spent
downstream. And by downstream, here, I mean anything that isn't in
TripleO proper.

I keep dreaming about how awesome upstream TripleO CI would be if all
that effort was spent directly there instead -- and then all that work
could bear fruit and trickle down downstream for free. Exactly like how
we keep improving the testing coverage in puppet-openstack-integration,
it's automatically pulled into RDO CI through WeIRDO for free. We make
the upstream better and we benefit from it simultaneously:
everyone wins.
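To make the WeIRDO pattern above concrete, it boils down to cloning the project, pointing Ansible at a test node, and running the playbook that wraps the chosen upstream gate job. A rough sketch only; the inventory and playbook names here are illustrative placeholders rather than the project's actual file names (see the weirdo repository in [1] below for the real entry points):

    # Sketch of the WeIRDO workflow; bracketed names are placeholders,
    # not taken from the real repository layout.
    git clone https://github.com/rdo-infra/weirdo
    cd weirdo
    # <inventory> lists the node to test on; <gate-job>.yml wraps one of
    # the upstream jobs (packstack or puppet-openstack-integration).
    ansible-playbook -i <inventory> playbooks/<gate-job>.yml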
[1]: https://github.com/rdo-infra/weirdo [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack [3]: https://github.com/openstack/puppet-openstack-integration#description [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack [5]: https://github.com/openstack/packstack#packstack-integration-tests David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman wrote: > Hi, > > I would like to start a discussion on the overlap between tools we > have for deploying and testing TripleO (RDO & RHOSP) in CI. > > Several months ago, we worked on one common framework for deploying > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > didn't work out well, which eventually led each group to focus on > developing other existing/new tools. > > What we have right now for deploying and testing > -------------------------------------------------------- > === Component CI, Gating === > I'll start with the projects we created, I think that's only fair :) > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB project. > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release. > > * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, > csit) + Patching RPMs with submitted code. > > === Automation, QE === > * InfraRed[4] - provision install and test. Pluggable and modular, > allows you to create your own provisioner, installer and tester. > > As far as I know, the groups is working now on different structure of > one main project and three sub projects (provision, install and test). > > === RDO === > I didn't use RDO tools, so I apologize if I got something wrong: > > * About ~25 micro independent Ansible roles[5]. You can either choose > to use one of them or several together. They are used for > provisioning, installing and testing Tripleo. > > * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo > and test it. > > As I said, I didn't use the tools, so feel free to add more > information you think is relevant. > > === More? === > I hope not. Let us know if are familiar with more tools. > > Conclusion > -------------- > So as you can see, there are several projects that eventually overlap > in many areas. Each group is basically using the same tasks (provision > resources, build/import overcloud images, run tempest, collect logs, > etc.) > > Personally, I think it's a waste of resources. For each task there is > at least two people from different groups who work on exactly the same > task. The most recent example I can give is OVB. As far as I know, > both groups are working on implementing it in their set of tools right > now. > > On the other hand, you can always claim: "we already tried to work on > the same framework, we failed to do it successfully" - right, but > maybe with better ground rules we can manage it. We would defiantly > benefit a lot from doing that. > > What's next? > ---------------- > So first of all, I would like to hear from you if you think that we > can collaborate once again or is it actually better to keep it as it > is now. > > If you agree that collaboration here makes sense, maybe you have ideas > on how we can do it better this time. > > I think that setting up a meeting to discuss the right architecture > for the project(s) and decide on good review/gating process, would be > a good start. 
> > Please let me know what do you think and keep in mind that this is not > about which tool is better!. As you can see I didn't mention the time > it takes for each tool to deploy and test, and also not the full > feature list it supports. > If possible, we should keep it about collaborating and not choosing > the best tool. Our solution could be the combination of two or more > tools eventually (tripleo-red, infra-quickstart? :D ) > > "You may say I'm a dreamer, but I'm not the only one. I hope some day > you'll join us and the infra will be as one" :) > > [1] https://github.com/redhat-openstack/ansible-ovb > [2] https://github.com/redhat-openstack/ansible-rhosp > [3] https://github.com/redhat-openstack/octario > [4] https://github.com/rhosqeauto/InfraRed > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role > [6] https://github.com/openstack/tripleo-quickstart > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ibravo at ltgfederal.com Mon Aug 1 17:59:36 2016 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Mon, 1 Aug 2016 13:59:36 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: Message-ID: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> If we are talking about tools, I would also want to add something with regards to user interface of these tools. This is based on my own experience: I started trying to deploy Openstack with Staypuft and The Foreman. The UI of The Foreman was intuitive enough for the discovery and provisioning of the servers. The OpenStack portion, not so much. Forward a couple of releases and we had a TripleO GUI (Tuskar, I believe) that allowed you to graphically build your Openstack cloud. That was a reasonable good GUI for Openstack. Following that, TripleO become a script based installer, that required experience in Heat templates. I know I didn?t have it and had to ask in the mailing list about how to present this or change that. I got a couple of installs working with this setup. In the last session in Austin, my goal was to obtain information on how others were installing Openstack. I was pointed to Fuel as an alternative. I tried it up, and it just worked. It had the discovering capability from The Foreman, and the configuration options from TripleO. I understand that is based in Ansible and because of that, it is not fully CentOS ready for all the nodes (at least not in version 9 that I tried). In any case, as a deployer and installer, it is the most well rounded tool that I found. I?d love to see RDO moving into that direction, and having an easy to use, end user ready deployer tool. IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com > On Aug 1, 2016, at 1:07 PM, David Moreau Simard wrote: > > The vast majority of RDO's CI relies on using upstream > installation/deployment projects in order to test installation of RDO > packages in different ways and configurations. > > Unless I'm mistaken, TripleO Quickstart was originally created as a > mean to "easily" install TripleO in different topologies without > requiring a massive amount of hardware. > This project allows us to test TripleO in virtual deployments on just > one server instead of, say, 6. > > There's also WeIRDO [1] which was left out of your list. 
> WeIRDO is super simple and simply aims to run upstream gate jobs (such > as puppet-openstack-integration [2][3] and packstack [4][5]) outside > of the gate. > It'll install dependencies that are expected to be there (i.e, usually > set up by the openstack-infra gate preparation jobs), set up the trunk > repositories we're interested in testing and the rest is handled by > the upstream project testing framework. > > The WeIRDO project is /very/ low maintenance and brings an exceptional > amount of coverage and value. > This coverage is important because RDO provides OpenStack packages or > projects that are not necessarily used by TripleO and the reality is > that not everyone deploying OpenStack on CentOS with RDO will be using > TripleO. > > Anyway, sorry for sidetracking but back to the topic, thanks for > opening the discussion. > > What honestly perplexes me is the situation of CI in RDO and OSP, > especially around TripleO/Director, is the amount of work that is > spent downstream. > And by downstream, here, I mean anything that isn't in TripleO proper. > > I keep dreaming about how awesome upstream TripleO CI would be if all > that effort was spent directly there instead -- and then that all work > could bear fruit and trickle down downstream for free. > Exactly like how we keep improving the testing coverage in > puppet-openstack-integration, it's automatically pulled in RDO CI > through WeIRDO for free. > We make the upstream better and we benefit from it simultaneously: > everyone wins. > > [1]: https://github.com/rdo-infra/weirdo > [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack > [3]: https://github.com/openstack/puppet-openstack-integration#description > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > [5]: https://github.com/openstack/packstack#packstack-integration-tests > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman wrote: >> Hi, >> >> I would like to start a discussion on the overlap between tools we >> have for deploying and testing TripleO (RDO & RHOSP) in CI. >> >> Several months ago, we worked on one common framework for deploying >> and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >> didn't work out well, which eventually led each group to focus on >> developing other existing/new tools. >> >> What we have right now for deploying and testing >> -------------------------------------------------------- >> === Component CI, Gating === >> I'll start with the projects we created, I think that's only fair :) >> >> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB project. >> >> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release. >> >> * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, >> csit) + Patching RPMs with submitted code. >> >> === Automation, QE === >> * InfraRed[4] - provision install and test. Pluggable and modular, >> allows you to create your own provisioner, installer and tester. >> >> As far as I know, the groups is working now on different structure of >> one main project and three sub projects (provision, install and test). >> >> === RDO === >> I didn't use RDO tools, so I apologize if I got something wrong: >> >> * About ~25 micro independent Ansible roles[5]. You can either choose >> to use one of them or several together. 
They are used for >> provisioning, installing and testing Tripleo. >> >> * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo >> and test it. >> >> As I said, I didn't use the tools, so feel free to add more >> information you think is relevant. >> >> === More? === >> I hope not. Let us know if are familiar with more tools. >> >> Conclusion >> -------------- >> So as you can see, there are several projects that eventually overlap >> in many areas. Each group is basically using the same tasks (provision >> resources, build/import overcloud images, run tempest, collect logs, >> etc.) >> >> Personally, I think it's a waste of resources. For each task there is >> at least two people from different groups who work on exactly the same >> task. The most recent example I can give is OVB. As far as I know, >> both groups are working on implementing it in their set of tools right >> now. >> >> On the other hand, you can always claim: "we already tried to work on >> the same framework, we failed to do it successfully" - right, but >> maybe with better ground rules we can manage it. We would defiantly >> benefit a lot from doing that. >> >> What's next? >> ---------------- >> So first of all, I would like to hear from you if you think that we >> can collaborate once again or is it actually better to keep it as it >> is now. >> >> If you agree that collaboration here makes sense, maybe you have ideas >> on how we can do it better this time. >> >> I think that setting up a meeting to discuss the right architecture >> for the project(s) and decide on good review/gating process, would be >> a good start. >> >> Please let me know what do you think and keep in mind that this is not >> about which tool is better!. As you can see I didn't mention the time >> it takes for each tool to deploy and test, and also not the full >> feature list it supports. >> If possible, we should keep it about collaborating and not choosing >> the best tool. Our solution could be the combination of two or more >> tools eventually (tripleo-red, infra-quickstart? :D ) >> >> "You may say I'm a dreamer, but I'm not the only one. I hope some day >> you'll join us and the infra will be as one" :) >> >> [1] https://github.com/redhat-openstack/ansible-ovb >> [2] https://github.com/redhat-openstack/ansible-rhosp >> [3] https://github.com/redhat-openstack/octario >> [4] https://github.com/rhosqeauto/InfraRed >> [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role >> [6] https://github.com/openstack/tripleo-quickstart >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Aug 1 18:16:00 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 1 Aug 2016 14:16:00 -0400 Subject: [rdo-list] Upcoming OpenStack Meetups, week of Aug 1 Message-ID: The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. 
If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Wednesday August 03 in Richardson, TX, US: Topic: Taming your relational and non-relational databases with OpenStack Trove - http://www.meetup.com/OpenStack-DFW/events/231162206/ * Wednesday August 03 in Singapore, SG: 8 Shenton Way, #10-00 AXA Tower Red Hat Asia Pacific Pte Ltd, Singapore - http://www.meetup.com/OpenStack-Singapore/events/232531940/ * Wednesday August 03 in Singapore, SG: Welcome, Stackers! - http://www.meetup.com/Singapore-OPENSTACK-Evangelist-Meetup/events/232531579/ * Wednesday August 03 in Singapore, SG: Welcome, Stackers! - http://www.meetup.com/Singapore-OpenStack-User-Group-Meetup/events/232531629/ * Wednesday August 03 in Tel Aviv-Yafo, IL: OpenStack Certified Administrator Exam and Ansible Automation with Rackspace - http://www.meetup.com/OpenStack-Israel/events/232840427/ * Thursday August 04 in Los Angeles, CA, US: RED HAT STORAGE DAY ? Los Angeles - http://www.meetup.com/Red-Hat-Los-Angeles/events/232762414/ * Thursday August 04 in Fort Lauderdale, FL, US: SFOUG Presentations - http://www.meetup.com/South-Florida-OpenStack-Users-Group/events/232450735/ * Thursday August 04 in Wellington, NZ: Automating in the OpenStack Cloud in Auckland - http://www.meetup.com/New-Zealand-OpenStack-User-Group/events/232796455/ * Thursday August 04 in Wroclaw, PL: OpenStack Wroc?aw Meetup #2 - http://www.meetup.com/Wroclaw-OpenStack-Meetup/events/232584503/ * Thursday August 04 in Austin, TX, US: Magnum: Openstack containers as a service. - http://www.meetup.com/Docker-Austin/events/232067917/ * Saturday August 06 in Orlando, FL, US: OpenStack Build Day! - http://www.meetup.com/Orlando-Central-Florida-OpenStack-Meetup/events/232805347/ * Sunday August 07 in Tambaram, IN: Swift Storage and its architecture - http://www.meetup.com/CloudnLoud-Openstack-Cloud-RedHat-Opensource/events/232847173/ -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From mohammed.arafa at gmail.com Mon Aug 1 20:45:27 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 1 Aug 2016 16:45:27 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: I too am an end user and have a similar story. I had tried packstack all in one but when it was time to deploy to actual servers I looked to Ubuntu Maas. It was buggy so after a month or so of several attempts I went to RDO. And was happy when I had my environment up. But it was not reproducible. I spent months trying. And finally I looked elsewhere and was told fuel. With fuel I have ha and ceph and live migration with in 2 hours. And repeatable too And yes. When tripleo quick start showed up. I did not even look at it. Information overload? Too much time spent evaluating and too little building something productive? And now I hear of even more. In honesty with the rename of RDO to triple o is there any need for an installer? /outburst over On Aug 1, 2016 2:01 PM, "Ignacio Bravo" wrote: > If we are talking about tools, I would also want to add something with > regards to user interface of these tools. This is based on my own > experience: > > I started trying to deploy Openstack with Staypuft and The Foreman. The UI > of The Foreman was intuitive enough for the discovery and provisioning of > the servers. 
The OpenStack portion, not so much. > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I believe) > that allowed you to graphically build your Openstack cloud. That was a > reasonable good GUI for Openstack. > > Following that, TripleO become a script based installer, that required > experience in Heat templates. I know I didn?t have it and had to ask in the > mailing list about how to present this or change that. I got a couple of > installs working with this setup. > > In the last session in Austin, my goal was to obtain information on how > others were installing Openstack. I was pointed to Fuel as an alternative. > I tried it up, and it just worked. It had the discovering capability from > The Foreman, and the configuration options from TripleO. I understand that > is based in Ansible and because of that, it is not fully CentOS ready for > all the nodes (at least not in version 9 that I tried). In any case, as a > deployer and installer, it is the most well rounded tool that I found. > > I?d love to see RDO moving into that direction, and having an easy to use, > end user ready deployer tool. > > IB > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > > On Aug 1, 2016, at 1:07 PM, David Moreau Simard wrote: > > The vast majority of RDO's CI relies on using upstream > installation/deployment projects in order to test installation of RDO > packages in different ways and configurations. > > Unless I'm mistaken, TripleO Quickstart was originally created as a > mean to "easily" install TripleO in different topologies without > requiring a massive amount of hardware. > This project allows us to test TripleO in virtual deployments on just > one server instead of, say, 6. > > There's also WeIRDO [1] which was left out of your list. > WeIRDO is super simple and simply aims to run upstream gate jobs (such > as puppet-openstack-integration [2][3] and packstack [4][5]) outside > of the gate. > It'll install dependencies that are expected to be there (i.e, usually > set up by the openstack-infra gate preparation jobs), set up the trunk > repositories we're interested in testing and the rest is handled by > the upstream project testing framework. > > The WeIRDO project is /very/ low maintenance and brings an exceptional > amount of coverage and value. > This coverage is important because RDO provides OpenStack packages or > projects that are not necessarily used by TripleO and the reality is > that not everyone deploying OpenStack on CentOS with RDO will be using > TripleO. > > Anyway, sorry for sidetracking but back to the topic, thanks for > opening the discussion. > > What honestly perplexes me is the situation of CI in RDO and OSP, > especially around TripleO/Director, is the amount of work that is > spent downstream. > And by downstream, here, I mean anything that isn't in TripleO proper. > > I keep dreaming about how awesome upstream TripleO CI would be if all > that effort was spent directly there instead -- and then that all work > could bear fruit and trickle down downstream for free. > Exactly like how we keep improving the testing coverage in > puppet-openstack-integration, it's automatically pulled in RDO CI > through WeIRDO for free. > We make the upstream better and we benefit from it simultaneously: > everyone wins. 
> > [1]: https://github.com/rdo-infra/weirdo > [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack > [3]: https://github.com/openstack/puppet-openstack-integration#description > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > [5]: https://github.com/openstack/packstack#packstack-integration-tests > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman wrote: > > Hi, > > I would like to start a discussion on the overlap between tools we > have for deploying and testing TripleO (RDO & RHOSP) in CI. > > Several months ago, we worked on one common framework for deploying > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > didn't work out well, which eventually led each group to focus on > developing other existing/new tools. > > What we have right now for deploying and testing > -------------------------------------------------------- > === Component CI, Gating === > I'll start with the projects we created, I think that's only fair :) > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB project. > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release. > > * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, > csit) + Patching RPMs with submitted code. > > === Automation, QE === > * InfraRed[4] - provision install and test. Pluggable and modular, > allows you to create your own provisioner, installer and tester. > > As far as I know, the groups is working now on different structure of > one main project and three sub projects (provision, install and test). > > === RDO === > I didn't use RDO tools, so I apologize if I got something wrong: > > * About ~25 micro independent Ansible roles[5]. You can either choose > to use one of them or several together. They are used for > provisioning, installing and testing Tripleo. > > * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo > and test it. > > As I said, I didn't use the tools, so feel free to add more > information you think is relevant. > > === More? === > I hope not. Let us know if are familiar with more tools. > > Conclusion > -------------- > So as you can see, there are several projects that eventually overlap > in many areas. Each group is basically using the same tasks (provision > resources, build/import overcloud images, run tempest, collect logs, > etc.) > > Personally, I think it's a waste of resources. For each task there is > at least two people from different groups who work on exactly the same > task. The most recent example I can give is OVB. As far as I know, > both groups are working on implementing it in their set of tools right > now. > > On the other hand, you can always claim: "we already tried to work on > the same framework, we failed to do it successfully" - right, but > maybe with better ground rules we can manage it. We would defiantly > benefit a lot from doing that. > > What's next? > ---------------- > So first of all, I would like to hear from you if you think that we > can collaborate once again or is it actually better to keep it as it > is now. > > If you agree that collaboration here makes sense, maybe you have ideas > on how we can do it better this time. 
> > I think that setting up a meeting to discuss the right architecture > for the project(s) and decide on good review/gating process, would be > a good start. > > Please let me know what do you think and keep in mind that this is not > about which tool is better!. As you can see I didn't mention the time > it takes for each tool to deploy and test, and also not the full > feature list it supports. > If possible, we should keep it about collaborating and not choosing > the best tool. Our solution could be the combination of two or more > tools eventually (tripleo-red, infra-quickstart? :D ) > > "You may say I'm a dreamer, but I'm not the only one. I hope some day > you'll join us and the infra will be as one" :) > > [1] https://github.com/redhat-openstack/ansible-ovb > [2] https://github.com/redhat-openstack/ansible-rhosp > [3] https://github.com/redhat-openstack/octario > [4] https://github.com/rhosqeauto/InfraRed > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role > [6] https://github.com/openstack/tripleo-quickstart > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Aug 1 20:51:09 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 1 Aug 2016 16:51:09 -0400 Subject: [rdo-list] What are you working on in Newton? Message-ID: In the weeks after the Mitaka release, I did a number of interviews about what people had been working on for that release[1]. I'd like to get a bit of a jump on this for Newton. If you would like to talk briefly (10 - 30 minutes) about what you're doing for the Newton release, and/or what exciting things will be in this version, please let me know (off-list) and we'll schedule something. Thanks! -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity [1] https://dmsimard.com/2016/05/15/what-did-everyone-do-for-the-mitaka-release-of-openstack/ From pgsousa at gmail.com Mon Aug 1 22:01:05 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Mon, 1 Aug 2016 23:01:05 +0100 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: My 2 cents here as an operator/integrator, since I've been using the CentOS SIG repositories (mitaka) and following the RHEL Oficial Documentation, I've managed to install several baremetal tripleo based clouds with success. I've not tried tripleo quickstart, I've also tried Fuel in the past and it works pretty well with the plugin architecture and the network validation among other things, but still I prefer tripleo, it gives me more flexibility to setup the network the way I want it, and using ironic to provision the baremetal hosts is pretty cool too. Also personally I prefer to use Centos than Ubuntu as O.S base system, I find it more stable. 
Still tripleo lacks the ease of installation that Fuel has, and an UI would be great. Also, I'm not sure that using heat templates is the best approach, specially when someone makes a mistake editing the yaml files and stack returns an error. This could happen when you try to update the overcloud nodes, scaling the compute nodes for example. It's not easy to revert the heat stack when you make a mistake. There's a lot of room to improve, specially in terms of complexity of installation and update. Maybe containers (kolla) could be a good approach in the future? On Mon, Aug 1, 2016 at 9:45 PM, Mohammed Arafa wrote: > I too am an end user and have a similar story. I had tried packstack all > in one but when it was time to deploy to actual servers I looked to Ubuntu > Maas. It was buggy so after a month or so of several attempts I went to > RDO. And was happy when I had my environment up. But it was not > reproducible. I spent months trying. And finally I looked elsewhere and was > told fuel. > With fuel I have ha and ceph and live migration with in 2 hours. And > repeatable too > > And yes. When tripleo quick start showed up. I did not even look at it. > Information overload? Too much time spent evaluating and too little > building something productive? And now I hear of even more. > > In honesty with the rename of RDO to triple o is there any need for an > installer? > > /outburst over > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" wrote: > >> If we are talking about tools, I would also want to add something with >> regards to user interface of these tools. This is based on my own >> experience: >> >> I started trying to deploy Openstack with Staypuft and The Foreman. The >> UI of The Foreman was intuitive enough for the discovery and provisioning >> of the servers. The OpenStack portion, not so much. >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I believe) >> that allowed you to graphically build your Openstack cloud. That was a >> reasonable good GUI for Openstack. >> >> Following that, TripleO become a script based installer, that required >> experience in Heat templates. I know I didn?t have it and had to ask in the >> mailing list about how to present this or change that. I got a couple of >> installs working with this setup. >> >> In the last session in Austin, my goal was to obtain information on how >> others were installing Openstack. I was pointed to Fuel as an alternative. >> I tried it up, and it just worked. It had the discovering capability from >> The Foreman, and the configuration options from TripleO. I understand that >> is based in Ansible and because of that, it is not fully CentOS ready for >> all the nodes (at least not in version 9 that I tried). In any case, as a >> deployer and installer, it is the most well rounded tool that I found. >> >> I?d love to see RDO moving into that direction, and having an easy to >> use, end user ready deployer tool. >> >> IB >> >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> >> >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard wrote: >> >> The vast majority of RDO's CI relies on using upstream >> installation/deployment projects in order to test installation of RDO >> packages in different ways and configurations. >> >> Unless I'm mistaken, TripleO Quickstart was originally created as a >> mean to "easily" install TripleO in different topologies without >> requiring a massive amount of hardware. 
>> This project allows us to test TripleO in virtual deployments on just >> one server instead of, say, 6. >> >> There's also WeIRDO [1] which was left out of your list. >> WeIRDO is super simple and simply aims to run upstream gate jobs (such >> as puppet-openstack-integration [2][3] and packstack [4][5]) outside >> of the gate. >> It'll install dependencies that are expected to be there (i.e, usually >> set up by the openstack-infra gate preparation jobs), set up the trunk >> repositories we're interested in testing and the rest is handled by >> the upstream project testing framework. >> >> The WeIRDO project is /very/ low maintenance and brings an exceptional >> amount of coverage and value. >> This coverage is important because RDO provides OpenStack packages or >> projects that are not necessarily used by TripleO and the reality is >> that not everyone deploying OpenStack on CentOS with RDO will be using >> TripleO. >> >> Anyway, sorry for sidetracking but back to the topic, thanks for >> opening the discussion. >> >> What honestly perplexes me is the situation of CI in RDO and OSP, >> especially around TripleO/Director, is the amount of work that is >> spent downstream. >> And by downstream, here, I mean anything that isn't in TripleO proper. >> >> I keep dreaming about how awesome upstream TripleO CI would be if all >> that effort was spent directly there instead -- and then that all work >> could bear fruit and trickle down downstream for free. >> Exactly like how we keep improving the testing coverage in >> puppet-openstack-integration, it's automatically pulled in RDO CI >> through WeIRDO for free. >> We make the upstream better and we benefit from it simultaneously: >> everyone wins. >> >> [1]: https://github.com/rdo-infra/weirdo >> [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack >> [3]: >> https://github.com/openstack/puppet-openstack-integration#description >> [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> [5]: https://github.com/openstack/packstack#packstack-integration-tests >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman >> wrote: >> >> Hi, >> >> I would like to start a discussion on the overlap between tools we >> have for deploying and testing TripleO (RDO & RHOSP) in CI. >> >> Several months ago, we worked on one common framework for deploying >> and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >> didn't work out well, which eventually led each group to focus on >> developing other existing/new tools. >> >> What we have right now for deploying and testing >> -------------------------------------------------------- >> === Component CI, Gating === >> I'll start with the projects we created, I think that's only fair :) >> >> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB project. >> >> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release. >> >> * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, >> csit) + Patching RPMs with submitted code. >> >> === Automation, QE === >> * InfraRed[4] - provision install and test. Pluggable and modular, >> allows you to create your own provisioner, installer and tester. 
>> >> As far as I know, the groups is working now on different structure of >> one main project and three sub projects (provision, install and test). >> >> === RDO === >> I didn't use RDO tools, so I apologize if I got something wrong: >> >> * About ~25 micro independent Ansible roles[5]. You can either choose >> to use one of them or several together. They are used for >> provisioning, installing and testing Tripleo. >> >> * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo >> and test it. >> >> As I said, I didn't use the tools, so feel free to add more >> information you think is relevant. >> >> === More? === >> I hope not. Let us know if are familiar with more tools. >> >> Conclusion >> -------------- >> So as you can see, there are several projects that eventually overlap >> in many areas. Each group is basically using the same tasks (provision >> resources, build/import overcloud images, run tempest, collect logs, >> etc.) >> >> Personally, I think it's a waste of resources. For each task there is >> at least two people from different groups who work on exactly the same >> task. The most recent example I can give is OVB. As far as I know, >> both groups are working on implementing it in their set of tools right >> now. >> >> On the other hand, you can always claim: "we already tried to work on >> the same framework, we failed to do it successfully" - right, but >> maybe with better ground rules we can manage it. We would defiantly >> benefit a lot from doing that. >> >> What's next? >> ---------------- >> So first of all, I would like to hear from you if you think that we >> can collaborate once again or is it actually better to keep it as it >> is now. >> >> If you agree that collaboration here makes sense, maybe you have ideas >> on how we can do it better this time. >> >> I think that setting up a meeting to discuss the right architecture >> for the project(s) and decide on good review/gating process, would be >> a good start. >> >> Please let me know what do you think and keep in mind that this is not >> about which tool is better!. As you can see I didn't mention the time >> it takes for each tool to deploy and test, and also not the full >> feature list it supports. >> If possible, we should keep it about collaborating and not choosing >> the best tool. Our solution could be the combination of two or more >> tools eventually (tripleo-red, infra-quickstart? :D ) >> >> "You may say I'm a dreamer, but I'm not the only one. 
I hope some day >> you'll join us and the infra will be as one" :) >> >> [1] https://github.com/redhat-openstack/ansible-ovb >> [2] https://github.com/redhat-openstack/ansible-rhosp >> [3] https://github.com/redhat-openstack/octario >> [4] https://github.com/rhosqeauto/InfraRed >> [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role >> [6] https://github.com/openstack/tripleo-quickstart >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Mon Aug 1 22:59:22 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 2 Aug 2016 08:59:22 +1000 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: On 02/08/16 03:59, Ignacio Bravo wrote: > If we are talking about tools, I would also want to add something with > regards to user interface of these tools. This is based on my own > experience: > > I started trying to deploy Openstack with Staypuft and The Foreman. The > UI of The Foreman was intuitive enough for the discovery and > provisioning of the servers. The OpenStack portion, not so much. > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I > believe) that allowed you to graphically build your Openstack cloud. > That was a reasonable good GUI for Openstack. > > Following that, TripleO become a script based installer, that required > experience in Heat templates. I know I didn?t have it and had to ask in > the mailing list about how to present this or change that. I got a > couple of installs working with this setup. > > In the last session in Austin, my goal was to obtain information on how > others were installing Openstack. I was pointed to Fuel as an > alternative. I tried it up, and it just worked. It had the discovering > capability from The Foreman, and the configuration options from TripleO. > I understand that is based in Ansible and because of that, it is not > fully CentOS ready for all the nodes (at least not in version 9 that I > tried). In any case, as a deployer and installer, it is the most well > rounded tool that I found. > > I?d love to see RDO moving into that direction, and having an easy to > use, end user ready deployer tool. > > IB Hi Ignacio, You might not be aware but currently there is work being done on a new TripleO UI (replacing Tuskar). 
https://github.com/openstack/tripleo-ui You can see a demo of it at https://www.youtube.com/watch?v=1Lc04DKGxCg Regards, Graeme > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard > > wrote: >> >> The vast majority of RDO's CI relies on using upstream >> installation/deployment projects in order to test installation of RDO >> packages in different ways and configurations. >> >> Unless I'm mistaken, TripleO Quickstart was originally created as a >> mean to "easily" install TripleO in different topologies without >> requiring a massive amount of hardware. >> This project allows us to test TripleO in virtual deployments on just >> one server instead of, say, 6. >> >> There's also WeIRDO [1] which was left out of your list. >> WeIRDO is super simple and simply aims to run upstream gate jobs (such >> as puppet-openstack-integration [2][3] and packstack [4][5]) outside >> of the gate. >> It'll install dependencies that are expected to be there (i.e, usually >> set up by the openstack-infra gate preparation jobs), set up the trunk >> repositories we're interested in testing and the rest is handled by >> the upstream project testing framework. >> >> The WeIRDO project is /very/ low maintenance and brings an exceptional >> amount of coverage and value. >> This coverage is important because RDO provides OpenStack packages or >> projects that are not necessarily used by TripleO and the reality is >> that not everyone deploying OpenStack on CentOS with RDO will be using >> TripleO. >> >> Anyway, sorry for sidetracking but back to the topic, thanks for >> opening the discussion. >> >> What honestly perplexes me is the situation of CI in RDO and OSP, >> especially around TripleO/Director, is the amount of work that is >> spent downstream. >> And by downstream, here, I mean anything that isn't in TripleO proper. >> >> I keep dreaming about how awesome upstream TripleO CI would be if all >> that effort was spent directly there instead -- and then that all work >> could bear fruit and trickle down downstream for free. >> Exactly like how we keep improving the testing coverage in >> puppet-openstack-integration, it's automatically pulled in RDO CI >> through WeIRDO for free. >> We make the upstream better and we benefit from it simultaneously: >> everyone wins. >> >> [1]: https://github.com/rdo-infra/weirdo >> [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack >> [3]: https://github.com/openstack/puppet-openstack-integration#description >> [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> [5]: https://github.com/openstack/packstack#packstack-integration-tests >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > > wrote: >>> Hi, >>> >>> I would like to start a discussion on the overlap between tools we >>> have for deploying and testing TripleO (RDO & RHOSP) in CI. >>> >>> Several months ago, we worked on one common framework for deploying >>> and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >>> didn't work out well, which eventually led each group to focus on >>> developing other existing/new tools. 
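For reference on the TripleO Quickstart workflow quoted above, the single-server virtual deployment is driven by a wrapper script in the project repository. A rough sketch of kicking it off from Python; the script name and URL follow the project README of the time as I recall it, and the virthost name is a placeholder:

    import subprocess

    VIRTHOST = "virthost.example.com"  # hypothetical libvirt host with enough free RAM for the VMs

    # Fetch the wrapper script from the tripleo-quickstart repository and point it
    # at the virt host; the script drives the Ansible roles that build an
    # undercloud VM and, optionally, a small virtual overcloud on that one machine.
    subprocess.check_call(
        ["curl", "-O",
         "https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh"])
    subprocess.check_call(["bash", "quickstart.sh", VIRTHOST])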
>>> >>> What we have right now for deploying and testing >>> -------------------------------------------------------- >>> === Component CI, Gating === >>> I'll start with the projects we created, I think that's only fair :) >>> >>> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB >>> project. >>> >>> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per release. >>> >>> * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, >>> csit) + Patching RPMs with submitted code. >>> >>> === Automation, QE === >>> * InfraRed[4] - provision install and test. Pluggable and modular, >>> allows you to create your own provisioner, installer and tester. >>> >>> As far as I know, the groups is working now on different structure of >>> one main project and three sub projects (provision, install and test). >>> >>> === RDO === >>> I didn't use RDO tools, so I apologize if I got something wrong: >>> >>> * About ~25 micro independent Ansible roles[5]. You can either choose >>> to use one of them or several together. They are used for >>> provisioning, installing and testing Tripleo. >>> >>> * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo >>> and test it. >>> >>> As I said, I didn't use the tools, so feel free to add more >>> information you think is relevant. >>> >>> === More? === >>> I hope not. Let us know if are familiar with more tools. >>> >>> Conclusion >>> -------------- >>> So as you can see, there are several projects that eventually overlap >>> in many areas. Each group is basically using the same tasks (provision >>> resources, build/import overcloud images, run tempest, collect logs, >>> etc.) >>> >>> Personally, I think it's a waste of resources. For each task there is >>> at least two people from different groups who work on exactly the same >>> task. The most recent example I can give is OVB. As far as I know, >>> both groups are working on implementing it in their set of tools right >>> now. >>> >>> On the other hand, you can always claim: "we already tried to work on >>> the same framework, we failed to do it successfully" - right, but >>> maybe with better ground rules we can manage it. We would defiantly >>> benefit a lot from doing that. >>> >>> What's next? >>> ---------------- >>> So first of all, I would like to hear from you if you think that we >>> can collaborate once again or is it actually better to keep it as it >>> is now. >>> >>> If you agree that collaboration here makes sense, maybe you have ideas >>> on how we can do it better this time. >>> >>> I think that setting up a meeting to discuss the right architecture >>> for the project(s) and decide on good review/gating process, would be >>> a good start. >>> >>> Please let me know what do you think and keep in mind that this is not >>> about which tool is better!. As you can see I didn't mention the time >>> it takes for each tool to deploy and test, and also not the full >>> feature list it supports. >>> If possible, we should keep it about collaborating and not choosing >>> the best tool. Our solution could be the combination of two or more >>> tools eventually (tripleo-red, infra-quickstart? :D ) >>> >>> "You may say I'm a dreamer, but I'm not the only one. 
I hope some day >>> you'll join us and the infra will be as one" :) >>> >>> [1] https://github.com/redhat-openstack/ansible-ovb >>> [2] https://github.com/redhat-openstack/ansible-rhosp >>> [3] https://github.com/redhat-openstack/octario >>> [4] https://github.com/rhosqeauto/InfraRed >>> [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role >>> [6] https://github.com/openstack/tripleo-quickstart >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From ggillies at redhat.com Mon Aug 1 23:10:10 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 2 Aug 2016 09:10:10 +1000 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: On 02/08/16 06:45, Mohammed Arafa wrote: > I too am an end user and have a similar story. I had tried packstack all > in one but when it was time to deploy to actual servers I looked to > Ubuntu Maas. It was buggy so after a month or so of several attempts I > went to RDO. And was happy when I had my environment up. But it was not > reproducible. I spent months trying. And finally I looked elsewhere and > was told fuel. > With fuel I have ha and ceph and live migration with in 2 hours. And > repeatable too > > And yes. When tripleo quick start showed up. I did not even look at it. > Information overload? Too much time spent evaluating and too little > building something productive? And now I hear of even more. > > In honesty with the rename of RDO to triple o is there any need for an > installer? > > /outburst over Hi, Just to clear up some confusion here, RDO and TripleO are two very different things. RDO Is an RPM distribution of Openstack that aims to track upstream Openstack as closely as possible. We provide stable RPM trees for the current stable releases of Openstack, as well as RPM trees of the latest Openstack code as it's committed upstream (through DLRN). You can deploy and manage Openstack with RDO however you would like including puppet, chef, ansible, saltstack, and RDO also works with some of the complete installer projects like TripleO and even kolla (you can see the containers created by kolla have centos/RDO options at [1]). TripleO is an Openstack project which Red Hat is largely involved in, and aims to make an Openstack installer using Openstack itself. It currently works best with RDO because that's where most of our effort is focussed, but there is no technical reason it can't work with other distros or Operating systems. Both of these pieces go into our downstream commercial Offering. RDO becomes Red Hat Openstack Platform, while TripleO becomes RHOS Director. If you wish to consume RDO there is no reason for you to be forced to use TripleO, and are welcome to use any other deployment method or tool. 
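On the kolla option mentioned a few lines up: the CentOS/RDO flavour of the images is published alongside the Ubuntu ones, so pulling one is enough to have a look. The image name and tag below are illustrative assumptions; the Docker Hub page in [1] has the real list:

    import subprocess

    # "centos-binary" images are the ones built from RDO packages; name and tag
    # here are examples only.
    IMAGE = "kolla/centos-binary-keystone:2.0.0"

    subprocess.check_call(["docker", "pull", IMAGE])
    subprocess.check_call(["docker", "images", "kolla/centos-binary-keystone"])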
We welcome people expanding RDO by utilising it however they please, deploying it however they like. I would love to hear from more people in the community who are using RDO and not using the default packstack/TripleO installers, as it allows us to get a better understanding of users needs, and helps us learn more about what tools and workflows work best for people. Regards, Graeme [1] https://hub.docker.com/u/kolla/ > > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" > wrote: > > If we are talking about tools, I would also want to add something > with regards to user interface of these tools. This is based on my > own experience: > > I started trying to deploy Openstack with Staypuft and The Foreman. > The UI of The Foreman was intuitive enough for the discovery and > provisioning of the servers. The OpenStack portion, not so much. > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I > believe) that allowed you to graphically build your Openstack cloud. > That was a reasonable good GUI for Openstack. > > Following that, TripleO become a script based installer, that > required experience in Heat templates. I know I didn?t have it and > had to ask in the mailing list about how to present this or change > that. I got a couple of installs working with this setup. > > In the last session in Austin, my goal was to obtain information on > how others were installing Openstack. I was pointed to Fuel as an > alternative. I tried it up, and it just worked. It had the > discovering capability from The Foreman, and the configuration > options from TripleO. I understand that is based in Ansible and > because of that, it is not fully CentOS ready for all the nodes (at > least not in version 9 that I tried). In any case, as a deployer and > installer, it is the most well rounded tool that I found. > > I?d love to see RDO moving into that direction, and having an easy > to use, end user ready deployer tool. > > IB > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard > > wrote: >> >> The vast majority of RDO's CI relies on using upstream >> installation/deployment projects in order to test installation of RDO >> packages in different ways and configurations. >> >> Unless I'm mistaken, TripleO Quickstart was originally created as a >> mean to "easily" install TripleO in different topologies without >> requiring a massive amount of hardware. >> This project allows us to test TripleO in virtual deployments on just >> one server instead of, say, 6. >> >> There's also WeIRDO [1] which was left out of your list. >> WeIRDO is super simple and simply aims to run upstream gate jobs (such >> as puppet-openstack-integration [2][3] and packstack [4][5]) outside >> of the gate. >> It'll install dependencies that are expected to be there (i.e, usually >> set up by the openstack-infra gate preparation jobs), set up the trunk >> repositories we're interested in testing and the rest is handled by >> the upstream project testing framework. >> >> The WeIRDO project is /very/ low maintenance and brings an exceptional >> amount of coverage and value. >> This coverage is important because RDO provides OpenStack packages or >> projects that are not necessarily used by TripleO and the reality is >> that not everyone deploying OpenStack on CentOS with RDO will be using >> TripleO. >> >> Anyway, sorry for sidetracking but back to the topic, thanks for >> opening the discussion. 
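To make the WeIRDO description above a bit more concrete: the project is essentially a set of Ansible playbooks wrapping the upstream gate jobs, so running one outside the gate looks roughly like the sketch below. The playbook filename is a guess for illustration; the actual job names live under the repository's playbooks directory:

    import subprocess

    # Clone WeIRDO and run one of its job playbooks against the local machine.
    subprocess.check_call(["git", "clone", "https://github.com/rdo-infra/weirdo"])
    subprocess.check_call([
        "ansible-playbook",
        "-i", "localhost,",   # ad-hoc one-host inventory
        "-c", "local",
        "weirdo/playbooks/puppet-openstack-scenario001.yml",  # hypothetical job name
    ])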
>> >> What honestly perplexes me is the situation of CI in RDO and OSP, >> especially around TripleO/Director, is the amount of work that is >> spent downstream. >> And by downstream, here, I mean anything that isn't in TripleO proper. >> >> I keep dreaming about how awesome upstream TripleO CI would be if all >> that effort was spent directly there instead -- and then that all work >> could bear fruit and trickle down downstream for free. >> Exactly like how we keep improving the testing coverage in >> puppet-openstack-integration, it's automatically pulled in RDO CI >> through WeIRDO for free. >> We make the upstream better and we benefit from it simultaneously: >> everyone wins. >> >> [1]: https://github.com/rdo-infra/weirdo >> [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack >> [3]: >> https://github.com/openstack/puppet-openstack-integration#description >> [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> [5]: >> https://github.com/openstack/packstack#packstack-integration-tests >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > > wrote: >>> Hi, >>> >>> I would like to start a discussion on the overlap between tools we >>> have for deploying and testing TripleO (RDO & RHOSP) in CI. >>> >>> Several months ago, we worked on one common framework for deploying >>> and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >>> didn't work out well, which eventually led each group to focus on >>> developing other existing/new tools. >>> >>> What we have right now for deploying and testing >>> -------------------------------------------------------- >>> === Component CI, Gating === >>> I'll start with the projects we created, I think that's only fair :) >>> >>> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB >>> project. >>> >>> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >>> release. >>> >>> * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, >>> csit) + Patching RPMs with submitted code. >>> >>> === Automation, QE === >>> * InfraRed[4] - provision install and test. Pluggable and modular, >>> allows you to create your own provisioner, installer and tester. >>> >>> As far as I know, the groups is working now on different structure of >>> one main project and three sub projects (provision, install and >>> test). >>> >>> === RDO === >>> I didn't use RDO tools, so I apologize if I got something wrong: >>> >>> * About ~25 micro independent Ansible roles[5]. You can either choose >>> to use one of them or several together. They are used for >>> provisioning, installing and testing Tripleo. >>> >>> * Tripleo-quickstart[6] - uses the micro roles for deploying tripleo >>> and test it. >>> >>> As I said, I didn't use the tools, so feel free to add more >>> information you think is relevant. >>> >>> === More? === >>> I hope not. Let us know if are familiar with more tools. >>> >>> Conclusion >>> -------------- >>> So as you can see, there are several projects that eventually overlap >>> in many areas. Each group is basically using the same tasks >>> (provision >>> resources, build/import overcloud images, run tempest, collect logs, >>> etc.) >>> >>> Personally, I think it's a waste of resources. 
For each task there is >>> at least two people from different groups who work on exactly the >>> same >>> task. The most recent example I can give is OVB. As far as I know, >>> both groups are working on implementing it in their set of tools >>> right >>> now. >>> >>> On the other hand, you can always claim: "we already tried to work on >>> the same framework, we failed to do it successfully" - right, but >>> maybe with better ground rules we can manage it. We would defiantly >>> benefit a lot from doing that. >>> >>> What's next? >>> ---------------- >>> So first of all, I would like to hear from you if you think that we >>> can collaborate once again or is it actually better to keep it as it >>> is now. >>> >>> If you agree that collaboration here makes sense, maybe you have >>> ideas >>> on how we can do it better this time. >>> >>> I think that setting up a meeting to discuss the right architecture >>> for the project(s) and decide on good review/gating process, would be >>> a good start. >>> >>> Please let me know what do you think and keep in mind that this >>> is not >>> about which tool is better!. As you can see I didn't mention the time >>> it takes for each tool to deploy and test, and also not the full >>> feature list it supports. >>> If possible, we should keep it about collaborating and not choosing >>> the best tool. Our solution could be the combination of two or more >>> tools eventually (tripleo-red, infra-quickstart? :D ) >>> >>> "You may say I'm a dreamer, but I'm not the only one. I hope some day >>> you'll join us and the infra will be as one" :) >>> >>> [1] https://github.com/redhat-openstack/ansible-ovb >>> [2] https://github.com/redhat-openstack/ansible-rhosp >>> [3] https://github.com/redhat-openstack/octario >>> [4] https://github.com/rhosqeauto/InfraRed >>> [5] >>> https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role >>> [6] https://github.com/openstack/tripleo-quickstart >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From mohammed.arafa at gmail.com Mon Aug 1 23:16:27 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 1 Aug 2016 19:16:27 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: does this tripleo ui integrate into horizon like tuskar did? and if i recall correclty rdo was _renamed_ to tripleO a few months back On Mon, Aug 1, 2016 at 7:10 PM, Graeme Gillies wrote: > On 02/08/16 06:45, Mohammed Arafa wrote: > > I too am an end user and have a similar story. 
I had tried packstack all > > in one but when it was time to deploy to actual servers I looked to > > Ubuntu Maas. It was buggy so after a month or so of several attempts I > > went to RDO. And was happy when I had my environment up. But it was not > > reproducible. I spent months trying. And finally I looked elsewhere and > > was told fuel. > > With fuel I have ha and ceph and live migration with in 2 hours. And > > repeatable too > > > > And yes. When tripleo quick start showed up. I did not even look at it. > > Information overload? Too much time spent evaluating and too little > > building something productive? And now I hear of even more. > > > > In honesty with the rename of RDO to triple o is there any need for an > > installer? > > > > /outburst over > > Hi, > > Just to clear up some confusion here, RDO and TripleO are two very > different things. > > RDO Is an RPM distribution of Openstack that aims to track upstream > Openstack as closely as possible. We provide stable RPM trees for the > current stable releases of Openstack, as well as RPM trees of the latest > Openstack code as it's committed upstream (through DLRN). > > You can deploy and manage Openstack with RDO however you would like > including puppet, chef, ansible, saltstack, and RDO also works with some > of the complete installer projects like TripleO and even kolla (you can > see the containers created by kolla have centos/RDO options at [1]). > > TripleO is an Openstack project which Red Hat is largely involved in, > and aims to make an Openstack installer using Openstack itself. It > currently works best with RDO because that's where most of our effort is > focussed, but there is no technical reason it can't work with other > distros or Operating systems. > > Both of these pieces go into our downstream commercial Offering. RDO > becomes Red Hat Openstack Platform, while TripleO becomes RHOS Director. > > If you wish to consume RDO there is no reason for you to be forced to > use TripleO, and are welcome to use any other deployment method or tool. > We welcome people expanding RDO by utilising it however they please, > deploying it however they like. > > I would love to hear from more people in the community who are using RDO > and not using the default packstack/TripleO installers, as it allows us > to get a better understanding of users needs, and helps us learn more > about what tools and workflows work best for people. > > Regards, > > Graeme > > [1] https://hub.docker.com/u/kolla/ > > > > > > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" > > wrote: > > > > If we are talking about tools, I would also want to add something > > with regards to user interface of these tools. This is based on my > > own experience: > > > > I started trying to deploy Openstack with Staypuft and The Foreman. > > The UI of The Foreman was intuitive enough for the discovery and > > provisioning of the servers. The OpenStack portion, not so much. > > > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I > > believe) that allowed you to graphically build your Openstack cloud. > > That was a reasonable good GUI for Openstack. > > > > Following that, TripleO become a script based installer, that > > required experience in Heat templates. I know I didn?t have it and > > had to ask in the mailing list about how to present this or change > > that. I got a couple of installs working with this setup. > > > > In the last session in Austin, my goal was to obtain information on > > how others were installing Openstack. 
I was pointed to Fuel as an > > alternative. I tried it up, and it just worked. It had the > > discovering capability from The Foreman, and the configuration > > options from TripleO. I understand that is based in Ansible and > > because of that, it is not fully CentOS ready for all the nodes (at > > least not in version 9 that I tried). In any case, as a deployer and > > installer, it is the most well rounded tool that I found. > > > > I?d love to see RDO moving into that direction, and having an easy > > to use, end user ready deployer tool. > > > > IB > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > > > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard >> > wrote: > >> > >> The vast majority of RDO's CI relies on using upstream > >> installation/deployment projects in order to test installation of > RDO > >> packages in different ways and configurations. > >> > >> Unless I'm mistaken, TripleO Quickstart was originally created as a > >> mean to "easily" install TripleO in different topologies without > >> requiring a massive amount of hardware. > >> This project allows us to test TripleO in virtual deployments on > just > >> one server instead of, say, 6. > >> > >> There's also WeIRDO [1] which was left out of your list. > >> WeIRDO is super simple and simply aims to run upstream gate jobs > (such > >> as puppet-openstack-integration [2][3] and packstack [4][5]) outside > >> of the gate. > >> It'll install dependencies that are expected to be there (i.e, > usually > >> set up by the openstack-infra gate preparation jobs), set up the > trunk > >> repositories we're interested in testing and the rest is handled by > >> the upstream project testing framework. > >> > >> The WeIRDO project is /very/ low maintenance and brings an > exceptional > >> amount of coverage and value. > >> This coverage is important because RDO provides OpenStack packages > or > >> projects that are not necessarily used by TripleO and the reality is > >> that not everyone deploying OpenStack on CentOS with RDO will be > using > >> TripleO. > >> > >> Anyway, sorry for sidetracking but back to the topic, thanks for > >> opening the discussion. > >> > >> What honestly perplexes me is the situation of CI in RDO and OSP, > >> especially around TripleO/Director, is the amount of work that is > >> spent downstream. > >> And by downstream, here, I mean anything that isn't in TripleO > proper. > >> > >> I keep dreaming about how awesome upstream TripleO CI would be if > all > >> that effort was spent directly there instead -- and then that all > work > >> could bear fruit and trickle down downstream for free. > >> Exactly like how we keep improving the testing coverage in > >> puppet-openstack-integration, it's automatically pulled in RDO CI > >> through WeIRDO for free. > >> We make the upstream better and we benefit from it simultaneously: > >> everyone wins. 
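The "trunk repositories" referred to in the quoted text are the DLRN builds mentioned earlier in the thread; enabling one on a CentOS 7 test box is roughly the sketch below. The URL layout is from memory and should be treated as an assumption:

    try:
        from urllib.request import urlretrieve   # Python 3
    except ImportError:
        from urllib import urlretrieve            # Python 2

    # "current-passed-ci" points at the newest trunk (DLRN) build that passed RDO CI.
    REPO_URL = "https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo"
    DEST = "/etc/yum.repos.d/delorean.repo"

    urlretrieve(REPO_URL, DEST)
    print("wrote %s; the matching delorean-deps.repo is usually needed as well" % DEST)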
> >> > >> [1]: https://github.com/rdo-infra/weirdo > >> [2]: > https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack > >> [3]: > >> > https://github.com/openstack/puppet-openstack-integration#description > >> [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> [5]: > >> https://github.com/openstack/packstack#packstack-integration-tests > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> > >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman >> > wrote: > >>> Hi, > >>> > >>> I would like to start a discussion on the overlap between tools we > >>> have for deploying and testing TripleO (RDO & RHOSP) in CI. > >>> > >>> Several months ago, we worked on one common framework for deploying > >>> and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > >>> didn't work out well, which eventually led each group to focus on > >>> developing other existing/new tools. > >>> > >>> What we have right now for deploying and testing > >>> -------------------------------------------------------- > >>> === Component CI, Gating === > >>> I'll start with the projects we created, I think that's only fair > :) > >>> > >>> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB > >>> project. > >>> > >>> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per > >>> release. > >>> > >>> * Octario[3] - Testing using RPMs (pep8, unit, functional, tempest, > >>> csit) + Patching RPMs with submitted code. > >>> > >>> === Automation, QE === > >>> * InfraRed[4] - provision install and test. Pluggable and modular, > >>> allows you to create your own provisioner, installer and tester. > >>> > >>> As far as I know, the groups is working now on different structure > of > >>> one main project and three sub projects (provision, install and > >>> test). > >>> > >>> === RDO === > >>> I didn't use RDO tools, so I apologize if I got something wrong: > >>> > >>> * About ~25 micro independent Ansible roles[5]. You can either > choose > >>> to use one of them or several together. They are used for > >>> provisioning, installing and testing Tripleo. > >>> > >>> * Tripleo-quickstart[6] - uses the micro roles for deploying > tripleo > >>> and test it. > >>> > >>> As I said, I didn't use the tools, so feel free to add more > >>> information you think is relevant. > >>> > >>> === More? === > >>> I hope not. Let us know if are familiar with more tools. > >>> > >>> Conclusion > >>> -------------- > >>> So as you can see, there are several projects that eventually > overlap > >>> in many areas. Each group is basically using the same tasks > >>> (provision > >>> resources, build/import overcloud images, run tempest, collect > logs, > >>> etc.) > >>> > >>> Personally, I think it's a waste of resources. For each task there > is > >>> at least two people from different groups who work on exactly the > >>> same > >>> task. The most recent example I can give is OVB. As far as I know, > >>> both groups are working on implementing it in their set of tools > >>> right > >>> now. > >>> > >>> On the other hand, you can always claim: "we already tried to work > on > >>> the same framework, we failed to do it successfully" - right, but > >>> maybe with better ground rules we can manage it. We would defiantly > >>> benefit a lot from doing that. > >>> > >>> What's next? 
> >>> ---------------- > >>> So first of all, I would like to hear from you if you think that we > >>> can collaborate once again or is it actually better to keep it as > it > >>> is now. > >>> > >>> If you agree that collaboration here makes sense, maybe you have > >>> ideas > >>> on how we can do it better this time. > >>> > >>> I think that setting up a meeting to discuss the right architecture > >>> for the project(s) and decide on good review/gating process, would > be > >>> a good start. > >>> > >>> Please let me know what do you think and keep in mind that this > >>> is not > >>> about which tool is better!. As you can see I didn't mention the > time > >>> it takes for each tool to deploy and test, and also not the full > >>> feature list it supports. > >>> If possible, we should keep it about collaborating and not choosing > >>> the best tool. Our solution could be the combination of two or more > >>> tools eventually (tripleo-red, infra-quickstart? :D ) > >>> > >>> "You may say I'm a dreamer, but I'm not the only one. I hope some > day > >>> you'll join us and the infra will be as one" :) > >>> > >>> [1] https://github.com/redhat-openstack/ansible-ovb > >>> [2] https://github.com/redhat-openstack/ansible-rhosp > >>> [3] https://github.com/redhat-openstack/octario > >>> [4] https://github.com/rhosqeauto/InfraRed > >>> [5] > >>> > https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role > >>> [6] https://github.com/openstack/tripleo-quickstart > >>> > >>> _______________________________________________ > >>> rdo-list mailing list > >>> rdo-list at redhat.com > >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Mon Aug 1 23:24:49 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 2 Aug 2016 09:24:49 +1000 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: <7f87e267-0cb5-e219-9d29-b168ed0b15b8@redhat.com> On 02/08/16 09:16, Mohammed Arafa wrote: > does this tripleo ui integrate into horizon like tuskar did? The new TripleO UI is completely stand alone from horizon. I would expect that once the UI reaches maturity we won't be using horizon on the undercloud at all, but I'll let someone else more familiar with the UI confirm. > > and if i recall correclty rdo was _renamed_ to tripleO a few months back That was "RDO Manager" which was the name we were using for TripleO. 
So you still had two things, RDO (the distribution) and RDO Manager which was the installer. We realised that having the installer called TripleO in upstream (openstack.org), RDO Manager midstream (in RDO), and RHOS Director downstream made no sense, so dropped the RDO Manager name. It's now TripleO everywhere. Hope this helps, Regards, Graeme > > On Mon, Aug 1, 2016 at 7:10 PM, Graeme Gillies > wrote: > > On 02/08/16 06:45, Mohammed Arafa wrote: > > I too am an end user and have a similar story. I had tried packstack all > > in one but when it was time to deploy to actual servers I looked to > > Ubuntu Maas. It was buggy so after a month or so of several attempts I > > went to RDO. And was happy when I had my environment up. But it was not > > reproducible. I spent months trying. And finally I looked elsewhere and > > was told fuel. > > With fuel I have ha and ceph and live migration with in 2 hours. And > > repeatable too > > > > And yes. When tripleo quick start showed up. I did not even look at it. > > Information overload? Too much time spent evaluating and too little > > building something productive? And now I hear of even more. > > > > In honesty with the rename of RDO to triple o is there any need for an > > installer? > > > > /outburst over > > Hi, > > Just to clear up some confusion here, RDO and TripleO are two very > different things. > > RDO Is an RPM distribution of Openstack that aims to track upstream > Openstack as closely as possible. We provide stable RPM trees for the > current stable releases of Openstack, as well as RPM trees of the latest > Openstack code as it's committed upstream (through DLRN). > > You can deploy and manage Openstack with RDO however you would like > including puppet, chef, ansible, saltstack, and RDO also works with some > of the complete installer projects like TripleO and even kolla (you can > see the containers created by kolla have centos/RDO options at [1]). > > TripleO is an Openstack project which Red Hat is largely involved in, > and aims to make an Openstack installer using Openstack itself. It > currently works best with RDO because that's where most of our effort is > focussed, but there is no technical reason it can't work with other > distros or Operating systems. > > Both of these pieces go into our downstream commercial Offering. RDO > becomes Red Hat Openstack Platform, while TripleO becomes RHOS Director. > > If you wish to consume RDO there is no reason for you to be forced to > use TripleO, and are welcome to use any other deployment method or tool. > We welcome people expanding RDO by utilising it however they please, > deploying it however they like. > > I would love to hear from more people in the community who are using RDO > and not using the default packstack/TripleO installers, as it allows us > to get a better understanding of users needs, and helps us learn more > about what tools and workflows work best for people. > > Regards, > > Graeme > > [1] https://hub.docker.com/u/kolla/ > > > > > > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" > > >> wrote: > > > > If we are talking about tools, I would also want to add something > > with regards to user interface of these tools. This is based on my > > own experience: > > > > I started trying to deploy Openstack with Staypuft and The > Foreman. > > The UI of The Foreman was intuitive enough for the discovery and > > provisioning of the servers. The OpenStack portion, not so much. 
> > > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I > > believe) that allowed you to graphically build your Openstack > cloud. > > That was a reasonable good GUI for Openstack. > > > > Following that, TripleO become a script based installer, that > > required experience in Heat templates. I know I didn?t have it and > > had to ask in the mailing list about how to present this or change > > that. I got a couple of installs working with this setup. > > > > In the last session in Austin, my goal was to obtain > information on > > how others were installing Openstack. I was pointed to Fuel as an > > alternative. I tried it up, and it just worked. It had the > > discovering capability from The Foreman, and the configuration > > options from TripleO. I understand that is based in Ansible and > > because of that, it is not fully CentOS ready for all the > nodes (at > > least not in version 9 that I tried). In any case, as a > deployer and > > installer, it is the most well rounded tool that I found. > > > > I?d love to see RDO moving into that direction, and having an easy > > to use, end user ready deployer tool. > > > > IB > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > > > > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard > >> >> wrote: > >> > >> The vast majority of RDO's CI relies on using upstream > >> installation/deployment projects in order to test > installation of RDO > >> packages in different ways and configurations. > >> > >> Unless I'm mistaken, TripleO Quickstart was originally > created as a > >> mean to "easily" install TripleO in different topologies without > >> requiring a massive amount of hardware. > >> This project allows us to test TripleO in virtual deployments > on just > >> one server instead of, say, 6. > >> > >> There's also WeIRDO [1] which was left out of your list. > >> WeIRDO is super simple and simply aims to run upstream gate > jobs (such > >> as puppet-openstack-integration [2][3] and packstack [4][5]) > outside > >> of the gate. > >> It'll install dependencies that are expected to be there > (i.e, usually > >> set up by the openstack-infra gate preparation jobs), set up > the trunk > >> repositories we're interested in testing and the rest is > handled by > >> the upstream project testing framework. > >> > >> The WeIRDO project is /very/ low maintenance and brings an > exceptional > >> amount of coverage and value. > >> This coverage is important because RDO provides OpenStack > packages or > >> projects that are not necessarily used by TripleO and the > reality is > >> that not everyone deploying OpenStack on CentOS with RDO will > be using > >> TripleO. > >> > >> Anyway, sorry for sidetracking but back to the topic, thanks for > >> opening the discussion. > >> > >> What honestly perplexes me is the situation of CI in RDO and OSP, > >> especially around TripleO/Director, is the amount of work that is > >> spent downstream. > >> And by downstream, here, I mean anything that isn't in > TripleO proper. > >> > >> I keep dreaming about how awesome upstream TripleO CI would > be if all > >> that effort was spent directly there instead -- and then that > all work > >> could bear fruit and trickle down downstream for free. > >> Exactly like how we keep improving the testing coverage in > >> puppet-openstack-integration, it's automatically pulled in RDO CI > >> through WeIRDO for free. > >> We make the upstream better and we benefit from it > simultaneously: > >> everyone wins. 
> >> > >> [1]: https://github.com/rdo-infra/weirdo > >> [2]: > https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack > >> [3]: > >> > https://github.com/openstack/puppet-openstack-integration#description > >> [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> [5]: > >> > https://github.com/openstack/packstack#packstack-integration-tests > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> > >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > > >> >> wrote: > >>> Hi, > >>> > >>> I would like to start a discussion on the overlap between > tools we > >>> have for deploying and testing TripleO (RDO & RHOSP) in CI. > >>> > >>> Several months ago, we worked on one common framework for > deploying > >>> and testing OpenStack (RDO & RHOSP) in CI. I think you can > say it > >>> didn't work out well, which eventually led each group to > focus on > >>> developing other existing/new tools. > >>> > >>> What we have right now for deploying and testing > >>> -------------------------------------------------------- > >>> === Component CI, Gating === > >>> I'll start with the projects we created, I think that's only > fair :) > >>> > >>> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using > the OVB > >>> project. > >>> > >>> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per > >>> release. > >>> > >>> * Octario[3] - Testing using RPMs (pep8, unit, functional, > tempest, > >>> csit) + Patching RPMs with submitted code. > >>> > >>> === Automation, QE === > >>> * InfraRed[4] - provision install and test. Pluggable and > modular, > >>> allows you to create your own provisioner, installer and tester. > >>> > >>> As far as I know, the groups is working now on different > structure of > >>> one main project and three sub projects (provision, install and > >>> test). > >>> > >>> === RDO === > >>> I didn't use RDO tools, so I apologize if I got something wrong: > >>> > >>> * About ~25 micro independent Ansible roles[5]. You can > either choose > >>> to use one of them or several together. They are used for > >>> provisioning, installing and testing Tripleo. > >>> > >>> * Tripleo-quickstart[6] - uses the micro roles for deploying > tripleo > >>> and test it. > >>> > >>> As I said, I didn't use the tools, so feel free to add more > >>> information you think is relevant. > >>> > >>> === More? === > >>> I hope not. Let us know if are familiar with more tools. > >>> > >>> Conclusion > >>> -------------- > >>> So as you can see, there are several projects that > eventually overlap > >>> in many areas. Each group is basically using the same tasks > >>> (provision > >>> resources, build/import overcloud images, run tempest, > collect logs, > >>> etc.) > >>> > >>> Personally, I think it's a waste of resources. For each task > there is > >>> at least two people from different groups who work on > exactly the > >>> same > >>> task. The most recent example I can give is OVB. As far as I > know, > >>> both groups are working on implementing it in their set of tools > >>> right > >>> now. > >>> > >>> On the other hand, you can always claim: "we already tried > to work on > >>> the same framework, we failed to do it successfully" - > right, but > >>> maybe with better ground rules we can manage it. We would > defiantly > >>> benefit a lot from doing that. > >>> > >>> What's next? 
> >>> ---------------- > >>> So first of all, I would like to hear from you if you think > that we > >>> can collaborate once again or is it actually better to keep > it as it > >>> is now. > >>> > >>> If you agree that collaboration here makes sense, maybe you have > >>> ideas > >>> on how we can do it better this time. > >>> > >>> I think that setting up a meeting to discuss the right > architecture > >>> for the project(s) and decide on good review/gating process, > would be > >>> a good start. > >>> > >>> Please let me know what do you think and keep in mind that this > >>> is not > >>> about which tool is better!. As you can see I didn't mention > the time > >>> it takes for each tool to deploy and test, and also not the full > >>> feature list it supports. > >>> If possible, we should keep it about collaborating and not > choosing > >>> the best tool. Our solution could be the combination of two > or more > >>> tools eventually (tripleo-red, infra-quickstart? :D ) > >>> > >>> "You may say I'm a dreamer, but I'm not the only one. I hope > some day > >>> you'll join us and the infra will be as one" :) > >>> > >>> [1] https://github.com/redhat-openstack/ansible-ovb > >>> [2] https://github.com/redhat-openstack/ansible-rhosp > >>> [3] https://github.com/redhat-openstack/octario > >>> [4] https://github.com/rhosqeauto/InfraRed > >>> [5] > >>> > https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role > >>> [6] https://github.com/openstack/tripleo-quickstart > >>> > >>> _______________________________________________ > >>> rdo-list mailing list > >>> rdo-list at redhat.com > > > >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > > > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > > > > -- > > > > > > > > *805010942448935* > ** > > > > > *GR750055912MA* > > > > > *Link to me on LinkedIn * > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From ggillies at redhat.com Mon Aug 1 23:27:51 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 2 Aug 2016 09:27:51 +1000 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: On 02/08/16 08:01, Pedro Sousa wrote: > My 2 cents here as an operator/integrator, since I've been using the > CentOS SIG repositories (mitaka) and following the RHEL Oficial > Documentation, I've managed to install several baremetal tripleo based > clouds with success. 
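As a side note on the CentOS Cloud SIG repositories quoted just above: on CentOS 7 they are enabled by a single release package, after which the RDO packages install from the SIG mirrors. A rough sketch, with package names as I recall them for Mitaka:

    import subprocess

    # Enable the Cloud SIG Mitaka repository, then install a deployment tool from it.
    subprocess.check_call(["yum", "-y", "install", "centos-release-openstack-mitaka"])
    subprocess.check_call(["yum", "-y", "install", "openstack-packstack"])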
I've not tried tripleo quickstart, > > I've also tried Fuel in the past and it works pretty well with the > plugin architecture and the network validation among other things, but > still I prefer tripleo, it gives me more flexibility to setup the > network the way I want it, and using ironic to provision the baremetal > hosts is pretty cool too. Also personally I prefer to use Centos than > Ubuntu as O.S base system, I find it more stable. > > Still tripleo lacks the ease of installation that Fuel has, and an UI > would be great. Also, I'm not sure that using heat templates is the best > approach, specially when someone makes a mistake editing the yaml files > and stack returns an error. This could happen when you try to update the > overcloud nodes, scaling the compute nodes for example. It's not easy to > revert the heat stack when you make a mistake. > > There's a lot of room to improve, specially in terms of complexity of > installation and update. Maybe containers (kolla) could be a good > approach in the future? Hi Pedro, As an Operator and long time user of TripleO, I sympathise with you that the combination of heat templates and puppet are difficult to learn and don't have the mature tooling to help you understand and test how changes to the code will reflect in the real environment. One thing I will point out is that if you do a stack update that fails, more often than not it's not the end of the world. If you go on your controller plane and make sure pacemaker and all services are up and running, the state of the stack in heat on the undercloud doesn't really matter as much. This way we always try to "fail forward". If we do a bad stack update, we make sure the environment is stable again, and then push a new stack update with the fixes. Having a staging or test environment you can utilise to perform changes on first in order to identify these problems is also beneficial. We try and get all our Operators to have a virtual tripleo setup on a desktop for them to "develop" changes on, as well as a shared staging environments to do final testing of any proposed change before rolling into production. If you are interested in understanding our full development/rollout process I can go into more detail. Also kolla already supports centos/RDO, so you can head over to the kolla project and follow their documentation if you are interested in giving it a go. You are able to use Centos and RDO with containers right now, no need to wait for anything in the future. Regards, Graeme > > > > > > > > On Mon, Aug 1, 2016 at 9:45 PM, Mohammed Arafa > wrote: > > I too am an end user and have a similar story. I had tried packstack > all in one but when it was time to deploy to actual servers I looked > to Ubuntu Maas. It was buggy so after a month or so of several > attempts I went to RDO. And was happy when I had my environment up. > But it was not reproducible. I spent months trying. And finally I > looked elsewhere and was told fuel. > With fuel I have ha and ceph and live migration with in 2 hours. And > repeatable too > > And yes. When tripleo quick start showed up. I did not even look at > it. Information overload? Too much time spent evaluating and too > little building something productive? And now I hear of even more. > > In honesty with the rename of RDO to triple o is there any need for > an installer? 
> > /outburst over > > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" > wrote: > > If we are talking about tools, I would also want to add > something with regards to user interface of these tools. This is > based on my own experience: > > I started trying to deploy Openstack with Staypuft and The > Foreman. The UI of The Foreman was intuitive enough for the > discovery and provisioning of the servers. The OpenStack > portion, not so much. > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I > believe) that allowed you to graphically build your Openstack > cloud. That was a reasonable good GUI for Openstack. > > Following that, TripleO become a script based installer, that > required experience in Heat templates. I know I didn?t have it > and had to ask in the mailing list about how to present this or > change that. I got a couple of installs working with this setup. > > In the last session in Austin, my goal was to obtain information > on how others were installing Openstack. I was pointed to Fuel > as an alternative. I tried it up, and it just worked. It had the > discovering capability from The Foreman, and the configuration > options from TripleO. I understand that is based in Ansible and > because of that, it is not fully CentOS ready for all the nodes > (at least not in version 9 that I tried). In any case, as a > deployer and installer, it is the most well rounded tool that I > found. > > I?d love to see RDO moving into that direction, and having an > easy to use, end user ready deployer tool. > > IB > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard >> > wrote: >> >> The vast majority of RDO's CI relies on using upstream >> installation/deployment projects in order to test installation >> of RDO >> packages in different ways and configurations. >> >> Unless I'm mistaken, TripleO Quickstart was originally created >> as a >> mean to "easily" install TripleO in different topologies without >> requiring a massive amount of hardware. >> This project allows us to test TripleO in virtual deployments >> on just >> one server instead of, say, 6. >> >> There's also WeIRDO [1] which was left out of your list. >> WeIRDO is super simple and simply aims to run upstream gate >> jobs (such >> as puppet-openstack-integration [2][3] and packstack [4][5]) >> outside >> of the gate. >> It'll install dependencies that are expected to be there (i.e, >> usually >> set up by the openstack-infra gate preparation jobs), set up >> the trunk >> repositories we're interested in testing and the rest is >> handled by >> the upstream project testing framework. >> >> The WeIRDO project is /very/ low maintenance and brings an >> exceptional >> amount of coverage and value. >> This coverage is important because RDO provides OpenStack >> packages or >> projects that are not necessarily used by TripleO and the >> reality is >> that not everyone deploying OpenStack on CentOS with RDO will >> be using >> TripleO. >> >> Anyway, sorry for sidetracking but back to the topic, thanks for >> opening the discussion. >> >> What honestly perplexes me is the situation of CI in RDO and OSP, >> especially around TripleO/Director, is the amount of work that is >> spent downstream. >> And by downstream, here, I mean anything that isn't in TripleO >> proper. 
>> >> I keep dreaming about how awesome upstream TripleO CI would be >> if all >> that effort was spent directly there instead -- and then that >> all work >> could bear fruit and trickle down downstream for free. >> Exactly like how we keep improving the testing coverage in >> puppet-openstack-integration, it's automatically pulled in RDO CI >> through WeIRDO for free. >> We make the upstream better and we benefit from it simultaneously: >> everyone wins. >> >> [1]: https://github.com/rdo-infra/weirdo >> [2]: >> https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack >> [3]: >> https://github.com/openstack/puppet-openstack-integration#description >> [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> [5]: >> https://github.com/openstack/packstack#packstack-integration-tests >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman >> > wrote: >>> Hi, >>> >>> I would like to start a discussion on the overlap between >>> tools we >>> have for deploying and testing TripleO (RDO & RHOSP) in CI. >>> >>> Several months ago, we worked on one common framework for >>> deploying >>> and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >>> didn't work out well, which eventually led each group to focus on >>> developing other existing/new tools. >>> >>> What we have right now for deploying and testing >>> -------------------------------------------------------- >>> === Component CI, Gating === >>> I'll start with the projects we created, I think that's only >>> fair :) >>> >>> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the >>> OVB project. >>> >>> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >>> release. >>> >>> * Octario[3] - Testing using RPMs (pep8, unit, functional, >>> tempest, >>> csit) + Patching RPMs with submitted code. >>> >>> === Automation, QE === >>> * InfraRed[4] - provision install and test. Pluggable and >>> modular, >>> allows you to create your own provisioner, installer and tester. >>> >>> As far as I know, the groups is working now on different >>> structure of >>> one main project and three sub projects (provision, install >>> and test). >>> >>> === RDO === >>> I didn't use RDO tools, so I apologize if I got something wrong: >>> >>> * About ~25 micro independent Ansible roles[5]. You can >>> either choose >>> to use one of them or several together. They are used for >>> provisioning, installing and testing Tripleo. >>> >>> * Tripleo-quickstart[6] - uses the micro roles for deploying >>> tripleo >>> and test it. >>> >>> As I said, I didn't use the tools, so feel free to add more >>> information you think is relevant. >>> >>> === More? === >>> I hope not. Let us know if are familiar with more tools. >>> >>> Conclusion >>> -------------- >>> So as you can see, there are several projects that eventually >>> overlap >>> in many areas. Each group is basically using the same tasks >>> (provision >>> resources, build/import overcloud images, run tempest, >>> collect logs, >>> etc.) >>> >>> Personally, I think it's a waste of resources. For each task >>> there is >>> at least two people from different groups who work on exactly >>> the same >>> task. The most recent example I can give is OVB. As far as I >>> know, >>> both groups are working on implementing it in their set of >>> tools right >>> now. 
>>> >>> On the other hand, you can always claim: "we already tried to >>> work on >>> the same framework, we failed to do it successfully" - right, but >>> maybe with better ground rules we can manage it. We would >>> defiantly >>> benefit a lot from doing that. >>> >>> What's next? >>> ---------------- >>> So first of all, I would like to hear from you if you think >>> that we >>> can collaborate once again or is it actually better to keep >>> it as it >>> is now. >>> >>> If you agree that collaboration here makes sense, maybe you >>> have ideas >>> on how we can do it better this time. >>> >>> I think that setting up a meeting to discuss the right >>> architecture >>> for the project(s) and decide on good review/gating process, >>> would be >>> a good start. >>> >>> Please let me know what do you think and keep in mind that >>> this is not >>> about which tool is better!. As you can see I didn't mention >>> the time >>> it takes for each tool to deploy and test, and also not the full >>> feature list it supports. >>> If possible, we should keep it about collaborating and not >>> choosing >>> the best tool. Our solution could be the combination of two >>> or more >>> tools eventually (tripleo-red, infra-quickstart? :D ) >>> >>> "You may say I'm a dreamer, but I'm not the only one. I hope >>> some day >>> you'll join us and the infra will be as one" :) >>> >>> [1] https://github.com/redhat-openstack/ansible-ovb >>> [2] https://github.com/redhat-openstack/ansible-rhosp >>> [3] https://github.com/redhat-openstack/octario >>> [4] https://github.com/rhosqeauto/InfraRed >>> [5] >>> https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role >>> [6] https://github.com/openstack/tripleo-quickstart >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From pgsousa at gmail.com Mon Aug 1 23:40:57 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 2 Aug 2016 00:40:57 +0100 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: HI Graeme, I'm more interested in following RHEL OSP approach, so I install both Director or TripleO depending on my customers, but I'll take a close look on Kolla. I have a question (I know it's out of scope) but if you can answer me, I appreciate it. For overcloud nodes images do we still need delorean repos? 
Or can we just install a clean Centos image with SIG repositories? I want the images to be as stable as possible. Thanks On Tue, Aug 2, 2016 at 12:27 AM, Graeme Gillies wrote: > On 02/08/16 08:01, Pedro Sousa wrote: > > My 2 cents here as an operator/integrator, since I've been using the > > CentOS SIG repositories (mitaka) and following the RHEL Oficial > > Documentation, I've managed to install several baremetal tripleo based > > clouds with success. I've not tried tripleo quickstart, > > > > I've also tried Fuel in the past and it works pretty well with the > > plugin architecture and the network validation among other things, but > > still I prefer tripleo, it gives me more flexibility to setup the > > network the way I want it, and using ironic to provision the baremetal > > hosts is pretty cool too. Also personally I prefer to use Centos than > > Ubuntu as O.S base system, I find it more stable. > > > > Still tripleo lacks the ease of installation that Fuel has, and an UI > > would be great. Also, I'm not sure that using heat templates is the best > > approach, specially when someone makes a mistake editing the yaml files > > and stack returns an error. This could happen when you try to update the > > overcloud nodes, scaling the compute nodes for example. It's not easy to > > revert the heat stack when you make a mistake. > > > > There's a lot of room to improve, specially in terms of complexity of > > installation and update. Maybe containers (kolla) could be a good > > approach in the future? > > Hi Pedro, > > As an Operator and long time user of TripleO, I sympathise with you that > the combination of heat templates and puppet are difficult to learn and > don't have the mature tooling to help you understand and test how > changes to the code will reflect in the real environment. > > One thing I will point out is that if you do a stack update that fails, > more often than not it's not the end of the world. If you go on your > controller plane and make sure pacemaker and all services are up and > running, the state of the stack in heat on the undercloud doesn't really > matter as much. > > This way we always try to "fail forward". If we do a bad stack update, > we make sure the environment is stable again, and then push a new stack > update with the fixes. > > Having a staging or test environment you can utilise to perform changes > on first in order to identify these problems is also beneficial. We try > and get all our Operators to have a virtual tripleo setup on a desktop > for them to "develop" changes on, as well as a shared staging > environments to do final testing of any proposed change before rolling > into production. > > If you are interested in understanding our full development/rollout > process I can go into more detail. > > Also kolla already supports centos/RDO, so you can head over to the > kolla project and follow their documentation if you are interested in > giving it a go. You are able to use Centos and RDO with containers right > now, no need to wait for anything in the future. > > Regards, > > Graeme > > > > > > > > > > > > > > > > > On Mon, Aug 1, 2016 at 9:45 PM, Mohammed Arafa > > wrote: > > > > I too am an end user and have a similar story. I had tried packstack > > all in one but when it was time to deploy to actual servers I looked > > to Ubuntu Maas. It was buggy so after a month or so of several > > attempts I went to RDO. And was happy when I had my environment up. > > But it was not reproducible. I spent months trying. 
And finally I > > looked elsewhere and was told fuel. > > With fuel I have ha and ceph and live migration with in 2 hours. And > > repeatable too > > > > And yes. When tripleo quick start showed up. I did not even look at > > it. Information overload? Too much time spent evaluating and too > > little building something productive? And now I hear of even more. > > > > In honesty with the rename of RDO to triple o is there any need for > > an installer? > > > > /outburst over > > > > > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" > > wrote: > > > > If we are talking about tools, I would also want to add > > something with regards to user interface of these tools. This is > > based on my own experience: > > > > I started trying to deploy Openstack with Staypuft and The > > Foreman. The UI of The Foreman was intuitive enough for the > > discovery and provisioning of the servers. The OpenStack > > portion, not so much. > > > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I > > believe) that allowed you to graphically build your Openstack > > cloud. That was a reasonable good GUI for Openstack. > > > > Following that, TripleO become a script based installer, that > > required experience in Heat templates. I know I didn?t have it > > and had to ask in the mailing list about how to present this or > > change that. I got a couple of installs working with this setup. > > > > In the last session in Austin, my goal was to obtain information > > on how others were installing Openstack. I was pointed to Fuel > > as an alternative. I tried it up, and it just worked. It had the > > discovering capability from The Foreman, and the configuration > > options from TripleO. I understand that is based in Ansible and > > because of that, it is not fully CentOS ready for all the nodes > > (at least not in version 9 that I tried). In any case, as a > > deployer and installer, it is the most well rounded tool that I > > found. > > > > I?d love to see RDO moving into that direction, and having an > > easy to use, end user ready deployer tool. > > > > IB > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > > > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard > >> > wrote: > >> > >> The vast majority of RDO's CI relies on using upstream > >> installation/deployment projects in order to test installation > >> of RDO > >> packages in different ways and configurations. > >> > >> Unless I'm mistaken, TripleO Quickstart was originally created > >> as a > >> mean to "easily" install TripleO in different topologies without > >> requiring a massive amount of hardware. > >> This project allows us to test TripleO in virtual deployments > >> on just > >> one server instead of, say, 6. > >> > >> There's also WeIRDO [1] which was left out of your list. > >> WeIRDO is super simple and simply aims to run upstream gate > >> jobs (such > >> as puppet-openstack-integration [2][3] and packstack [4][5]) > >> outside > >> of the gate. > >> It'll install dependencies that are expected to be there (i.e, > >> usually > >> set up by the openstack-infra gate preparation jobs), set up > >> the trunk > >> repositories we're interested in testing and the rest is > >> handled by > >> the upstream project testing framework. > >> > >> The WeIRDO project is /very/ low maintenance and brings an > >> exceptional > >> amount of coverage and value. 
> >> This coverage is important because RDO provides OpenStack > >> packages or > >> projects that are not necessarily used by TripleO and the > >> reality is > >> that not everyone deploying OpenStack on CentOS with RDO will > >> be using > >> TripleO. > >> > >> Anyway, sorry for sidetracking but back to the topic, thanks for > >> opening the discussion. > >> > >> What honestly perplexes me is the situation of CI in RDO and > OSP, > >> especially around TripleO/Director, is the amount of work that > is > >> spent downstream. > >> And by downstream, here, I mean anything that isn't in TripleO > >> proper. > >> > >> I keep dreaming about how awesome upstream TripleO CI would be > >> if all > >> that effort was spent directly there instead -- and then that > >> all work > >> could bear fruit and trickle down downstream for free. > >> Exactly like how we keep improving the testing coverage in > >> puppet-openstack-integration, it's automatically pulled in RDO > CI > >> through WeIRDO for free. > >> We make the upstream better and we benefit from it > simultaneously: > >> everyone wins. > >> > >> [1]: https://github.com/rdo-infra/weirdo > >> [2]: > >> > https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack > >> [3]: > >> > https://github.com/openstack/puppet-openstack-integration#description > >> [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> [5]: > >> > https://github.com/openstack/packstack#packstack-integration-tests > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> > >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > >> > wrote: > >>> Hi, > >>> > >>> I would like to start a discussion on the overlap between > >>> tools we > >>> have for deploying and testing TripleO (RDO & RHOSP) in CI. > >>> > >>> Several months ago, we worked on one common framework for > >>> deploying > >>> and testing OpenStack (RDO & RHOSP) in CI. I think you can say > it > >>> didn't work out well, which eventually led each group to focus > on > >>> developing other existing/new tools. > >>> > >>> What we have right now for deploying and testing > >>> -------------------------------------------------------- > >>> === Component CI, Gating === > >>> I'll start with the projects we created, I think that's only > >>> fair :) > >>> > >>> * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the > >>> OVB project. > >>> > >>> * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per > >>> release. > >>> > >>> * Octario[3] - Testing using RPMs (pep8, unit, functional, > >>> tempest, > >>> csit) + Patching RPMs with submitted code. > >>> > >>> === Automation, QE === > >>> * InfraRed[4] - provision install and test. Pluggable and > >>> modular, > >>> allows you to create your own provisioner, installer and > tester. > >>> > >>> As far as I know, the groups is working now on different > >>> structure of > >>> one main project and three sub projects (provision, install > >>> and test). > >>> > >>> === RDO === > >>> I didn't use RDO tools, so I apologize if I got something > wrong: > >>> > >>> * About ~25 micro independent Ansible roles[5]. You can > >>> either choose > >>> to use one of them or several together. They are used for > >>> provisioning, installing and testing Tripleo. > >>> > >>> * Tripleo-quickstart[6] - uses the micro roles for deploying > >>> tripleo > >>> and test it. 
> >>> > >>> As I said, I didn't use the tools, so feel free to add more > >>> information you think is relevant. > >>> > >>> === More? === > >>> I hope not. Let us know if are familiar with more tools. > >>> > >>> Conclusion > >>> -------------- > >>> So as you can see, there are several projects that eventually > >>> overlap > >>> in many areas. Each group is basically using the same tasks > >>> (provision > >>> resources, build/import overcloud images, run tempest, > >>> collect logs, > >>> etc.) > >>> > >>> Personally, I think it's a waste of resources. For each task > >>> there is > >>> at least two people from different groups who work on exactly > >>> the same > >>> task. The most recent example I can give is OVB. As far as I > >>> know, > >>> both groups are working on implementing it in their set of > >>> tools right > >>> now. > >>> > >>> On the other hand, you can always claim: "we already tried to > >>> work on > >>> the same framework, we failed to do it successfully" - right, > but > >>> maybe with better ground rules we can manage it. We would > >>> defiantly > >>> benefit a lot from doing that. > >>> > >>> What's next? > >>> ---------------- > >>> So first of all, I would like to hear from you if you think > >>> that we > >>> can collaborate once again or is it actually better to keep > >>> it as it > >>> is now. > >>> > >>> If you agree that collaboration here makes sense, maybe you > >>> have ideas > >>> on how we can do it better this time. > >>> > >>> I think that setting up a meeting to discuss the right > >>> architecture > >>> for the project(s) and decide on good review/gating process, > >>> would be > >>> a good start. > >>> > >>> Please let me know what do you think and keep in mind that > >>> this is not > >>> about which tool is better!. As you can see I didn't mention > >>> the time > >>> it takes for each tool to deploy and test, and also not the > full > >>> feature list it supports. > >>> If possible, we should keep it about collaborating and not > >>> choosing > >>> the best tool. Our solution could be the combination of two > >>> or more > >>> tools eventually (tripleo-red, infra-quickstart? :D ) > >>> > >>> "You may say I'm a dreamer, but I'm not the only one. 
I hope > >>> some day > >>> you'll join us and the infra will be as one" :) > >>> > >>> [1] https://github.com/redhat-openstack/ansible-ovb > >>> [2] https://github.com/redhat-openstack/ansible-rhosp > >>> [3] https://github.com/redhat-openstack/octario > >>> [4] https://github.com/rhosqeauto/InfraRed > >>> [5] > >>> > https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role > >>> [6] https://github.com/openstack/tripleo-quickstart > >>> > >>> _______________________________________________ > >>> rdo-list mailing list > >>> rdo-list at redhat.com > >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Mon Aug 1 23:48:14 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 2 Aug 2016 09:48:14 +1000 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: On 02/08/16 09:40, Pedro Sousa wrote: > HI Graeme, > > I'm more interested in following RHEL OSP approach, so I install both > Director or TripleO depending on my customers, but I'll take a close > look on Kolla. Sure thing. To be clear here, we are very interested in containers and looking at how they can currently fit into TripleO/Director. Containers are an amazing technology, but come with a lots of pros and cons for Operators, so we want to make sure we are considering them in a fashion that makes the most sense. Kolla is a great way to use them now if you are just purely interested in looking at a container based deployment. > > I have a question (I know it's out of scope) but if you can answer me, I > appreciate it. For overcloud nodes images do we still need delorean > repos? Or can we just install a clean Centos image with SIG > repositories? I want the images to be as stable as possible. For my stable RDO deployments I make sure not to use any delorean repos, and I would expect that would be the same for most people. You have two ways of doing this. Either using the pre-built stable rdo images at http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ Which is preferred as they are what have been tested and validated by us. 
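For anyone who has not used them before, consuming the pre-built images is just a case of downloading them onto the undercloud and running the normal image upload step. Treat this as a rough sketch only - I'm assuming the tarballs in that directory are named overcloud-full.tar and ironic-python-agent.tar, so check the directory listing for the exact file names before downloading:

# mkdir ~/images
# cd ~/images
# curl -O http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/overcloud-full.tar
# curl -O http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ironic-python-agent.tar
# tar -xf overcloud-full.tar
# tar -xf ironic-python-agent.tar
# source ~/stackrc
# openstack overcloud image upload --update-existing

The upload command picks the images up from the current directory and registers them on the undercloud the same way it would for images you build locally.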
If you wish to build your own overcloud images using stable packages only, you need to do the following: Manually patch the undercloud to build overcloud images using rhos-release rpm only (which utilises the stable Mitaka repo from CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying the file /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py At around line 467 you will see a reference to epel; I add a new line after that to include the rdo_release DIB element to the build as well. This typically makes the file look something like http://paste.openstack.org/show/508196/ (note line 468). Then I create a directory to store my images and build them specifying the mitaka version of rdo_release. I then upload these images: # mkdir ~/images # cd ~/images # export RDO_RELEASE=mitaka # openstack overcloud image build --all # openstack overcloud image upload --update-existing I'm not sure if someone else can shed some light on an easier way to build overcloud images with rdo-release and not delorean. Hope this helps, Regards, Graeme > Thanks > > On Tue, Aug 2, 2016 at 12:27 AM, Graeme Gillies wrote: > On 02/08/16 08:01, Pedro Sousa wrote: > > My 2 cents here as an operator/integrator, since I've been using the > > CentOS SIG repositories (mitaka) and following the RHEL Oficial > > Documentation, I've managed to install several baremetal tripleo based > > clouds with success. I've not tried tripleo quickstart, > > > > I've also tried Fuel in the past and it works pretty well with the > > plugin architecture and the network validation among other things, but > > still I prefer tripleo, it gives me more flexibility to setup the > > network the way I want it, and using ironic to provision the baremetal > > hosts is pretty cool too. Also personally I prefer to use Centos than > > Ubuntu as O.S base system, I find it more stable. > > > > Still tripleo lacks the ease of installation that Fuel has, and an UI > > would be great. Also, I'm not sure that using heat templates is the best > > approach, specially when someone makes a mistake editing the yaml files > > and stack returns an error. This could happen when you try to update the > > overcloud nodes, scaling the compute nodes for example. It's not easy to > > revert the heat stack when you make a mistake. > > > > There's a lot of room to improve, specially in terms of complexity of > > installation and update. Maybe containers (kolla) could be a good > > approach in the future? > > Hi Pedro, > > As an Operator and long time user of TripleO, I sympathise with you that > the combination of heat templates and puppet are difficult to learn and > don't have the mature tooling to help you understand and test how > changes to the code will reflect in the real environment. > > One thing I will point out is that if you do a stack update that fails, > more often than not it's not the end of the world. If you go on your > controller plane and make sure pacemaker and all services are up and > running, the state of the stack in heat on the undercloud doesn't really > matter as much. > > This way we always try to "fail forward". If we do a bad stack update, > we make sure the environment is stable again, and then push a new stack > update with the fixes. > > Having a staging or test environment you can utilise to perform changes > on first in order to identify these problems is also beneficial.
We try > and get all our Operators to have a virtual tripleo setup on a desktop > for them to "develop" changes on, as well as a shared staging > environments to do final testing of any proposed change before rolling > into production. > > If you are interested in understanding our full development/rollout > process I can go into more detail. > > Also kolla already supports centos/RDO, so you can head over to the > kolla project and follow their documentation if you are interested in > giving it a go. You are able to use Centos and RDO with containers right > now, no need to wait for anything in the future. > > Regards, > > Graeme > > > > > > > > > > > > > > > > > On Mon, Aug 1, 2016 at 9:45 PM, Mohammed Arafa > > >> wrote: > > > > I too am an end user and have a similar story. I had tried packstack > > all in one but when it was time to deploy to actual servers I looked > > to Ubuntu Maas. It was buggy so after a month or so of several > > attempts I went to RDO. And was happy when I had my environment up. > > But it was not reproducible. I spent months trying. And finally I > > looked elsewhere and was told fuel. > > With fuel I have ha and ceph and live migration with in 2 hours. And > > repeatable too > > > > And yes. When tripleo quick start showed up. I did not even look at > > it. Information overload? Too much time spent evaluating and too > > little building something productive? And now I hear of even more. > > > > In honesty with the rename of RDO to triple o is there any need for > > an installer? > > > > /outburst over > > > > > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" > > >> > wrote: > > > > If we are talking about tools, I would also want to add > > something with regards to user interface of these tools. > This is > > based on my own experience: > > > > I started trying to deploy Openstack with Staypuft and The > > Foreman. The UI of The Foreman was intuitive enough for the > > discovery and provisioning of the servers. The OpenStack > > portion, not so much. > > > > Forward a couple of releases and we had a TripleO GUI > (Tuskar, I > > believe) that allowed you to graphically build your Openstack > > cloud. That was a reasonable good GUI for Openstack. > > > > Following that, TripleO become a script based installer, that > > required experience in Heat templates. I know I didn?t have it > > and had to ask in the mailing list about how to present > this or > > change that. I got a couple of installs working with this > setup. > > > > In the last session in Austin, my goal was to obtain > information > > on how others were installing Openstack. I was pointed to Fuel > > as an alternative. I tried it up, and it just worked. It > had the > > discovering capability from The Foreman, and the configuration > > options from TripleO. I understand that is based in > Ansible and > > because of that, it is not fully CentOS ready for all the > nodes > > (at least not in version 9 that I tried). In any case, as a > > deployer and installer, it is the most well rounded tool > that I > > found. > > > > I?d love to see RDO moving into that direction, and having an > > easy to use, end user ready deployer tool. > > > > IB > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > > > > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard > >> > >> wrote: > >> > >> The vast majority of RDO's CI relies on using upstream > >> installation/deployment projects in order to test > installation > >> of RDO > >> packages in different ways and configurations. 
> >> > >> Unless I'm mistaken, TripleO Quickstart was originally > created > >> as a > >> mean to "easily" install TripleO in different topologies > without > >> requiring a massive amount of hardware. > >> This project allows us to test TripleO in virtual deployments > >> on just > >> one server instead of, say, 6. > >> > >> There's also WeIRDO [1] which was left out of your list. > >> WeIRDO is super simple and simply aims to run upstream gate > >> jobs (such > >> as puppet-openstack-integration [2][3] and packstack [4][5]) > >> outside > >> of the gate. > >> It'll install dependencies that are expected to be there > (i.e, > >> usually > >> set up by the openstack-infra gate preparation jobs), set up > >> the trunk > >> repositories we're interested in testing and the rest is > >> handled by > >> the upstream project testing framework. > >> > >> The WeIRDO project is /very/ low maintenance and brings an > >> exceptional > >> amount of coverage and value. > >> This coverage is important because RDO provides OpenStack > >> packages or > >> projects that are not necessarily used by TripleO and the > >> reality is > >> that not everyone deploying OpenStack on CentOS with RDO will > >> be using > >> TripleO. > >> > >> Anyway, sorry for sidetracking but back to the topic, > thanks for > >> opening the discussion. > >> > >> What honestly perplexes me is the situation of CI in RDO > and OSP, > >> especially around TripleO/Director, is the amount of work > that is > >> spent downstream. > >> And by downstream, here, I mean anything that isn't in > TripleO > >> proper. > >> > >> I keep dreaming about how awesome upstream TripleO CI > would be > >> if all > >> that effort was spent directly there instead -- and then that > >> all work > >> could bear fruit and trickle down downstream for free. > >> Exactly like how we keep improving the testing coverage in > >> puppet-openstack-integration, it's automatically pulled > in RDO CI > >> through WeIRDO for free. > >> We make the upstream better and we benefit from it > simultaneously: > >> everyone wins. > >> > >> [1]: https://github.com/rdo-infra/weirdo > >> [2]: > >> > https://github.com/rdo-infra/ansible-role-weirdo-puppet-openstack > >> [3]: > >> > https://github.com/openstack/puppet-openstack-integration#description > >> [4]: > https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> [5]: > >> > https://github.com/openstack/packstack#packstack-integration-tests > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> > >> On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > >> > >> wrote: > >>> Hi, > >>> > >>> I would like to start a discussion on the overlap between > >>> tools we > >>> have for deploying and testing TripleO (RDO & RHOSP) in CI. > >>> > >>> Several months ago, we worked on one common framework for > >>> deploying > >>> and testing OpenStack (RDO & RHOSP) in CI. I think you > can say it > >>> didn't work out well, which eventually led each group to > focus on > >>> developing other existing/new tools. > >>> > >>> What we have right now for deploying and testing > >>> -------------------------------------------------------- > >>> === Component CI, Gating === > >>> I'll start with the projects we created, I think that's only > >>> fair :) > >>> > >>> * Ansible-OVB[1] - Provisioning Tripleo heat stack, > using the > >>> OVB project. 
> >>> > >>> * Ansible-RHOSP[2] - Product installation (RHOSP). > Branch per > >>> release. > >>> > >>> * Octario[3] - Testing using RPMs (pep8, unit, functional, > >>> tempest, > >>> csit) + Patching RPMs with submitted code. > >>> > >>> === Automation, QE === > >>> * InfraRed[4] - provision install and test. Pluggable and > >>> modular, > >>> allows you to create your own provisioner, installer and > tester. > >>> > >>> As far as I know, the groups is working now on different > >>> structure of > >>> one main project and three sub projects (provision, install > >>> and test). > >>> > >>> === RDO === > >>> I didn't use RDO tools, so I apologize if I got > something wrong: > >>> > >>> * About ~25 micro independent Ansible roles[5]. You can > >>> either choose > >>> to use one of them or several together. They are used for > >>> provisioning, installing and testing Tripleo. > >>> > >>> * Tripleo-quickstart[6] - uses the micro roles for deploying > >>> tripleo > >>> and test it. > >>> > >>> As I said, I didn't use the tools, so feel free to add more > >>> information you think is relevant. > >>> > >>> === More? === > >>> I hope not. Let us know if are familiar with more tools. > >>> > >>> Conclusion > >>> -------------- > >>> So as you can see, there are several projects that > eventually > >>> overlap > >>> in many areas. Each group is basically using the same tasks > >>> (provision > >>> resources, build/import overcloud images, run tempest, > >>> collect logs, > >>> etc.) > >>> > >>> Personally, I think it's a waste of resources. For each task > >>> there is > >>> at least two people from different groups who work on > exactly > >>> the same > >>> task. The most recent example I can give is OVB. As far as I > >>> know, > >>> both groups are working on implementing it in their set of > >>> tools right > >>> now. > >>> > >>> On the other hand, you can always claim: "we already > tried to > >>> work on > >>> the same framework, we failed to do it successfully" - > right, but > >>> maybe with better ground rules we can manage it. We would > >>> defiantly > >>> benefit a lot from doing that. > >>> > >>> What's next? > >>> ---------------- > >>> So first of all, I would like to hear from you if you think > >>> that we > >>> can collaborate once again or is it actually better to keep > >>> it as it > >>> is now. > >>> > >>> If you agree that collaboration here makes sense, maybe you > >>> have ideas > >>> on how we can do it better this time. > >>> > >>> I think that setting up a meeting to discuss the right > >>> architecture > >>> for the project(s) and decide on good review/gating process, > >>> would be > >>> a good start. > >>> > >>> Please let me know what do you think and keep in mind that > >>> this is not > >>> about which tool is better!. As you can see I didn't mention > >>> the time > >>> it takes for each tool to deploy and test, and also not > the full > >>> feature list it supports. > >>> If possible, we should keep it about collaborating and not > >>> choosing > >>> the best tool. Our solution could be the combination of two > >>> or more > >>> tools eventually (tripleo-red, infra-quickstart? :D ) > >>> > >>> "You may say I'm a dreamer, but I'm not the only one. 
I hope > >>> some day > >>> you'll join us and the infra will be as one" :) > >>> > >>> [1] https://github.com/redhat-openstack/ansible-ovb > >>> [2] https://github.com/redhat-openstack/ansible-rhosp > >>> [3] https://github.com/redhat-openstack/octario > >>> [4] https://github.com/rhosqeauto/InfraRed > >>> [5] > >>> > https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role > >>> [6] https://github.com/openstack/tripleo-quickstart > >>> > >>> _______________________________________________ > >>> rdo-list mailing list > >>> rdo-list at redhat.com > > > >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > > > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From pgsousa at gmail.com Mon Aug 1 23:59:22 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 2 Aug 2016 00:59:22 +0100 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> Message-ID: I was pretty sure I that was using cbs pre-built images and they had delorean repos but I'll check again tomorrow. Also, is epel still needed? Because in undercloud I'm not using it. Thanks On Tue, Aug 2, 2016 at 12:48 AM, Graeme Gillies wrote: > On 02/08/16 09:40, Pedro Sousa wrote: > > HI Graeme, > > > > I'm more interested in following RHEL OSP approach, so I install both > > Director or TripleO depending on my customers, but I'll take a close > > look on Kolla. > > Sure thing. To be clear here, we are very interested in containers and > looking at how they can currently fit into TripleO/Director. Containers > are an amazing technology, but come with a lots of pros and cons for > Operators, so we want to make sure we are considering them in a fashion > that makes the most sense. Kolla is a great way to use them now if you > are just purely interested in looking at a container based deployment. > > > > > I have a question (I know it's out of scope) but if you can answer me, I > > appreciate it. For overcloud nodes images do we still need delorean > > repos? Or can we just install a clean Centos image with SIG > > repositories? I want the images to be as stable as possible. > > For my stable RDO deployments I make sure not to use any delorean repos, > and I would expect that would be the same for most people. > > You have two ways of doing this. 
Either using the pre-built stable rdo > images at > > > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ > > Which is preferred as they are what have been tested and validated by us. > > If you wish to build your own overcloud images using stable pacakges > only, you need to do the following > > Manually patch the undercloud to build overcloud images using > rhos-release rpm only (which utilises the stable Mitaka repo from > CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying > the file > > /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py > > At around line 467 you will see a reference to epel, I add a new line > after that to include the rdo_release DIB element to the build as well. > This typically makes the file look something like > > http://paste.openstack.org/show/508196/ > > (note like 468). Then I create a directory to store my images and build > them specifying the mitaka version of rdo_release. I then upload these > images > > # mkdir ~/images > # cd ~/images > # export RDO_RELEASE=mitaka > # openstack overcloud image build --all > # openstack overcloud image upload --update-existing > > I'm not sure if someone else can shed some light on an easier way to > build overcloud images with rdo-release and not delorean. > > Hope this helps, > > Regards, > > Graeme > > > > > Thanks > > > > > > > > > > > > > > > > On Tue, Aug 2, 2016 at 12:27 AM, Graeme Gillies > > wrote: > > > > On 02/08/16 08:01, Pedro Sousa wrote: > > > My 2 cents here as an operator/integrator, since I've been using > the > > > CentOS SIG repositories (mitaka) and following the RHEL Oficial > > > Documentation, I've managed to install several baremetal tripleo > based > > > clouds with success. I've not tried tripleo quickstart, > > > > > > I've also tried Fuel in the past and it works pretty well with the > > > plugin architecture and the network validation among other things, > but > > > still I prefer tripleo, it gives me more flexibility to setup the > > > network the way I want it, and using ironic to provision the > baremetal > > > hosts is pretty cool too. Also personally I prefer to use Centos > than > > > Ubuntu as O.S base system, I find it more stable. > > > > > > Still tripleo lacks the ease of installation that Fuel has, and an > UI > > > would be great. Also, I'm not sure that using heat templates is > the best > > > approach, specially when someone makes a mistake editing the yaml > files > > > and stack returns an error. This could happen when you try to > update the > > > overcloud nodes, scaling the compute nodes for example. It's not > easy to > > > revert the heat stack when you make a mistake. > > > > > > There's a lot of room to improve, specially in terms of complexity > of > > > installation and update. Maybe containers (kolla) could be a good > > > approach in the future? > > > > Hi Pedro, > > > > As an Operator and long time user of TripleO, I sympathise with you > that > > the combination of heat templates and puppet are difficult to learn > and > > don't have the mature tooling to help you understand and test how > > changes to the code will reflect in the real environment. > > > > One thing I will point out is that if you do a stack update that > fails, > > more often than not it's not the end of the world. If you go on your > > controller plane and make sure pacemaker and all services are up and > > running, the state of the stack in heat on the undercloud doesn't > really > > matter as much. 
> > > > This way we always try to "fail forward". If we do a bad stack > update, > > we make sure the environment is stable again, and then push a new > stack > > update with the fixes. > > > > Having a staging or test environment you can utilise to perform > changes > > on first in order to identify these problems is also beneficial. We > try > > and get all our Operators to have a virtual tripleo setup on a > desktop > > for them to "develop" changes on, as well as a shared staging > > environments to do final testing of any proposed change before > rolling > > into production. > > > > If you are interested in understanding our full development/rollout > > process I can go into more detail. > > > > Also kolla already supports centos/RDO, so you can head over to the > > kolla project and follow their documentation if you are interested in > > giving it a go. You are able to use Centos and RDO with containers > right > > now, no need to wait for anything in the future. > > > > Regards, > > > > Graeme > > > > > > > > > > > > > > > > > > > > > > > > > > On Mon, Aug 1, 2016 at 9:45 PM, Mohammed Arafa < > mohammed.arafa at gmail.com > > > >> > wrote: > > > > > > I too am an end user and have a similar story. I had tried > packstack > > > all in one but when it was time to deploy to actual servers I > looked > > > to Ubuntu Maas. It was buggy so after a month or so of several > > > attempts I went to RDO. And was happy when I had my > environment up. > > > But it was not reproducible. I spent months trying. And > finally I > > > looked elsewhere and was told fuel. > > > With fuel I have ha and ceph and live migration with in 2 > hours. And > > > repeatable too > > > > > > And yes. When tripleo quick start showed up. I did not even > look at > > > it. Information overload? Too much time spent evaluating and > too > > > little building something productive? And now I hear of even > more. > > > > > > In honesty with the rename of RDO to triple o is there any > need for > > > an installer? > > > > > > /outburst over > > > > > > > > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" > > > >> > > wrote: > > > > > > If we are talking about tools, I would also want to add > > > something with regards to user interface of these tools. > > This is > > > based on my own experience: > > > > > > I started trying to deploy Openstack with Staypuft and The > > > Foreman. The UI of The Foreman was intuitive enough for the > > > discovery and provisioning of the servers. The OpenStack > > > portion, not so much. > > > > > > Forward a couple of releases and we had a TripleO GUI > > (Tuskar, I > > > believe) that allowed you to graphically build your > Openstack > > > cloud. That was a reasonable good GUI for Openstack. > > > > > > Following that, TripleO become a script based installer, > that > > > required experience in Heat templates. I know I didn?t > have it > > > and had to ask in the mailing list about how to present > > this or > > > change that. I got a couple of installs working with this > > setup. > > > > > > In the last session in Austin, my goal was to obtain > > information > > > on how others were installing Openstack. I was pointed to > Fuel > > > as an alternative. I tried it up, and it just worked. It > > had the > > > discovering capability from The Foreman, and the > configuration > > > options from TripleO. I understand that is based in > > Ansible and > > > because of that, it is not fully CentOS ready for all the > > nodes > > > (at least not in version 9 that I tried). 
In any case, as a > > > deployer and installer, it is the most well rounded tool > > that I > > > found. > > > > > > I?d love to see RDO moving into that direction, and having > an > > > easy to use, end user ready deployer tool. > > > > > > IB > > > > > > > > > __ > > > Ignacio Bravo > > > LTG Federal, Inc > > > www.ltgfederal.com > > > > > > > > > > >> On Aug 1, 2016, at 1:07 PM, David Moreau Simard > > >> > > >> wrote: > > >> > > >> The vast majority of RDO's CI relies on using upstream > > >> installation/deployment projects in order to test > > installation > > >> of RDO > > >> packages in different ways and configurations. > > >> > > >> Unless I'm mistaken, TripleO Quickstart was originally > > created > > >> as a > > >> mean to "easily" install TripleO in different topologies > > without > > >> requiring a massive amount of hardware. > > >> This project allows us to test TripleO in virtual > deployments > > >> on just > > >> one server instead of, say, 6. > > >> > > >> There's also WeIRDO [1] which was left out of your list. > > >> WeIRDO is super simple and simply aims to run upstream > gate > > >> jobs (such > > >> as puppet-openstack-integration [2][3] and packstack > [4][5]) > > >> outside > > >> of the gate. > > >> It'll install dependencies that are expected to be there > > (i.e, > > >> usually > > >> set up by the openstack-infra gate preparation jobs), set > up > > >> the trunk > > >> repositories we're interested in testing and the rest is > > >> handled by > > >> the upstream project testing framework. > > >> > > >> The WeIRDO project is /very/ low maintenance and brings an > > >> exceptional > > >> amount of coverage and value. > > >> This coverage is important because RDO provides OpenStack > > >> packages or > > >> projects that are not necessarily used by TripleO and the > > >> reality is > > >> that not everyone deploying OpenStack on CentOS with RDO > will > > >> be using > > >> TripleO. > > >> > > >> Anyway, sorry for sidetracking but back to the topic, > > thanks for > > >> opening the discussion. > > >> > > >> What honestly perplexes me is the situation of CI in RDO > > and OSP, > > >> especially around TripleO/Director, is the amount of work > > that is > > >> spent downstream. > > >> And by downstream, here, I mean anything that isn't in > > TripleO > > >> proper. > > >> > > >> I keep dreaming about how awesome upstream TripleO CI > > would be > > >> if all > > >> that effort was spent directly there instead -- and then > that > > >> all work > > >> could bear fruit and trickle down downstream for free. > > >> Exactly like how we keep improving the testing coverage in > > >> puppet-openstack-integration, it's automatically pulled > > in RDO CI > > >> through WeIRDO for free. > > >> We make the upstream better and we benefit from it > > simultaneously: > > >> everyone wins. 
From ggillies at redhat.com  Tue Aug  2 00:02:11 2016
From: ggillies at redhat.com (Graeme Gillies)
Date: Tue, 2 Aug 2016 10:02:11 +1000
Subject: [rdo-list] Multiple tools for deploying and testing TripleO
In-Reply-To:
References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com>
Message-ID:

On 02/08/16 09:59, Pedro Sousa wrote:
> I was pretty sure that I was using the cbs pre-built images and they had
> delorean repos, but I'll check again tomorrow.

Please do; if they do have delorean repos we need to get that checked/fixed.

> Also, is EPEL still needed? Because on the undercloud I'm not using it.

My understanding is EPEL is still needed on both the undercloud and the
overcloud, but I would appreciate someone else from the RDO/TripleO team
confirming.

Regards,

Graeme

> Thanks
>
> On Tue, Aug 2, 2016 at 12:48 AM, Graeme Gillies wrote:
>
> On 02/08/16 09:40, Pedro Sousa wrote:
> > Hi Graeme,
> >
> > I'm more interested in following the RHEL OSP approach, so I install
> > either Director or TripleO depending on my customers, but I'll take a
> > close look at Kolla.
>
> Sure thing. To be clear here, we are very interested in containers and
> looking at how they can currently fit into TripleO/Director. Containers
> are an amazing technology, but come with a lot of pros and cons for
> Operators, so we want to make sure we are considering them in a fashion
> that makes the most sense. Kolla is a great way to use them now if you
> are just purely interested in looking at a container-based deployment.
>
> > I have a question (I know it's out of scope) but if you can answer me, I
> > appreciate it. For overcloud node images do we still need delorean
> > repos? Or can we just install a clean CentOS image with SIG
> > repositories? I want the images to be as stable as possible.
>
> For my stable RDO deployments I make sure not to use any delorean repos,
> and I would expect that would be the same for most people.
>
> You have two ways of doing this. Either use the pre-built stable RDO
> images at
>
> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/
>
> which are preferred, as they are what has been tested and validated by us.
>
> If you wish to build your own overcloud images using stable packages
> only, you need to do the following:
>
> Manually patch the undercloud to build overcloud images using the
> rdo-release rpm only (which utilises the stable Mitaka repo from
> CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying
> the file
>
> /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py
>
> At around line 467 you will see a reference to epel; I add a new line
> after that to include the rdo_release DIB element in the build as well.
> This typically makes the file look something like
>
> http://paste.openstack.org/show/508196/
>
> (note line 468). Then I create a directory to store my images, build
> them specifying the mitaka version of rdo_release, and upload the
> images:
>
> # mkdir ~/images
> # cd ~/images
> # export RDO_RELEASE=mitaka
> # openstack overcloud image build --all
> # openstack overcloud image upload --update-existing
>
> I'm not sure if someone else can shed some light on an easier way to
> build overcloud images with rdo-release and not delorean.
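For illustration, the kind of one-line change described above might look
roughly like the sketch below. This is only a sketch under assumptions: the
real element list lives inside tripleoclient's overcloud_image.py (the paste
link above shows the actual file), so the variable name and the surrounding
element names here are placeholders, not the real tripleoclient code.

    # Sketch only -- not the real tripleoclient code. It just illustrates
    # appending an extra diskimage-builder element so the overcloud image is
    # built from the stable rdo-release repos instead of RDO Trunk (delorean).
    image_build_elements = [        # hypothetical name for the element list
        'overcloud-full',           # hypothetical existing entry
        'epel',                     # the existing epel reference mentioned above
    ]
    image_build_elements.append('rdo-release')  # the added stable-repo element
    print(image_build_elements)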
> Hope this helps,
>
> Regards,
>
> Graeme
>
> > Thanks
> >
> > On Tue, Aug 2, 2016 at 12:27 AM, Graeme Gillies wrote:
> >
> > > On 02/08/16 08:01, Pedro Sousa wrote:
> > > My 2 cents here as an operator/integrator: since I've been using the
> > > CentOS SIG repositories (mitaka) and following the RHEL Official
> > > Documentation, I've managed to install several baremetal tripleo-based
> > > clouds with success. I've not tried tripleo-quickstart.
> > >
> > > I've also tried Fuel in the past and it works pretty well with the
> > > plugin architecture and the network validation, among other things, but
> > > I still prefer tripleo; it gives me more flexibility to set up the
> > > network the way I want it, and using ironic to provision the baremetal
> > > hosts is pretty cool too. Also, personally I prefer to use CentOS rather
> > > than Ubuntu as the base OS; I find it more stable.
> > >
> > > Still, tripleo lacks the ease of installation that Fuel has, and a UI
> > > would be great. Also, I'm not sure that using heat templates is the best
> > > approach, especially when someone makes a mistake editing the yaml files
> > > and the stack returns an error. This could happen when you try to update
> > > the overcloud nodes, scaling the compute nodes for example. It's not
> > > easy to revert the heat stack when you make a mistake.
> > >
> > > There's a lot of room to improve, especially in terms of complexity of
> > > installation and update. Maybe containers (kolla) could be a good
> > > approach in the future?
> >
> > Hi Pedro,
> >
> > As an Operator and long time user of TripleO, I sympathise with you that
> > the combination of heat templates and puppet is difficult to learn and
> > doesn't have the mature tooling to help you understand and test how
> > changes to the code will be reflected in the real environment.
> >
> > One thing I will point out is that if you do a stack update that fails,
> > more often than not it's not the end of the world. If you go onto your
> > control plane and make sure pacemaker and all services are up and
> > running, the state of the stack in heat on the undercloud doesn't really
> > matter as much.
> >
> > This way we always try to "fail forward". If we do a bad stack update,
> > we make sure the environment is stable again, and then push a new stack
> > update with the fixes.
> >
> > Having a staging or test environment you can utilise to perform changes
> > on first in order to identify these problems is also beneficial. We try
> > to get all our Operators to have a virtual tripleo setup on a desktop
> > for them to "develop" changes on, as well as a shared staging
> > environment to do final testing of any proposed change before rolling
> > into production.
> >
> > If you are interested in understanding our full development/rollout
> > process I can go into more detail.
> >
> > Also, kolla already supports CentOS/RDO, so you can head over to the
> > kolla project and follow their documentation if you are interested in
> > giving it a go. You are able to use CentOS and RDO with containers right
> > now, no need to wait for anything in the future.
> >
> > Regards,
> >
> > Graeme
> >
> > > On Mon, Aug 1, 2016 at 9:45 PM, Mohammed Arafa wrote:
> > >
> > > I too am an end user and have a similar story. I had tried packstack
> > > all-in-one, but when it was time to deploy to actual servers I looked
> > > to Ubuntu MAAS.
> > > It was buggy, so after a month or so of several attempts I went to
> > > RDO, and was happy when I had my environment up. But it was not
> > > reproducible. I spent months trying. And finally I looked elsewhere
> > > and was told Fuel. With Fuel I have HA and Ceph and live migration
> > > within 2 hours. And repeatable too.
> > >
> > > And yes, when tripleo-quickstart showed up I did not even look at
> > > it. Information overload? Too much time spent evaluating and too
> > > little building something productive? And now I hear of even more.
> > >
> > > In honesty, with the rename of RDO to TripleO, is there any need for
> > > an installer?
> > >
> > > /outburst over
> > >
> > > On Aug 1, 2016 2:01 PM, "Ignacio Bravo" wrote:
> > >
> > > If we are talking about tools, I would also want to add
> > > something with regards to the user interface of these tools. This is
> > > based on my own experience:
> > >
> > > I started trying to deploy Openstack with Staypuft and The
> > > Foreman. The UI of The Foreman was intuitive enough for the
> > > discovery and provisioning of the servers. The OpenStack
> > > portion, not so much.
> > >
> > > Forward a couple of releases and we had a TripleO GUI (Tuskar, I
> > > believe) that allowed you to graphically build your Openstack
> > > cloud. That was a reasonably good GUI for Openstack.
> > >
> > > Following that, TripleO became a script-based installer that
> > > required experience in Heat templates. I know I didn't have it
> > > and had to ask on the mailing list about how to present this or
> > > change that. I got a couple of installs working with this setup.
> > >
> > > In the last session in Austin, my goal was to obtain information
> > > on how others were installing Openstack. I was pointed to Fuel
> > > as an alternative. I tried it out, and it just worked. It had the
> > > discovery capability from The Foreman, and the configuration
> > > options from TripleO. I understand that it is based on Ansible and,
> > > because of that, it is not fully CentOS-ready for all the nodes
> > > (at least not in version 9 that I tried). In any case, as a
> > > deployer and installer, it is the most well-rounded tool that I
> > > found.
> > >
> > > I'd love to see RDO moving in that direction, and having an
> > > easy to use, end-user-ready deployer tool.
> > >
> > > IB
> > >
> > > __
> > > Ignacio Bravo
> > > LTG Federal, Inc
> > > www.ltgfederal.com
--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From cbrown2 at ocf.co.uk  Tue Aug  2 08:12:12 2016
From: cbrown2 at ocf.co.uk (Christopher Brown)
Date: Tue, 2 Aug 2016 09:12:12 +0100
Subject: [rdo-list] Multiple tools for deploying and testing TripleO
In-Reply-To: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com>
References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com>
Message-ID: <1470125532.18770.12.camel@ocf.co.uk>

Hello RDOistas (I think that is the expression?),

Another year, another OpenStack deployment tool. :)

On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote:
> If we are talking about tools, I would also want to add something
> with regards to the user interface of these tools. This is based on my
> own experience:
>
> I started trying to deploy Openstack with Staypuft and The Foreman.
> The UI of The Foreman was intuitive enough for the discovery and
> provisioning of the servers. The OpenStack portion, not so much.

This is exactly my experience also. I think this works really well in very
large enterprise environments where you need to split out services over
more than three controllers. You do need good in-house puppet skills
though, so it is better suited to enterprises with a good sysadmin team.

> Forward a couple of releases and we had a TripleO GUI (Tuskar, I
> believe) that allowed you to graphically build your Openstack cloud.
> That was a reasonably good GUI for Openstack.
Well, I found it barely usable. It was only ever good as a graphical
representation of what the build was doing. Interacting with it was
not great.

> Following that, TripleO became a script-based installer that
> required experience in Heat templates. I know I didn't have it and
> had to ask on the mailing list about how to present this or change
> that. I got a couple of installs working with this setup.

Works well now that I understand all the foibles and have invested time
into understanding heat templates and puppet modules. It's good in that
it forces you to learn about orchestration, which is such an important
end-goal of cloud environments.

> In the last session in Austin, my goal was to obtain information on
> how others were installing Openstack. I was pointed to Fuel as an
> alternative. I tried it out, and it just worked. It had the
> discovery capability from The Foreman, and the configuration
> options from TripleO. I understand that it is based on Ansible and,
> because of that, it is not fully CentOS-ready for all the nodes (at
> least not in version 9 that I tried). In any case, as a deployer and
> installer, it is the most well-rounded tool that I found.

This is interesting to know. I've heard of Fuel of course, but there are
some politics involved - it still has the team:single-vendor tag, but
from what I see Mirantis are very keen for it to become the default
OpenStack installer. I don't think being Ansible-based should be a
problem - we are deploying OpenShift on OpenStack, which uses
openshift-ansible; this recently moved to Ansible 2.1 without too much
disruption.

> I'd love to see RDO moving in that direction, and having an easy to
> use, end-user-ready deployer tool.

If it's as good as you say, it's definitely worth evaluating. From our
point of view, we want to be able to add services to the pacemaker
cluster with some ease - for example Magnum and Sahara - and it looks
like there are steps being taken with regards to composable roles and
simplification of the pacemaker cluster to just core services.

But if someone can explain that better I would appreciate it.

Regards

> IB
>
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
--
Regards,

Christopher Brown
OpenStack Engineer
OCF plc

Tel: +44 (0)114 257 2200
Web: www.ocf.co.uk
Blog: blog.ocf.co.uk
Twitter: @ocfplc

Please note, any emails relating to an OCF Support request must always
be sent to support at ocf.co.uk for a ticket number to be generated or
existing support ticket to be updated.
Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner. OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. From abregman at redhat.com Tue Aug 2 08:58:25 2016 From: abregman at redhat.com (Arie Bregman) Date: Tue, 2 Aug 2016 11:58:25 +0300 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: <1470125532.18770.12.camel@ocf.co.uk> References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: It became a discussion around the official installer and how to improve it. While it's an important discussion, no doubt, I actually want to focus on our automation and CI tools. Since I see there is an agreement that collaboration does make sense here, let's move to the hard questions :) Wes, Tal - there is huge difference right now between infrared and tripleo-quickstart in their structure. One is all-in-one project and the other one is multiple micro projects managed by one project. Do you think there is a way to consolidate or move to a different model which will make sense for both RDO and RHOSP? something that both groups can work on. Raoul - I totally agree with you, especially with "difficult for anyone to start contributing and collaborate". This is exactly why this discussion started. If we can agree on one set of tools, it will make everyone's life easier - current groups, new contributors, folks that just want to deploy TripleO quickly. But I'm afraid some sacrifices need to be made by both groups. David - I thought WeiRDO is used only for packstack, so I apologize I didn't include it. It does sound like an anther testing project, is there a place to merge it with another existing testing project? like Octario for example or one of TripleO testing projects. Or does it make sense to keep it a standalone project? On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown wrote: > Hello RDOistas (I think that is the expression?), > > Another year, another OpenStack deployment tool. :) > > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: >> If we are talking about tools, I would also want to add something >> with regards to user interface of these tools. This is based on my >> own experience: >> >> I started trying to deploy Openstack with Staypuft and The Foreman. >> The UI of The Foreman was intuitive enough for the discovery and >> provisioning of the servers. The OpenStack portion, not so much. > > This is exactly mine also. I think this works really well in very large > enterprise environments where you need to split out services over more > than three controllers. You do need good in-house puppet skills though > so better for enterprise with a good sysadmin team. > >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I >> believe) that allowed you to graphically build your Openstack cloud. >> That was a reasonable good GUI for Openstack. > > Well, I found it barely usable. It was only ever good as a graphical > representiation of what the build was doing. Interacting with it was > not great. > >> Following that, TripleO become a script based installer, that >> required experience in Heat templates. 
I know I didn?t have it and >> had to ask in the mailing list about how to present this or change >> that. I got a couple of installs working with this setup. > > Works well now that I understand all the foibles and have invested time > into understanding heat templates and puppet modules. Its good in that > it forces you to learn about orchestration which is such an important > end-goal of cloud environments. > >> In the last session in Austin, my goal was to obtain information on >> how others were installing Openstack. I was pointed to Fuel as an >> alternative. I tried it up, and it just worked. It had the >> discovering capability from The Foreman, and the configuration >> options from TripleO. I understand that is based in Ansible and >> because of that, it is not fully CentOS ready for all the nodes (at >> least not in version 9 that I tried). In any case, as a deployer and >> installer, it is the most well rounded tool that I found. > > This is interesting to know. I've heard of Fuel of course but there are > some politics involved - it still has the team:single-vendor tag but > from what I see Mirantis are very keen for it to become the default > OpenStack installer. I don't think being Ansible-based should be a > problem - we are deploying OpenShift on OpenStack which uses Openshift- > ansible - this recently moved to Ansible 2.1 without too much > disruption. > >> I?d love to see RDO moving into that direction, and having an easy to >> use, end user ready deployer tool. > > If its as good as you say its definitely worth evaluating. From our > point of view, we want to be able to add services to the pacemaker > cluster with some ease - for example Magnum and Sahara - and it looks > like there are steps being taken with regards to composable roles and > simplification of the pacemaker cluster to just core services. > > But if someone can explain that better I would appreciate it. > > Regards > >> IB >> >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard >> > wrote: >> > >> > The vast majority of RDO's CI relies on using upstream >> > installation/deployment projects in order to test installation of >> > RDO >> > packages in different ways and configurations. >> > >> > Unless I'm mistaken, TripleO Quickstart was originally created as a >> > mean to "easily" install TripleO in different topologies without >> > requiring a massive amount of hardware. >> > This project allows us to test TripleO in virtual deployments on >> > just >> > one server instead of, say, 6. >> > >> > There's also WeIRDO [1] which was left out of your list. >> > WeIRDO is super simple and simply aims to run upstream gate jobs >> > (such >> > as puppet-openstack-integration [2][3] and packstack [4][5]) >> > outside >> > of the gate. >> > It'll install dependencies that are expected to be there (i.e, >> > usually >> > set up by the openstack-infra gate preparation jobs), set up the >> > trunk >> > repositories we're interested in testing and the rest is handled by >> > the upstream project testing framework. >> > >> > The WeIRDO project is /very/ low maintenance and brings an >> > exceptional >> > amount of coverage and value. >> > This coverage is important because RDO provides OpenStack packages >> > or >> > projects that are not necessarily used by TripleO and the reality >> > is >> > that not everyone deploying OpenStack on CentOS with RDO will be >> > using >> > TripleO. 
>> > >> > Anyway, sorry for sidetracking but back to the topic, thanks for >> > opening the discussion. >> > >> > What honestly perplexes me is the situation of CI in RDO and OSP, >> > especially around TripleO/Director, is the amount of work that is >> > spent downstream. >> > And by downstream, here, I mean anything that isn't in TripleO >> > proper. >> > >> > I keep dreaming about how awesome upstream TripleO CI would be if >> > all >> > that effort was spent directly there instead -- and then that all >> > work >> > could bear fruit and trickle down downstream for free. >> > Exactly like how we keep improving the testing coverage in >> > puppet-openstack-integration, it's automatically pulled in RDO CI >> > through WeIRDO for free. >> > We make the upstream better and we benefit from it simultaneously: >> > everyone wins. >> > >> > [1]: https://github.com/rdo-infra/weirdo >> > [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst >> > ack >> > [3]: https://github.com/openstack/puppet-openstack-integration#desc >> > ription >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> > [5]: https://github.com/openstack/packstack#packstack-integration-t >> > ests >> > >> > David Moreau Simard >> > Senior Software Engineer | Openstack RDO >> > >> > dmsimard = [irc, github, twitter] >> > >> > David Moreau Simard >> > Senior Software Engineer | Openstack RDO >> > >> > dmsimard = [irc, github, twitter] >> > >> > >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman >> > wrote: >> > > Hi, >> > > >> > > I would like to start a discussion on the overlap between tools >> > > we >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. >> > > >> > > Several months ago, we worked on one common framework for >> > > deploying >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >> > > didn't work out well, which eventually led each group to focus on >> > > developing other existing/new tools. >> > > >> > > What we have right now for deploying and testing >> > > -------------------------------------------------------- >> > > === Component CI, Gating === >> > > I'll start with the projects we created, I think that's only fair >> > > :) >> > > >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB >> > > project. >> > > >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >> > > release. >> > > >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, >> > > tempest, >> > > csit) + Patching RPMs with submitted code. >> > > >> > > === Automation, QE === >> > > * InfraRed[4] - provision install and test. Pluggable and >> > > modular, >> > > allows you to create your own provisioner, installer and tester. >> > > >> > > As far as I know, the groups is working now on different >> > > structure of >> > > one main project and three sub projects (provision, install and >> > > test). >> > > >> > > === RDO === >> > > I didn't use RDO tools, so I apologize if I got something wrong: >> > > >> > > * About ~25 micro independent Ansible roles[5]. You can either >> > > choose >> > > to use one of them or several together. They are used for >> > > provisioning, installing and testing Tripleo. >> > > >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying >> > > tripleo >> > > and test it. >> > > >> > > As I said, I didn't use the tools, so feel free to add more >> > > information you think is relevant. >> > > >> > > === More? === >> > > I hope not. Let us know if are familiar with more tools. 
>> > > >> > > Conclusion >> > > -------------- >> > > So as you can see, there are several projects that eventually >> > > overlap >> > > in many areas. Each group is basically using the same tasks >> > > (provision >> > > resources, build/import overcloud images, run tempest, collect >> > > logs, >> > > etc.) >> > > >> > > Personally, I think it's a waste of resources. For each task >> > > there is >> > > at least two people from different groups who work on exactly the >> > > same >> > > task. The most recent example I can give is OVB. As far as I >> > > know, >> > > both groups are working on implementing it in their set of tools >> > > right >> > > now. >> > > >> > > On the other hand, you can always claim: "we already tried to >> > > work on >> > > the same framework, we failed to do it successfully" - right, but >> > > maybe with better ground rules we can manage it. We would >> > > defiantly >> > > benefit a lot from doing that. >> > > >> > > What's next? >> > > ---------------- >> > > So first of all, I would like to hear from you if you think that >> > > we >> > > can collaborate once again or is it actually better to keep it as >> > > it >> > > is now. >> > > >> > > If you agree that collaboration here makes sense, maybe you have >> > > ideas >> > > on how we can do it better this time. >> > > >> > > I think that setting up a meeting to discuss the right >> > > architecture >> > > for the project(s) and decide on good review/gating process, >> > > would be >> > > a good start. >> > > >> > > Please let me know what do you think and keep in mind that this >> > > is not >> > > about which tool is better!. As you can see I didn't mention the >> > > time >> > > it takes for each tool to deploy and test, and also not the full >> > > feature list it supports. >> > > If possible, we should keep it about collaborating and not >> > > choosing >> > > the best tool. Our solution could be the combination of two or >> > > more >> > > tools eventually (tripleo-red, infra-quickstart? :D ) >> > > >> > > "You may say I'm a dreamer, but I'm not the only one. I hope some >> > > day >> > > you'll join us and the infra will be as one" :) >> > > >> > > [1] https://github.com/redhat-openstack/ansible-ovb >> > > [2] https://github.com/redhat-openstack/ansible-rhosp >> > > [3] https://github.com/redhat-openstack/octario >> > > [4] https://github.com/rhosqeauto/InfraRed >> > > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi >> > > ble-role >> > > [6] https://github.com/openstack/tripleo-quickstart >> > > >> > > _______________________________________________ >> > > rdo-list mailing list >> > > rdo-list at redhat.com >> > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > -- > Regards, > > Christopher Brown > OpenStack Engineer > OCF plc > > Tel: +44 (0)114 257 2200 > Web: www.ocf.co.uk > Blog: blog.ocf.co.uk > Twitter: @ocfplc > > Please note, any emails relating to an OCF Support request must always > be sent to support at ocf.co.uk for a ticket number to be generated or > existing support ticket to be updated. Should this not be done then OCF > cannot be held responsible for requests not dealt with in a timely > manner. > > OCF plc is a company registered in England and Wales. 
Registered number > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > This message is private and confidential. If you have received this > message in error, please notify us immediately and remove it from your > system. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Arie Bregman Red Hat Israel Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview From whayutin at redhat.com Tue Aug 2 12:53:56 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 2 Aug 2016 08:53:56 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman wrote: > It became a discussion around the official installer and how to > improve it. While it's an important discussion, no doubt, I actually > want to focus on our automation and CI tools. > > Since I see there is an agreement that collaboration does make sense > here, let's move to the hard questions :) > > Wes, Tal - there is huge difference right now between infrared and > tripleo-quickstart in their structure. One is all-in-one project and > the other one is multiple micro projects managed by one project. Do > you think there is a way to consolidate or move to a different model > which will make sense for both RDO and RHOSP? something that both > groups can work on. > I am happy to be part of the discussion, and I am also very willing to help and try to drive suggestions to the tripleo-quickstart community. I need to make a point clear though, just to make sure we're on the same page.. I do not own oooq, I am not a core on oooq. I can help facilitate a discussion but oooq is an upstream tripleo tool that replaces instack-virt-setup [1]. It also happens to be a great tool for easily deploying TripleO end to end [3] What I *can* do is show everyone how to manipulate tripleo-quickstart and customize it with composable ansible roles, templates, settings etc.. This would allow any upstream or downstream project to override the native oooq roles and *any* step that does not work for another group w/ 3rd party roles [2]. These 3rd party roles can be free and opensource or internal only, it works either way. This was discussed in depth as part of the production chain meetings, the message may have been lost unfortunately. I hope this resets your expectations of what I can and can not do as part of these discussions. Let me know where and when and I'm happy to be part of the discussion. Thanks [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart [2] https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo [3[ https://www.rdoproject.org/tripleo/ > > Raoul - I totally agree with you, especially with "difficult for > anyone to start contributing and collaborate". This is exactly why > this discussion started. If we can agree on one set of tools, it will > make everyone's life easier - current groups, new contributors, folks > that just want to deploy TripleO quickly. But I'm afraid some > sacrifices need to be made by both groups. > > David - I thought WeiRDO is used only for packstack, so I apologize I > didn't include it. 
It does sound like an anther testing project, is > there a place to merge it with another existing testing project? like > Octario for example or one of TripleO testing projects. Or does it > make sense to keep it a standalone project? > > > > > On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown > wrote: > > Hello RDOistas (I think that is the expression?), > > > > Another year, another OpenStack deployment tool. :) > > > > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: > >> If we are talking about tools, I would also want to add something > >> with regards to user interface of these tools. This is based on my > >> own experience: > >> > >> I started trying to deploy Openstack with Staypuft and The Foreman. > >> The UI of The Foreman was intuitive enough for the discovery and > >> provisioning of the servers. The OpenStack portion, not so much. > > > > This is exactly mine also. I think this works really well in very large > > enterprise environments where you need to split out services over more > > than three controllers. You do need good in-house puppet skills though > > so better for enterprise with a good sysadmin team. > > > >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I > >> believe) that allowed you to graphically build your Openstack cloud. > >> That was a reasonable good GUI for Openstack. > > > > Well, I found it barely usable. It was only ever good as a graphical > > representiation of what the build was doing. Interacting with it was > > not great. > > > >> Following that, TripleO become a script based installer, that > >> required experience in Heat templates. I know I didn?t have it and > >> had to ask in the mailing list about how to present this or change > >> that. I got a couple of installs working with this setup. > > > > Works well now that I understand all the foibles and have invested time > > into understanding heat templates and puppet modules. Its good in that > > it forces you to learn about orchestration which is such an important > > end-goal of cloud environments. > > > >> In the last session in Austin, my goal was to obtain information on > >> how others were installing Openstack. I was pointed to Fuel as an > >> alternative. I tried it up, and it just worked. It had the > >> discovering capability from The Foreman, and the configuration > >> options from TripleO. I understand that is based in Ansible and > >> because of that, it is not fully CentOS ready for all the nodes (at > >> least not in version 9 that I tried). In any case, as a deployer and > >> installer, it is the most well rounded tool that I found. > > > > This is interesting to know. I've heard of Fuel of course but there are > > some politics involved - it still has the team:single-vendor tag but > > from what I see Mirantis are very keen for it to become the default > > OpenStack installer. I don't think being Ansible-based should be a > > problem - we are deploying OpenShift on OpenStack which uses Openshift- > > ansible - this recently moved to Ansible 2.1 without too much > > disruption. > > > >> I?d love to see RDO moving into that direction, and having an easy to > >> use, end user ready deployer tool. > > > > If its as good as you say its definitely worth evaluating. From our > > point of view, we want to be able to add services to the pacemaker > > cluster with some ease - for example Magnum and Sahara - and it looks > > like there are steps being taken with regards to composable roles and > > simplification of the pacemaker cluster to just core services. 
> > > > But if someone can explain that better I would appreciate it. > > > > Regards > > > >> IB > >> > >> > >> __ > >> Ignacio Bravo > >> LTG Federal, Inc > >> www.ltgfederal.com > >> > >> > >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard > >> > wrote: > >> > > >> > The vast majority of RDO's CI relies on using upstream > >> > installation/deployment projects in order to test installation of > >> > RDO > >> > packages in different ways and configurations. > >> > > >> > Unless I'm mistaken, TripleO Quickstart was originally created as a > >> > mean to "easily" install TripleO in different topologies without > >> > requiring a massive amount of hardware. > >> > This project allows us to test TripleO in virtual deployments on > >> > just > >> > one server instead of, say, 6. > >> > > >> > There's also WeIRDO [1] which was left out of your list. > >> > WeIRDO is super simple and simply aims to run upstream gate jobs > >> > (such > >> > as puppet-openstack-integration [2][3] and packstack [4][5]) > >> > outside > >> > of the gate. > >> > It'll install dependencies that are expected to be there (i.e, > >> > usually > >> > set up by the openstack-infra gate preparation jobs), set up the > >> > trunk > >> > repositories we're interested in testing and the rest is handled by > >> > the upstream project testing framework. > >> > > >> > The WeIRDO project is /very/ low maintenance and brings an > >> > exceptional > >> > amount of coverage and value. > >> > This coverage is important because RDO provides OpenStack packages > >> > or > >> > projects that are not necessarily used by TripleO and the reality > >> > is > >> > that not everyone deploying OpenStack on CentOS with RDO will be > >> > using > >> > TripleO. > >> > > >> > Anyway, sorry for sidetracking but back to the topic, thanks for > >> > opening the discussion. > >> > > >> > What honestly perplexes me is the situation of CI in RDO and OSP, > >> > especially around TripleO/Director, is the amount of work that is > >> > spent downstream. > >> > And by downstream, here, I mean anything that isn't in TripleO > >> > proper. > >> > > >> > I keep dreaming about how awesome upstream TripleO CI would be if > >> > all > >> > that effort was spent directly there instead -- and then that all > >> > work > >> > could bear fruit and trickle down downstream for free. > >> > Exactly like how we keep improving the testing coverage in > >> > puppet-openstack-integration, it's automatically pulled in RDO CI > >> > through WeIRDO for free. > >> > We make the upstream better and we benefit from it simultaneously: > >> > everyone wins. > >> > > >> > [1]: https://github.com/rdo-infra/weirdo > >> > [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst > >> > ack > >> > [3]: https://github.com/openstack/puppet-openstack-integration#desc > >> > ription > >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> > [5]: https://github.com/openstack/packstack#packstack-integration-t > >> > ests > >> > > >> > David Moreau Simard > >> > Senior Software Engineer | Openstack RDO > >> > > >> > dmsimard = [irc, github, twitter] > >> > > >> > David Moreau Simard > >> > Senior Software Engineer | Openstack RDO > >> > > >> > dmsimard = [irc, github, twitter] > >> > > >> > > >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > >> > wrote: > >> > > Hi, > >> > > > >> > > I would like to start a discussion on the overlap between tools > >> > > we > >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. 
> >> > > > >> > > Several months ago, we worked on one common framework for > >> > > deploying > >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > >> > > didn't work out well, which eventually led each group to focus on > >> > > developing other existing/new tools. > >> > > > >> > > What we have right now for deploying and testing > >> > > -------------------------------------------------------- > >> > > === Component CI, Gating === > >> > > I'll start with the projects we created, I think that's only fair > >> > > :) > >> > > > >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB > >> > > project. > >> > > > >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per > >> > > release. > >> > > > >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, > >> > > tempest, > >> > > csit) + Patching RPMs with submitted code. > >> > > > >> > > === Automation, QE === > >> > > * InfraRed[4] - provision install and test. Pluggable and > >> > > modular, > >> > > allows you to create your own provisioner, installer and tester. > >> > > > >> > > As far as I know, the groups is working now on different > >> > > structure of > >> > > one main project and three sub projects (provision, install and > >> > > test). > >> > > > >> > > === RDO === > >> > > I didn't use RDO tools, so I apologize if I got something wrong: > >> > > > >> > > * About ~25 micro independent Ansible roles[5]. You can either > >> > > choose > >> > > to use one of them or several together. They are used for > >> > > provisioning, installing and testing Tripleo. > >> > > > >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying > >> > > tripleo > >> > > and test it. > >> > > > >> > > As I said, I didn't use the tools, so feel free to add more > >> > > information you think is relevant. > >> > > > >> > > === More? === > >> > > I hope not. Let us know if are familiar with more tools. > >> > > > >> > > Conclusion > >> > > -------------- > >> > > So as you can see, there are several projects that eventually > >> > > overlap > >> > > in many areas. Each group is basically using the same tasks > >> > > (provision > >> > > resources, build/import overcloud images, run tempest, collect > >> > > logs, > >> > > etc.) > >> > > > >> > > Personally, I think it's a waste of resources. For each task > >> > > there is > >> > > at least two people from different groups who work on exactly the > >> > > same > >> > > task. The most recent example I can give is OVB. As far as I > >> > > know, > >> > > both groups are working on implementing it in their set of tools > >> > > right > >> > > now. > >> > > > >> > > On the other hand, you can always claim: "we already tried to > >> > > work on > >> > > the same framework, we failed to do it successfully" - right, but > >> > > maybe with better ground rules we can manage it. We would > >> > > defiantly > >> > > benefit a lot from doing that. > >> > > > >> > > What's next? > >> > > ---------------- > >> > > So first of all, I would like to hear from you if you think that > >> > > we > >> > > can collaborate once again or is it actually better to keep it as > >> > > it > >> > > is now. > >> > > > >> > > If you agree that collaboration here makes sense, maybe you have > >> > > ideas > >> > > on how we can do it better this time. > >> > > > >> > > I think that setting up a meeting to discuss the right > >> > > architecture > >> > > for the project(s) and decide on good review/gating process, > >> > > would be > >> > > a good start. 
> >> > > > >> > > Please let me know what do you think and keep in mind that this > >> > > is not > >> > > about which tool is better!. As you can see I didn't mention the > >> > > time > >> > > it takes for each tool to deploy and test, and also not the full > >> > > feature list it supports. > >> > > If possible, we should keep it about collaborating and not > >> > > choosing > >> > > the best tool. Our solution could be the combination of two or > >> > > more > >> > > tools eventually (tripleo-red, infra-quickstart? :D ) > >> > > > >> > > "You may say I'm a dreamer, but I'm not the only one. I hope some > >> > > day > >> > > you'll join us and the infra will be as one" :) > >> > > > >> > > [1] https://github.com/redhat-openstack/ansible-ovb > >> > > [2] https://github.com/redhat-openstack/ansible-rhosp > >> > > [3] https://github.com/redhat-openstack/octario > >> > > [4] https://github.com/rhosqeauto/InfraRed > >> > > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi > >> > > ble-role > >> > > [6] https://github.com/openstack/tripleo-quickstart > >> > > > >> > > _______________________________________________ > >> > > rdo-list mailing list > >> > > rdo-list at redhat.com > >> > > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > > >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > -- > > Regards, > > > > Christopher Brown > > OpenStack Engineer > > OCF plc > > > > Tel: +44 (0)114 257 2200 > > Web: www.ocf.co.uk > > Blog: blog.ocf.co.uk > > Twitter: @ocfplc > > > > Please note, any emails relating to an OCF Support request must always > > be sent to support at ocf.co.uk for a ticket number to be generated or > > existing support ticket to be updated. Should this not be done then OCF > > cannot be held responsible for requests not dealt with in a timely > > manner. > > > > OCF plc is a company registered in England and Wales. Registered number > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > > 2PG. > > > > This message is private and confidential. If you have received this > > message in error, please notify us immediately and remove it from your > > system. > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > -- > Arie Bregman > Red Hat Israel > Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Tue Aug 2 14:02:01 2016 From: dms at redhat.com (David Moreau Simard) Date: Tue, 2 Aug 2016 10:02:01 -0400 Subject: [rdo-list] Feedback around deploying an operating RDO/TripleO (was: Multiple tools for deploying and testing TripleO) Message-ID: I think there's a lot of great and valuable feedback being exchanged around the different means of deploying RDO, with or without TripleO in that previous thread [1]. Let's spin that off into it's own thread so the community can discuss this while keeping the other thread focused on continuous integration efforts. 
If you have any other thoughts around deploying and operating a cloud built with RDO packages, please chime in here. [1]: https://www.redhat.com/archives/rdo-list/2016-August/msg00002.html David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Tue, Aug 2, 2016 at 4:12 AM, Christopher Brown wrote: > Hello RDOistas (I think that is the expression?), > > Another year, another OpenStack deployment tool. :) > > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: >> If we are talking about tools, I would also want to add something >> with regards to user interface of these tools. This is based on my >> own experience: >> >> I started trying to deploy Openstack with Staypuft and The Foreman. >> The UI of The Foreman was intuitive enough for the discovery and >> provisioning of the servers. The OpenStack portion, not so much. > > This is exactly mine also. I think this works really well in very large > enterprise environments where you need to split out services over more > than three controllers. You do need good in-house puppet skills though > so better for enterprise with a good sysadmin team. > >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I >> believe) that allowed you to graphically build your Openstack cloud. >> That was a reasonable good GUI for Openstack. > > Well, I found it barely usable. It was only ever good as a graphical > representiation of what the build was doing. Interacting with it was > not great. > >> Following that, TripleO become a script based installer, that >> required experience in Heat templates. I know I didn?t have it and >> had to ask in the mailing list about how to present this or change >> that. I got a couple of installs working with this setup. > > Works well now that I understand all the foibles and have invested time > into understanding heat templates and puppet modules. Its good in that > it forces you to learn about orchestration which is such an important > end-goal of cloud environments. > >> In the last session in Austin, my goal was to obtain information on >> how others were installing Openstack. I was pointed to Fuel as an >> alternative. I tried it up, and it just worked. It had the >> discovering capability from The Foreman, and the configuration >> options from TripleO. I understand that is based in Ansible and >> because of that, it is not fully CentOS ready for all the nodes (at >> least not in version 9 that I tried). In any case, as a deployer and >> installer, it is the most well rounded tool that I found. > > This is interesting to know. I've heard of Fuel of course but there are > some politics involved - it still has the team:single-vendor tag but > from what I see Mirantis are very keen for it to become the default > OpenStack installer. I don't think being Ansible-based should be a > problem - we are deploying OpenShift on OpenStack which uses Openshift- > ansible - this recently moved to Ansible 2.1 without too much > disruption. > >> I?d love to see RDO moving into that direction, and having an easy to >> use, end user ready deployer tool. > > If its as good as you say its definitely worth evaluating. From our > point of view, we want to be able to add services to the pacemaker > cluster with some ease - for example Magnum and Sahara - and it looks > like there are steps being taken with regards to composable roles and > simplification of the pacemaker cluster to just core services. > > But if someone can explain that better I would appreciate it. 
> > Regards > >> IB >> >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard >> > wrote: >> > >> > The vast majority of RDO's CI relies on using upstream >> > installation/deployment projects in order to test installation of >> > RDO >> > packages in different ways and configurations. >> > >> > Unless I'm mistaken, TripleO Quickstart was originally created as a >> > mean to "easily" install TripleO in different topologies without >> > requiring a massive amount of hardware. >> > This project allows us to test TripleO in virtual deployments on >> > just >> > one server instead of, say, 6. >> > >> > There's also WeIRDO [1] which was left out of your list. >> > WeIRDO is super simple and simply aims to run upstream gate jobs >> > (such >> > as puppet-openstack-integration [2][3] and packstack [4][5]) >> > outside >> > of the gate. >> > It'll install dependencies that are expected to be there (i.e, >> > usually >> > set up by the openstack-infra gate preparation jobs), set up the >> > trunk >> > repositories we're interested in testing and the rest is handled by >> > the upstream project testing framework. >> > >> > The WeIRDO project is /very/ low maintenance and brings an >> > exceptional >> > amount of coverage and value. >> > This coverage is important because RDO provides OpenStack packages >> > or >> > projects that are not necessarily used by TripleO and the reality >> > is >> > that not everyone deploying OpenStack on CentOS with RDO will be >> > using >> > TripleO. >> > >> > Anyway, sorry for sidetracking but back to the topic, thanks for >> > opening the discussion. >> > >> > What honestly perplexes me is the situation of CI in RDO and OSP, >> > especially around TripleO/Director, is the amount of work that is >> > spent downstream. >> > And by downstream, here, I mean anything that isn't in TripleO >> > proper. >> > >> > I keep dreaming about how awesome upstream TripleO CI would be if >> > all >> > that effort was spent directly there instead -- and then that all >> > work >> > could bear fruit and trickle down downstream for free. >> > Exactly like how we keep improving the testing coverage in >> > puppet-openstack-integration, it's automatically pulled in RDO CI >> > through WeIRDO for free. >> > We make the upstream better and we benefit from it simultaneously: >> > everyone wins. >> > >> > [1]: https://github.com/rdo-infra/weirdo >> > [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst >> > ack >> > [3]: https://github.com/openstack/puppet-openstack-integration#desc >> > ription >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> > [5]: https://github.com/openstack/packstack#packstack-integration-t >> > ests >> > >> > David Moreau Simard >> > Senior Software Engineer | Openstack RDO >> > >> > dmsimard = [irc, github, twitter] >> > >> > David Moreau Simard >> > Senior Software Engineer | Openstack RDO >> > >> > dmsimard = [irc, github, twitter] >> > >> > >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman >> > wrote: >> > > Hi, >> > > >> > > I would like to start a discussion on the overlap between tools >> > > we >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. >> > > >> > > Several months ago, we worked on one common framework for >> > > deploying >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >> > > didn't work out well, which eventually led each group to focus on >> > > developing other existing/new tools. 
>> > > >> > > What we have right now for deploying and testing >> > > -------------------------------------------------------- >> > > === Component CI, Gating === >> > > I'll start with the projects we created, I think that's only fair >> > > :) >> > > >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB >> > > project. >> > > >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >> > > release. >> > > >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, >> > > tempest, >> > > csit) + Patching RPMs with submitted code. >> > > >> > > === Automation, QE === >> > > * InfraRed[4] - provision install and test. Pluggable and >> > > modular, >> > > allows you to create your own provisioner, installer and tester. >> > > >> > > As far as I know, the groups is working now on different >> > > structure of >> > > one main project and three sub projects (provision, install and >> > > test). >> > > >> > > === RDO === >> > > I didn't use RDO tools, so I apologize if I got something wrong: >> > > >> > > * About ~25 micro independent Ansible roles[5]. You can either >> > > choose >> > > to use one of them or several together. They are used for >> > > provisioning, installing and testing Tripleo. >> > > >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying >> > > tripleo >> > > and test it. >> > > >> > > As I said, I didn't use the tools, so feel free to add more >> > > information you think is relevant. >> > > >> > > === More? === >> > > I hope not. Let us know if are familiar with more tools. >> > > >> > > Conclusion >> > > -------------- >> > > So as you can see, there are several projects that eventually >> > > overlap >> > > in many areas. Each group is basically using the same tasks >> > > (provision >> > > resources, build/import overcloud images, run tempest, collect >> > > logs, >> > > etc.) >> > > >> > > Personally, I think it's a waste of resources. For each task >> > > there is >> > > at least two people from different groups who work on exactly the >> > > same >> > > task. The most recent example I can give is OVB. As far as I >> > > know, >> > > both groups are working on implementing it in their set of tools >> > > right >> > > now. >> > > >> > > On the other hand, you can always claim: "we already tried to >> > > work on >> > > the same framework, we failed to do it successfully" - right, but >> > > maybe with better ground rules we can manage it. We would >> > > defiantly >> > > benefit a lot from doing that. >> > > >> > > What's next? >> > > ---------------- >> > > So first of all, I would like to hear from you if you think that >> > > we >> > > can collaborate once again or is it actually better to keep it as >> > > it >> > > is now. >> > > >> > > If you agree that collaboration here makes sense, maybe you have >> > > ideas >> > > on how we can do it better this time. >> > > >> > > I think that setting up a meeting to discuss the right >> > > architecture >> > > for the project(s) and decide on good review/gating process, >> > > would be >> > > a good start. >> > > >> > > Please let me know what do you think and keep in mind that this >> > > is not >> > > about which tool is better!. As you can see I didn't mention the >> > > time >> > > it takes for each tool to deploy and test, and also not the full >> > > feature list it supports. >> > > If possible, we should keep it about collaborating and not >> > > choosing >> > > the best tool. 
Our solution could be the combination of two or >> > > more >> > > tools eventually (tripleo-red, infra-quickstart? :D ) >> > > >> > > "You may say I'm a dreamer, but I'm not the only one. I hope some >> > > day >> > > you'll join us and the infra will be as one" :) >> > > >> > > [1] https://github.com/redhat-openstack/ansible-ovb >> > > [2] https://github.com/redhat-openstack/ansible-rhosp >> > > [3] https://github.com/redhat-openstack/octario >> > > [4] https://github.com/rhosqeauto/InfraRed >> > > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi >> > > ble-role >> > > [6] https://github.com/openstack/tripleo-quickstart >> > > >> > > _______________________________________________ >> > > rdo-list mailing list >> > > rdo-list at redhat.com >> > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com From lorenzetto.luca at gmail.com Tue Aug 2 15:59:00 2016 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Tue, 2 Aug 2016 17:59:00 +0200 Subject: [rdo-list] [tripleo] Troubles deploying Liberty with HA setup Message-ID: Hello, I'm deploying Liberty on a set of servers using TripleO. I imported and correctly introspected 6 nodes. First I did a setup with 1 controller node and 3 compute nodes. No issues during this deploy, everything went fine. On this deployment I used custom hostname formats (defined via a yaml environment file) and a custom dns_domain (changed in /etc/nova/nova.conf and /etc/neutron/dhcp_agent.ini and all /usr/share/openstack-tripleo-heat-templates/*.yaml files that contain localdomain) Now I'm working to have a 3 controller setup with HA. 
I'm running this command to deploy: openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/puppet-ceph-external.yaml -e ~/hostname-nostri.yaml --neutron-bridge-mappings datacentre:br-ex,storage-pub:br-stg-pub -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server timesrv1 --control-scale 3 --compute-scale 3 --ceph-storage-scale 0 --control-flavor controlhp --compute-flavor computehp --neutron-network-type vxlan --neutron-tunnel-types vxlan --verbose --debug --log-file overcloud_$(date +%T).log I see that stack creation starts and correctly deploys the OS on the nodes. Nodes are named according to the ControllerHostnameFormat and ComputeHostnameFormat that I specified in ~/hostname-nostri.yaml. Everything goes well until the HA configuration starts. I see this stack creation failing: overcloud-ControllerNodesPostDeployment-bbp3c47jgau2-ControllerLoadBalancerDeployment_Step1-sla6ce7n2arq The error message of the deployment command is: Stack failed with status: Resource CREATE failed: resources.ControllerLoadBalancerDeployment_Step1: resources.ControllerNodesPostDeployment.Error: resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 Heat Stack create failed. If I go in depth with heat deployment-show I see that all resources report this deploy_stderr: Error: Could not prefetch mysql_user provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) Error: Could not prefetch mysql_database provider 'mysql': Execution of '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) Error: Command exceeded timeout Error: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns: change from notrun to 0 failed: Command exceeded timeout Warning: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Skipping because of failed dependencies I see the nodes are stuck on the puppet step "auth-successful-across-all-nodes" (defined in /etc/puppet/modules/pacemaker/manifests/corosync.pp) /usr/bin/python2 /usr/sbin/pcs cluster auth opsctrl0 opsctrl1 opsctrl2 -u hacluster -p PASSWORD --force I suppose that the problem is due to the corosync service not being started yet. But as far as I can see corosync will never start because the /etc/corosync/corosync.conf file is missing. In /etc/hosts, opsctrl{0-2} are correctly defined and each host can talk with the others. I'm stuck and I don't know what to do. Anyone had similar issues? Am I missing something that I have to enter in the configuration? I'm using an in-house mirror of rdo-release made on April 20, and custom images built from the RHEL image and this repository. 
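For anyone digging into the same kind of pacemaker/corosync failure, a minimal set of checks might look like the sketch below (this is only a sketch: the stack name "overcloud", the heat-admin login, the 192.0.2.10 address and the <placeholders> are illustrative and need to be adapted to the actual deployment):

# On the undercloud: locate the failed resources and dump the failed deployment
heat resource-list --nested-depth 5 overcloud | grep -i failed
heat deployment-show <deployment_id>

# On one controller (ctlplane IP taken from 'nova list' on the undercloud)
ssh heat-admin@192.0.2.10
getent hosts opsctrl0 opsctrl1 opsctrl2       # name resolution that pcs relies on
sudo systemctl status pcsd corosync pacemaker
sudo ls -l /etc/corosync/corosync.conf        # written by 'pcs cluster setup', so absent until auth succeeds
sudo journalctl -u pcsd -u os-collect-config --no-pager | tail -n 100
sudo pcs cluster auth opsctrl0 opsctrl1 opsctrl2 -u hacluster -p <password> --debug

If the manual pcs cluster auth also times out, the usual suspects are pcsd not running, or not reachable on TCP 2224, on one of the controllers; the missing corosync.conf is then a consequence rather than the root cause, since it is only generated once 'pcs cluster setup' runs after a successful auth.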
A week ago i tried with latest repository but had the same error (and also others, that's why i returned to this older mirror that was working). This is the list of packages installed from rdo-release repository: crudini-0.7-1.el7.noarch dib-utils-0.0.9-1.el7.noarch dibbler-client-1.0.1-0.RC1.2.el7.x86_64 diskimage-builder-1.4.0-1.el7.noarch erlang-asn1-R16B-03.16.el7.x86_64 erlang-compiler-R16B-03.16.el7.x86_64 erlang-crypto-R16B-03.16.el7.x86_64 erlang-erts-R16B-03.16.el7.x86_64 erlang-hipe-R16B-03.16.el7.x86_64 erlang-inets-R16B-03.16.el7.x86_64 erlang-kernel-R16B-03.16.el7.x86_64 erlang-mnesia-R16B-03.16.el7.x86_64 erlang-os_mon-R16B-03.16.el7.x86_64 erlang-otp_mibs-R16B-03.16.el7.x86_64 erlang-public_key-R16B-03.16.el7.x86_64 erlang-runtime_tools-R16B-03.16.el7.x86_64 erlang-sasl-R16B-03.16.el7.x86_64 erlang-sd_notify-0.1-1.el7.x86_64 erlang-snmp-R16B-03.16.el7.x86_64 erlang-ssl-R16B-03.16.el7.x86_64 erlang-stdlib-R16B-03.16.el7.x86_64 erlang-syntax_tools-R16B-03.16.el7.x86_64 erlang-tools-R16B-03.16.el7.x86_64 erlang-xmerl-R16B-03.16.el7.x86_64 hiera-1.3.4-1.el7.noarch instack-0.0.8-1.el7.noarch instack-undercloud-2.1.3-1.el7.noarch jq-1.3-2.el7.x86_64 liberasurecode-1.1.1-1.el7.x86_64 libnetfilter_queue-1.0.2-2.el7.x86_64 memcached-1.4.25-1.el7.x86_64 1:openstack-ceilometer-alarm-5.0.2-1.el7.noarch 1:openstack-ceilometer-api-5.0.2-1.el7.noarch 1:openstack-ceilometer-central-5.0.2-1.el7.noarch 1:openstack-ceilometer-collector-5.0.2-1.el7.noarch 1:openstack-ceilometer-common-5.0.2-1.el7.noarch 1:openstack-ceilometer-notification-5.0.2-1.el7.noarch 1:openstack-ceilometer-polling-5.0.2-1.el7.noarch 1:openstack-glance-11.0.1-2.el7.noarch 1:openstack-heat-api-5.0.0-1.el7.noarch 1:openstack-heat-api-cfn-5.0.0-1.el7.noarch 1:openstack-heat-api-cloudwatch-5.0.0-1.el7.noarch 1:openstack-heat-common-5.0.0-1.el7.noarch 1:openstack-heat-engine-5.0.0-1.el7.noarch openstack-heat-templates-0-0.1.20151019.el7.noarch 1:openstack-ironic-api-4.2.2-1.el7.noarch 1:openstack-ironic-common-4.2.2-1.el7.noarch 1:openstack-ironic-conductor-4.2.2-1.el7.noarch openstack-ironic-inspector-2.2.2-1.el7.noarch 1:openstack-keystone-8.0.1-1.el7.noarch 1:openstack-neutron-7.0.3-1.el7.noarch 1:openstack-neutron-common-7.0.3-1.el7.noarch 1:openstack-neutron-ml2-7.0.3-1.el7.noarch 1:openstack-neutron-openvswitch-7.0.3-1.el7.noarch 1:openstack-nova-api-12.0.1-1.el7.noarch 1:openstack-nova-cert-12.0.1-1.el7.noarch 1:openstack-nova-common-12.0.1-1.el7.noarch 1:openstack-nova-compute-12.0.1-1.el7.noarch 1:openstack-nova-conductor-12.0.1-1.el7.noarch 1:openstack-nova-scheduler-12.0.1-1.el7.noarch 1:openstack-puppet-modules-7.0.1-1.el7.noarch openstack-selinux-0.6.41-1.el7.noarch openstack-swift-2.5.0-1.el7.noarch openstack-swift-account-2.5.0-1.el7.noarch openstack-swift-container-2.5.0-1.el7.noarch openstack-swift-object-2.5.0-1.el7.noarch openstack-swift-plugin-swift3-1.7-4.el7.noarch openstack-swift-proxy-2.5.0-1.el7.noarch openstack-tripleo-0.0.6-1.el7.noarch openstack-tripleo-heat-templates-0.8.7-1.el7.noarch openstack-tripleo-image-elements-0.9.7-1.el7.noarch openstack-tripleo-puppet-elements-0.0.2-1.el7.noarch openstack-utils-2015.2-1.el7.noarch openvswitch-2.4.0-1.el7.x86_64 os-apply-config-0.1.32-3.el7.noarch os-cloud-config-0.2.10-2.el7.noarch os-collect-config-0.1.36-4.el7.noarch os-net-config-0.1.5-3.el7.noarch os-refresh-config-0.1.11-2.el7.noarch puppet-3.6.2-3.el7.noarch pyOpenSSL-0.15.1-1.el7.noarch pyparsing-2.0.3-1.el7.noarch pysendfile-2.0.0-5.el7.x86_64 pysnmp-4.2.5-2.el7.noarch pystache-0.5.3-2.el7.noarch 
python-alembic-0.8.3-3.el7.noarch python-amqp-1.4.6-1.el7.noarch python-anyjson-0.3.3-3.el7.noarch python-automaton-0.7.0-1.el7.noarch python-babel-1.3-6.el7.noarch python-bson-3.0.3-1.el7.x86_64 python-cachetools-1.0.3-2.el7.noarch 1:python-ceilometer-5.0.2-1.el7.noarch python-ceilometerclient-1.5.0-1.el7.noarch python-cinderclient-1.4.0-1.el7.noarch python-cliff-1.15.0-1.el7.noarch python-cliff-tablib-1.1-3.el7.noarch python-cmd2-0.6.8-3.el7.noarch python-contextlib2-0.4.0-1.el7.noarch python-croniter-0.3.4-2.el7.noarch python-dogpile-cache-0.5.7-3.el7.noarch python-dogpile-core-0.4.1-2.el7.noarch python-ecdsa-0.11-3.el7.noarch python-editor-0.4-4.el7.noarch python-elasticsearch-1.4.0-2.el7.noarch python-extras-0.0.3-2.el7.noarch python-fixtures-1.4.0-2.el7.noarch python-futures-3.0.3-1.el7.noarch 1:python-glance-11.0.1-2.el7.noarch python-glance-store-0.9.1-1.el7.noarch 1:python-glanceclient-1.1.0-1.el7.noarch python-heatclient-0.8.0-1.el7.noarch python-httplib2-0.9.2-1.el7.noarch python-idna-2.0-1.el7.noarch python-ipaddress-1.0.7-4.el7.noarch python-ironicclient-0.8.1-1.el7.noarch python-jsonpatch-1.2-2.el7.noarch python-jsonpath-rw-1.2.3-2.el7.noarch python-jsonschema-2.3.0-1.el7.noarch python-keyring-5.0-4.el7.noarch 1:python-keystone-8.0.1-1.el7.noarch 1:python-keystoneclient-1.7.2-1.el7.noarch python-keystonemiddleware-2.3.1-1.el7.noarch 1:python-kombu-3.0.32-1.el7.noarch python-ldappool-1.0-4.el7.noarch python-linecache2-1.0.0-1.el7.noarch python-logutils-0.3.3-3.el7.noarch python-memcached-1.54-3.el7.noarch python-migrate-0.10.0-1.el7.noarch python-mimeparse-0.1.4-1.el7.noarch python-monotonic-0.3-1.el7.noarch python-ncclient-0.4.2-2.el7.noarch python-netaddr-0.7.18-1.el7.noarch python-netifaces-0.10.4-1.el7.x86_64 python-networkx-core-1.10-1.el7.noarch 1:python-neutron-7.0.3-1.el7.noarch python-neutronclient-3.1.0-1.el7.noarch python-nose-1.3.7-7.el7.noarch 1:python-nova-12.0.1-1.el7.noarch 1:python-novaclient-2.30.1-1.el7.noarch python-oauthlib-0.7.2-5.20150520git514cad7.el7.noarch python-openstackclient-1.7.2-1.el7.noarch python-openvswitch-2.4.0-1.el7.noarch python-oslo-cache-0.7.0-1.el7.noarch python-oslo-concurrency-2.6.0-1.el7.noarch python-oslo-db-2.6.0-3.el7.noarch python-oslo-log-1.10.0-1.el7.noarch python-oslo-messaging-2.5.0-1.el7.noarch python-oslo-middleware-2.8.0-1.el7.noarch python-oslo-policy-0.11.0-1.el7.noarch python-oslo-rootwrap-2.3.0-1.el7.noarch python-oslo-service-0.9.0-1.el7.noarch python-oslo-versionedobjects-0.10.0-1.el7.noarch python-oslo-vmware-1.21.0-1.el7.noarch python-osprofiler-0.3.0-1.el7.noarch python-paramiko-1.15.1-1.el7.noarch python-paste-deploy-1.5.2-6.el7.noarch python-pbr-1.8.1-2.el7.noarch python-posix_ipc-0.9.8-1.el7.x86_64 python-prettytable-0.7.2-1.el7.noarch python-proliantutils-2.1.7-1.el7.noarch python-psutil-1.2.1-1.el7.x86_64 python-pycadf-1.1.0-1.el7.noarch python-pyeclib-1.2.0-1.el7.x86_64 python-pyghmi-0.8.0-2.el7.noarch python-pygments-2.0.2-4.el7.noarch python-pymongo-3.0.3-1.el7.x86_64 python-pysaml2-3.0.2-1.el7.noarch python-qpid-0.30-1.el7.noarch python-qpid-common-0.30-1.el7.noarch python-repoze-lru-0.4-3.el7.noarch python-repoze-who-2.1-1.el7.noarch python-requests-2.9.1-2.el7.noarch python-retrying-1.2.3-4.el7.noarch python-rfc3986-0.2.0-1.el7.noarch python-routes-1.13-2.el7.noarch python-saharaclient-0.11.1-1.el7.noarch python-semantic_version-2.4.2-1.el7.noarch python-simplegeneric-0.8-7.el7.noarch python-simplejson-3.5.3-5.el7.x86_64 python-sqlalchemy-1.0.11-1.el7.x86_64 python-sqlparse-0.1.18-5.el7.noarch 
python-stevedore-1.8.0-1.el7.noarch python-swiftclient-2.6.0-1.el7.noarch python-tablib-0.10.0-1.el7.noarch python-taskflow-1.21.0-1.el7.noarch python-tempita-0.5.1-8.el7.noarch python-testtools-1.8.0-2.el7.noarch python-tooz-1.24.0-1.el7.noarch python-traceback2-1.4.0-2.el7.noarch python-tripleoclient-0.0.11-3.el7.noarch python-troveclient-1.3.0-1.el7.noarch python-unicodecsv-0.14.1-1.el7.noarch python-unittest2-1.0.1-1.el7.noarch python-urllib3-1.13.1-3.el7.noarch python-warlock-1.0.1-1.el7.noarch python-webob-1.4.1-2.el7.noarch python-websockify-0.6.0-2.el7.noarch python-wrapt-1.10.5-3.el7.x86_64 python2-PyMySQL-0.6.7-2.el7.noarch python2-appdirs-1.4.0-4.el7.noarch python2-castellan-0.3.1-1.el7.noarch python2-cffi-1.5.2-1.el7.x86_64 python2-cryptography-1.2.1-3.el7.x86_64 python2-debtcollector-0.8.0-1.el7.noarch python2-eventlet-0.17.4-4.el7.noarch python2-fasteners-0.14.1-4.el7.noarch python2-funcsigs-0.4-2.el7.noarch python2-futurist-0.5.0-1.el7.noarch python2-greenlet-0.4.9-1.el7.x86_64 python2-ironic-inspector-client-1.2.0-2.el7.noarch python2-iso8601-0.1.11-1.el7.noarch python2-jsonpath-rw-ext-0.1.7-1.1.el7.noarch python2-mock-1.3.0-1.el7.noarch python2-os-brick-0.5.0-1.el7.noarch python2-os-client-config-1.7.4-1.el7.noarch 2:python2-oslo-config-2.4.0-1.el7.noarch python2-oslo-context-0.6.0-1.el7.noarch python2-oslo-i18n-2.6.0-1.el7.noarch python2-oslo-reports-0.5.0-1.el7.noarch python2-oslo-serialization-1.9.0-1.el7.noarch python2-oslo-utils-2.5.0-1.el7.noarch python2-passlib-1.6.5-1.el7.noarch python2-pecan-1.0.2-2.el7.noarch python2-pyasn1-0.1.9-6.el7.1.noarch python2-rsa-3.3-2.el7.noarch python2-singledispatch-3.4.0.3-4.el7.noarch python2-suds-0.7-0.1.94664ddd46a6.el7.noarch python2-wsme-0.7.0-2.el7.noarch rabbitmq-server-3.3.5-17.el7.noarch ruby-augeas-0.5.0-1.el7.x86_64 ruby-shadow-1.4.1-23.el7.x86_64 rubygem-rgen-0.6.6-2.el7.noarch tripleo-common-0.0.1-2.el7.noarch Thank you, Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From cbrown2 at ocf.co.uk Tue Aug 2 16:34:20 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Tue, 2 Aug 2016 17:34:20 +0100 Subject: [rdo-list] [tripleo] Troubles deploying Liberty with HA setup In-Reply-To: References: Message-ID: <1470155660.2497.36.camel@ocf.co.uk> Hi Luca, It rings a bell but to be honest I would try the following: 1. Check the hosts file on the controllers - I wonder what the contents are there? 2. Revert the hostname change and see if it is happy 3. Definitely try and compose from the latest stable CentOS SIG Liberty packages (not delorean or DLRN or whatever it is these days) 4. Try Mitaka perhaps and follow Graeme's instructions here: https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html You can skip steps 4 and 6 as these have been resolved. I tend to work back and reduce things down in complexity. You can specify hostnames as per: http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html but perhaps this is what you are doing in your ~/hostname-nostri.yaml file? Hope this helps. On Tue, 2016-08-02 at 16:59 +0100, Luca 'remix_tj' Lorenzetto wrote: > Hello, > > I'm deploying Liberty on a set of server using tripleo. 
I imported > and > correctly introspected 6 nodes. First i did a setup with 1 controller > node and 3 computing nodes. No issues during this deploy, everything > went fine. On this deployment i used custom hostname formats (defined > via yaml environment file) and custom dns_domain (changed in > /etc/nova/nova.conf and /etc/neutron/dhcp_agent.ini and all > /usr/share/openstack-tripleo-heat-templates/*.yaml files that > contains > localdomain) > > Now I'm working to have a 3 controller setup with HA. I'm running > this > command to deploy: > > openstack overcloud deploy --templates -e > /usr/share/openstack-tripleo-heat-templates/environments/network- > isolation.yaml > -e ~/templates/network-environment.yaml -e > ~/templates/puppet-ceph-external.yaml -e ~/hostname-nostri.yaml > --neutron-bridge-mappings datacentre:br-ex,storage-pub:br-stg-pub -e > /usr/share/openstack-tripleo-heat-templates/environments/puppet- > pacemaker.yaml > --ntp-server timesrv1 --control-scale 3 --compute-scale 3 > --ceph-storage-scale 0 --control-flavor controlhp --compute-flavor > computehp --neutron-network-type vxlan --neutron-tunnel-types vxlan > --verbose --debug --log-file overcloud_$(date +%T).log > > I see that stack creation starts and correctly deploy os on the > nodes. > Nodes are named according to ControllerHostnameFormat and > ComputeHostnameFormat that i specified into ~/hostname-nostri.yaml. > > Everything goes well until HA configuration starts. I see this stack > creation failing: > > overcloud-ControllerNodesPostDeployment-bbp3c47jgau2- > ControllerLoadBalancerDeployment_Step1-sla6ce7n2arq > > The error message of the deployment command is: > > Stack failed with status: Resource CREATE failed: > resources.ControllerLoadBalancerDeployment_Step1: > resources.ControllerNodesPostDeployment.Error: resources[0]: > Deployment to server failed: deploy_status_code: Deployment exited > with non-zero status code: 6 > Heat Stack create failed. 
> > > If i go in depth with heat deployment-show i see that all resources > report this deploy_stderr: > > Error: Could not prefetch mysql_user provider 'mysql': Execution of > '/usr/bin/mysql -NBe SELECT CONCAT(User, '@',Host) AS User FROM > mysql.user' returned 1: ERROR 2002 (HY000): Can't connect to local > MySQL server through socket '/var/lib/mysql/mysql.sock' (2) > Error: Could not prefetch mysql_database provider 'mysql': Execution > of '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 > (HY000): Can't connect to local MySQL server through socket > '/var/lib/mysql/mysql.sock' (2)\ > Error: Command exceeded timeout > Error: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across- > all-nodes]/returns: > change from notrun to 0 failed: Command exceeded timeout > Warning: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster > tripleo_cluster]: Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster > tripleo_cluster]: Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: > Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: > Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: > Skipping because of failed dependencies", > > > > I see nodes are stuck on puppet step > "auth-successful-across-all-nodes" (defined in > /etc/puppet/modules/pacemaker/manifests/corosync.pp) > > /usr/bin/python2 /usr/sbin/pcs cluster auth opsctrl0 opsctrl1 > opsctrl2 > -u hacluster -p PASSWORD --force > > I suppose that the problem is due to corosync service not yet > started. > But as far as i can see corosync will never start because > /etc/corosync/corosync.conf file is missing. > > in /etc/hosts opsctrl{0-2} are correctly defined and each host can > talk with the others. > > > > I'm stuck and i don't know what to do. Anyone had similar issues? I'm > missing something that i have to enter in the configuration? > > > I'm using a in-house mirror of rdo-release made on April 20, and > custom images build from rhel image and this repository. A week ago i > tried with latest repository but had the same error (and also others, > that's why i returned to this older mirror that was working). 
> > This is the list of packages installed from rdo-release repository: > > crudini-0.7-1.el7.noarch > dib-utils-0.0.9-1.el7.noarch > dibbler-client-1.0.1-0.RC1.2.el7.x86_64 > diskimage-builder-1.4.0-1.el7.noarch > erlang-asn1-R16B-03.16.el7.x86_64 > erlang-compiler-R16B-03.16.el7.x86_64 > erlang-crypto-R16B-03.16.el7.x86_64 > erlang-erts-R16B-03.16.el7.x86_64 > erlang-hipe-R16B-03.16.el7.x86_64 > erlang-inets-R16B-03.16.el7.x86_64 > erlang-kernel-R16B-03.16.el7.x86_64 > erlang-mnesia-R16B-03.16.el7.x86_64 > erlang-os_mon-R16B-03.16.el7.x86_64 > erlang-otp_mibs-R16B-03.16.el7.x86_64 > erlang-public_key-R16B-03.16.el7.x86_64 > erlang-runtime_tools-R16B-03.16.el7.x86_64 > erlang-sasl-R16B-03.16.el7.x86_64 > erlang-sd_notify-0.1-1.el7.x86_64 > erlang-snmp-R16B-03.16.el7.x86_64 > erlang-ssl-R16B-03.16.el7.x86_64 > erlang-stdlib-R16B-03.16.el7.x86_64 > erlang-syntax_tools-R16B-03.16.el7.x86_64 > erlang-tools-R16B-03.16.el7.x86_64 > erlang-xmerl-R16B-03.16.el7.x86_64 > hiera-1.3.4-1.el7.noarch > instack-0.0.8-1.el7.noarch > instack-undercloud-2.1.3-1.el7.noarch > jq-1.3-2.el7.x86_64 > liberasurecode-1.1.1-1.el7.x86_64 > libnetfilter_queue-1.0.2-2.el7.x86_64 > memcached-1.4.25-1.el7.x86_64 > 1:openstack-ceilometer-alarm-5.0.2-1.el7.noarch > 1:openstack-ceilometer-api-5.0.2-1.el7.noarch > 1:openstack-ceilometer-central-5.0.2-1.el7.noarch > 1:openstack-ceilometer-collector-5.0.2-1.el7.noarch > 1:openstack-ceilometer-common-5.0.2-1.el7.noarch > 1:openstack-ceilometer-notification-5.0.2-1.el7.noarch > 1:openstack-ceilometer-polling-5.0.2-1.el7.noarch > 1:openstack-glance-11.0.1-2.el7.noarch > 1:openstack-heat-api-5.0.0-1.el7.noarch > 1:openstack-heat-api-cfn-5.0.0-1.el7.noarch > 1:openstack-heat-api-cloudwatch-5.0.0-1.el7.noarch > 1:openstack-heat-common-5.0.0-1.el7.noarch > 1:openstack-heat-engine-5.0.0-1.el7.noarch > openstack-heat-templates-0-0.1.20151019.el7.noarch > 1:openstack-ironic-api-4.2.2-1.el7.noarch > 1:openstack-ironic-common-4.2.2-1.el7.noarch > 1:openstack-ironic-conductor-4.2.2-1.el7.noarch > openstack-ironic-inspector-2.2.2-1.el7.noarch > 1:openstack-keystone-8.0.1-1.el7.noarch > 1:openstack-neutron-7.0.3-1.el7.noarch > 1:openstack-neutron-common-7.0.3-1.el7.noarch > 1:openstack-neutron-ml2-7.0.3-1.el7.noarch > 1:openstack-neutron-openvswitch-7.0.3-1.el7.noarch > 1:openstack-nova-api-12.0.1-1.el7.noarch > 1:openstack-nova-cert-12.0.1-1.el7.noarch > 1:openstack-nova-common-12.0.1-1.el7.noarch > 1:openstack-nova-compute-12.0.1-1.el7.noarch > 1:openstack-nova-conductor-12.0.1-1.el7.noarch > 1:openstack-nova-scheduler-12.0.1-1.el7.noarch > 1:openstack-puppet-modules-7.0.1-1.el7.noarch > openstack-selinux-0.6.41-1.el7.noarch > openstack-swift-2.5.0-1.el7.noarch > openstack-swift-account-2.5.0-1.el7.noarch > openstack-swift-container-2.5.0-1.el7.noarch > openstack-swift-object-2.5.0-1.el7.noarch > openstack-swift-plugin-swift3-1.7-4.el7.noarch > openstack-swift-proxy-2.5.0-1.el7.noarch > openstack-tripleo-0.0.6-1.el7.noarch > openstack-tripleo-heat-templates-0.8.7-1.el7.noarch > openstack-tripleo-image-elements-0.9.7-1.el7.noarch > openstack-tripleo-puppet-elements-0.0.2-1.el7.noarch > openstack-utils-2015.2-1.el7.noarch > openvswitch-2.4.0-1.el7.x86_64 > os-apply-config-0.1.32-3.el7.noarch > os-cloud-config-0.2.10-2.el7.noarch > os-collect-config-0.1.36-4.el7.noarch > os-net-config-0.1.5-3.el7.noarch > os-refresh-config-0.1.11-2.el7.noarch > puppet-3.6.2-3.el7.noarch > pyOpenSSL-0.15.1-1.el7.noarch > pyparsing-2.0.3-1.el7.noarch > pysendfile-2.0.0-5.el7.x86_64 > pysnmp-4.2.5-2.el7.noarch 
> pystache-0.5.3-2.el7.noarch > python-alembic-0.8.3-3.el7.noarch > python-amqp-1.4.6-1.el7.noarch > python-anyjson-0.3.3-3.el7.noarch > python-automaton-0.7.0-1.el7.noarch > python-babel-1.3-6.el7.noarch > python-bson-3.0.3-1.el7.x86_64 > python-cachetools-1.0.3-2.el7.noarch > 1:python-ceilometer-5.0.2-1.el7.noarch > python-ceilometerclient-1.5.0-1.el7.noarch > python-cinderclient-1.4.0-1.el7.noarch > python-cliff-1.15.0-1.el7.noarch > python-cliff-tablib-1.1-3.el7.noarch > python-cmd2-0.6.8-3.el7.noarch > python-contextlib2-0.4.0-1.el7.noarch > python-croniter-0.3.4-2.el7.noarch > python-dogpile-cache-0.5.7-3.el7.noarch > python-dogpile-core-0.4.1-2.el7.noarch > python-ecdsa-0.11-3.el7.noarch > python-editor-0.4-4.el7.noarch > python-elasticsearch-1.4.0-2.el7.noarch > python-extras-0.0.3-2.el7.noarch > python-fixtures-1.4.0-2.el7.noarch > python-futures-3.0.3-1.el7.noarch > 1:python-glance-11.0.1-2.el7.noarch > python-glance-store-0.9.1-1.el7.noarch > 1:python-glanceclient-1.1.0-1.el7.noarch > python-heatclient-0.8.0-1.el7.noarch > python-httplib2-0.9.2-1.el7.noarch > python-idna-2.0-1.el7.noarch > python-ipaddress-1.0.7-4.el7.noarch > python-ironicclient-0.8.1-1.el7.noarch > python-jsonpatch-1.2-2.el7.noarch > python-jsonpath-rw-1.2.3-2.el7.noarch > python-jsonschema-2.3.0-1.el7.noarch > python-keyring-5.0-4.el7.noarch > 1:python-keystone-8.0.1-1.el7.noarch > 1:python-keystoneclient-1.7.2-1.el7.noarch > python-keystonemiddleware-2.3.1-1.el7.noarch > 1:python-kombu-3.0.32-1.el7.noarch > python-ldappool-1.0-4.el7.noarch > python-linecache2-1.0.0-1.el7.noarch > python-logutils-0.3.3-3.el7.noarch > python-memcached-1.54-3.el7.noarch > python-migrate-0.10.0-1.el7.noarch > python-mimeparse-0.1.4-1.el7.noarch > python-monotonic-0.3-1.el7.noarch > python-ncclient-0.4.2-2.el7.noarch > python-netaddr-0.7.18-1.el7.noarch > python-netifaces-0.10.4-1.el7.x86_64 > python-networkx-core-1.10-1.el7.noarch > 1:python-neutron-7.0.3-1.el7.noarch > python-neutronclient-3.1.0-1.el7.noarch > python-nose-1.3.7-7.el7.noarch > 1:python-nova-12.0.1-1.el7.noarch > 1:python-novaclient-2.30.1-1.el7.noarch > python-oauthlib-0.7.2-5.20150520git514cad7.el7.noarch > python-openstackclient-1.7.2-1.el7.noarch > python-openvswitch-2.4.0-1.el7.noarch > python-oslo-cache-0.7.0-1.el7.noarch > python-oslo-concurrency-2.6.0-1.el7.noarch > python-oslo-db-2.6.0-3.el7.noarch > python-oslo-log-1.10.0-1.el7.noarch > python-oslo-messaging-2.5.0-1.el7.noarch > python-oslo-middleware-2.8.0-1.el7.noarch > python-oslo-policy-0.11.0-1.el7.noarch > python-oslo-rootwrap-2.3.0-1.el7.noarch > python-oslo-service-0.9.0-1.el7.noarch > python-oslo-versionedobjects-0.10.0-1.el7.noarch > python-oslo-vmware-1.21.0-1.el7.noarch > python-osprofiler-0.3.0-1.el7.noarch > python-paramiko-1.15.1-1.el7.noarch > python-paste-deploy-1.5.2-6.el7.noarch > python-pbr-1.8.1-2.el7.noarch > python-posix_ipc-0.9.8-1.el7.x86_64 > python-prettytable-0.7.2-1.el7.noarch > python-proliantutils-2.1.7-1.el7.noarch > python-psutil-1.2.1-1.el7.x86_64 > python-pycadf-1.1.0-1.el7.noarch > python-pyeclib-1.2.0-1.el7.x86_64 > python-pyghmi-0.8.0-2.el7.noarch > python-pygments-2.0.2-4.el7.noarch > python-pymongo-3.0.3-1.el7.x86_64 > python-pysaml2-3.0.2-1.el7.noarch > python-qpid-0.30-1.el7.noarch > python-qpid-common-0.30-1.el7.noarch > python-repoze-lru-0.4-3.el7.noarch > python-repoze-who-2.1-1.el7.noarch > python-requests-2.9.1-2.el7.noarch > python-retrying-1.2.3-4.el7.noarch > python-rfc3986-0.2.0-1.el7.noarch > python-routes-1.13-2.el7.noarch > 
python-saharaclient-0.11.1-1.el7.noarch > python-semantic_version-2.4.2-1.el7.noarch > python-simplegeneric-0.8-7.el7.noarch > python-simplejson-3.5.3-5.el7.x86_64 > python-sqlalchemy-1.0.11-1.el7.x86_64 > python-sqlparse-0.1.18-5.el7.noarch > python-stevedore-1.8.0-1.el7.noarch > python-swiftclient-2.6.0-1.el7.noarch > python-tablib-0.10.0-1.el7.noarch > python-taskflow-1.21.0-1.el7.noarch > python-tempita-0.5.1-8.el7.noarch > python-testtools-1.8.0-2.el7.noarch > python-tooz-1.24.0-1.el7.noarch > python-traceback2-1.4.0-2.el7.noarch > python-tripleoclient-0.0.11-3.el7.noarch > python-troveclient-1.3.0-1.el7.noarch > python-unicodecsv-0.14.1-1.el7.noarch > python-unittest2-1.0.1-1.el7.noarch > python-urllib3-1.13.1-3.el7.noarch > python-warlock-1.0.1-1.el7.noarch > python-webob-1.4.1-2.el7.noarch > python-websockify-0.6.0-2.el7.noarch > python-wrapt-1.10.5-3.el7.x86_64 > python2-PyMySQL-0.6.7-2.el7.noarch > python2-appdirs-1.4.0-4.el7.noarch > python2-castellan-0.3.1-1.el7.noarch > python2-cffi-1.5.2-1.el7.x86_64 > python2-cryptography-1.2.1-3.el7.x86_64 > python2-debtcollector-0.8.0-1.el7.noarch > python2-eventlet-0.17.4-4.el7.noarch > python2-fasteners-0.14.1-4.el7.noarch > python2-funcsigs-0.4-2.el7.noarch > python2-futurist-0.5.0-1.el7.noarch > python2-greenlet-0.4.9-1.el7.x86_64 > python2-ironic-inspector-client-1.2.0-2.el7.noarch > python2-iso8601-0.1.11-1.el7.noarch > python2-jsonpath-rw-ext-0.1.7-1.1.el7.noarch > python2-mock-1.3.0-1.el7.noarch > python2-os-brick-0.5.0-1.el7.noarch > python2-os-client-config-1.7.4-1.el7.noarch > 2:python2-oslo-config-2.4.0-1.el7.noarch > python2-oslo-context-0.6.0-1.el7.noarch > python2-oslo-i18n-2.6.0-1.el7.noarch > python2-oslo-reports-0.5.0-1.el7.noarch > python2-oslo-serialization-1.9.0-1.el7.noarch > python2-oslo-utils-2.5.0-1.el7.noarch > python2-passlib-1.6.5-1.el7.noarch > python2-pecan-1.0.2-2.el7.noarch > python2-pyasn1-0.1.9-6.el7.1.noarch > python2-rsa-3.3-2.el7.noarch > python2-singledispatch-3.4.0.3-4.el7.noarch > python2-suds-0.7-0.1.94664ddd46a6.el7.noarch > python2-wsme-0.7.0-2.el7.noarch > rabbitmq-server-3.3.5-17.el7.noarch > ruby-augeas-0.5.0-1.el7.x86_64 > ruby-shadow-1.4.1-23.el7.x86_64 > rubygem-rgen-0.6.6-2.el7.noarch > tripleo-common-0.0.1-2.el7.noarch > > > Thank you, > > > Luca > > > > > > -- > "E' assurdo impiegare gli uomini di intelligenza eccellente per fare > calcoli che potrebbero essere affidati a chiunque se si usassero > delle > macchine" > Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) > > "Internet ? la pi? grande biblioteca del mondo. > Ma il problema ? che i libri sono tutti sparsi sul pavimento" > John Allen Paulos, Matematico (1945-vivente) > > Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Christopher Brown OpenStack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner. OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. 
Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. From abregman at redhat.com Tue Aug 2 17:51:51 2016 From: abregman at redhat.com (Arie Bregman) Date: Tue, 2 Aug 2016 20:51:51 +0300 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin wrote: > > > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman wrote: >> >> It became a discussion around the official installer and how to >> improve it. While it's an important discussion, no doubt, I actually >> want to focus on our automation and CI tools. >> >> Since I see there is an agreement that collaboration does make sense >> here, let's move to the hard questions :) >> >> Wes, Tal - there is huge difference right now between infrared and >> tripleo-quickstart in their structure. One is all-in-one project and >> the other one is multiple micro projects managed by one project. Do >> you think there is a way to consolidate or move to a different model >> which will make sense for both RDO and RHOSP? something that both >> groups can work on. > > > I am happy to be part of the discussion, and I am also very willing to help > and try to drive suggestions to the tripleo-quickstart community. > I need to make a point clear though, just to make sure we're on the same > page.. I do not own oooq, I am not a core on oooq. > I can help facilitate a discussion but oooq is an upstream tripleo tool that > replaces instack-virt-setup [1]. > It also happens to be a great tool for easily deploying TripleO end to end > [3] > > What I *can* do is show everyone how to manipulate tripleo-quickstart and > customize it with composable ansible roles, templates, settings etc.. > This would allow any upstream or downstream project to override the native > oooq roles and *any* step that does not work for another group w/ 3rd party > roles [2]. > These 3rd party roles can be free and opensource or internal only, it works > either way. > This was discussed in depth as part of the production chain meetings, the > message may have been lost unfortunately. > > I hope this resets your expectations of what I can and can not do as part of > these discussions. > Let me know where and when and I'm happy to be part of the discussion. Thanks for clarifying :) Next reasonable step would probably be to propose some sort of blueprint for tripleo-quickstart to include some of InfraRed features and by that have one tool driven by upstream development that can be either cloned downstream or used as it is with an internal data project. OR have InfraRed pushed into tripleo/openstack namespace and expose it to the RDO community (without internal data of course). Personally, I really like the pluggable[1] structure (which allows it to actually consume tripleo-quickstart) so I'm not sure if it can be really merged with tripleo-quickstart as proposed in the first option. I like the second option, although it still forces us to have two tools, but after a period of time, I believe it will be clear what the community prefers, which will allow us to remove one of the projects eventually. So, unless there are other ideas, I think the next move should be made by Tal. 
Tal, I'm willing to help with whatever is needed. [1] http://infrared.readthedocs.io/en/latest > > Thanks > > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart > [2] > https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo > [3[ https://www.rdoproject.org/tripleo/ > > >> >> >> Raoul - I totally agree with you, especially with "difficult for >> anyone to start contributing and collaborate". This is exactly why >> this discussion started. If we can agree on one set of tools, it will >> make everyone's life easier - current groups, new contributors, folks >> that just want to deploy TripleO quickly. But I'm afraid some >> sacrifices need to be made by both groups. >> >> David - I thought WeiRDO is used only for packstack, so I apologize I >> didn't include it. It does sound like an anther testing project, is >> there a place to merge it with another existing testing project? like >> Octario for example or one of TripleO testing projects. Or does it >> make sense to keep it a standalone project? >> >> >> >> >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown >> wrote: >> > Hello RDOistas (I think that is the expression?), >> > >> > Another year, another OpenStack deployment tool. :) >> > >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: >> >> If we are talking about tools, I would also want to add something >> >> with regards to user interface of these tools. This is based on my >> >> own experience: >> >> >> >> I started trying to deploy Openstack with Staypuft and The Foreman. >> >> The UI of The Foreman was intuitive enough for the discovery and >> >> provisioning of the servers. The OpenStack portion, not so much. >> > >> > This is exactly mine also. I think this works really well in very large >> > enterprise environments where you need to split out services over more >> > than three controllers. You do need good in-house puppet skills though >> > so better for enterprise with a good sysadmin team. >> > >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I >> >> believe) that allowed you to graphically build your Openstack cloud. >> >> That was a reasonable good GUI for Openstack. >> > >> > Well, I found it barely usable. It was only ever good as a graphical >> > representiation of what the build was doing. Interacting with it was >> > not great. >> > >> >> Following that, TripleO become a script based installer, that >> >> required experience in Heat templates. I know I didn?t have it and >> >> had to ask in the mailing list about how to present this or change >> >> that. I got a couple of installs working with this setup. >> > >> > Works well now that I understand all the foibles and have invested time >> > into understanding heat templates and puppet modules. Its good in that >> > it forces you to learn about orchestration which is such an important >> > end-goal of cloud environments. >> > >> >> In the last session in Austin, my goal was to obtain information on >> >> how others were installing Openstack. I was pointed to Fuel as an >> >> alternative. I tried it up, and it just worked. It had the >> >> discovering capability from The Foreman, and the configuration >> >> options from TripleO. I understand that is based in Ansible and >> >> because of that, it is not fully CentOS ready for all the nodes (at >> >> least not in version 9 that I tried). In any case, as a deployer and >> >> installer, it is the most well rounded tool that I found. >> > >> > This is interesting to know. 
I've heard of Fuel of course but there are >> > some politics involved - it still has the team:single-vendor tag but >> > from what I see Mirantis are very keen for it to become the default >> > OpenStack installer. I don't think being Ansible-based should be a >> > problem - we are deploying OpenShift on OpenStack which uses Openshift- >> > ansible - this recently moved to Ansible 2.1 without too much >> > disruption. >> > >> >> I?d love to see RDO moving into that direction, and having an easy to >> >> use, end user ready deployer tool. >> > >> > If its as good as you say its definitely worth evaluating. From our >> > point of view, we want to be able to add services to the pacemaker >> > cluster with some ease - for example Magnum and Sahara - and it looks >> > like there are steps being taken with regards to composable roles and >> > simplification of the pacemaker cluster to just core services. >> > >> > But if someone can explain that better I would appreciate it. >> > >> > Regards >> > >> >> IB >> >> >> >> >> >> __ >> >> Ignacio Bravo >> >> LTG Federal, Inc >> >> www.ltgfederal.com >> >> >> >> >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard >> >> > wrote: >> >> > >> >> > The vast majority of RDO's CI relies on using upstream >> >> > installation/deployment projects in order to test installation of >> >> > RDO >> >> > packages in different ways and configurations. >> >> > >> >> > Unless I'm mistaken, TripleO Quickstart was originally created as a >> >> > mean to "easily" install TripleO in different topologies without >> >> > requiring a massive amount of hardware. >> >> > This project allows us to test TripleO in virtual deployments on >> >> > just >> >> > one server instead of, say, 6. >> >> > >> >> > There's also WeIRDO [1] which was left out of your list. >> >> > WeIRDO is super simple and simply aims to run upstream gate jobs >> >> > (such >> >> > as puppet-openstack-integration [2][3] and packstack [4][5]) >> >> > outside >> >> > of the gate. >> >> > It'll install dependencies that are expected to be there (i.e, >> >> > usually >> >> > set up by the openstack-infra gate preparation jobs), set up the >> >> > trunk >> >> > repositories we're interested in testing and the rest is handled by >> >> > the upstream project testing framework. >> >> > >> >> > The WeIRDO project is /very/ low maintenance and brings an >> >> > exceptional >> >> > amount of coverage and value. >> >> > This coverage is important because RDO provides OpenStack packages >> >> > or >> >> > projects that are not necessarily used by TripleO and the reality >> >> > is >> >> > that not everyone deploying OpenStack on CentOS with RDO will be >> >> > using >> >> > TripleO. >> >> > >> >> > Anyway, sorry for sidetracking but back to the topic, thanks for >> >> > opening the discussion. >> >> > >> >> > What honestly perplexes me is the situation of CI in RDO and OSP, >> >> > especially around TripleO/Director, is the amount of work that is >> >> > spent downstream. >> >> > And by downstream, here, I mean anything that isn't in TripleO >> >> > proper. >> >> > >> >> > I keep dreaming about how awesome upstream TripleO CI would be if >> >> > all >> >> > that effort was spent directly there instead -- and then that all >> >> > work >> >> > could bear fruit and trickle down downstream for free. >> >> > Exactly like how we keep improving the testing coverage in >> >> > puppet-openstack-integration, it's automatically pulled in RDO CI >> >> > through WeIRDO for free. 
>> >> > We make the upstream better and we benefit from it simultaneously: >> >> > everyone wins. >> >> > >> >> > [1]: https://github.com/rdo-infra/weirdo >> >> > [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst >> >> > ack >> >> > [3]: https://github.com/openstack/puppet-openstack-integration#desc >> >> > ription >> >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> >> > [5]: https://github.com/openstack/packstack#packstack-integration-t >> >> > ests >> >> > >> >> > David Moreau Simard >> >> > Senior Software Engineer | Openstack RDO >> >> > >> >> > dmsimard = [irc, github, twitter] >> >> > >> >> > David Moreau Simard >> >> > Senior Software Engineer | Openstack RDO >> >> > >> >> > dmsimard = [irc, github, twitter] >> >> > >> >> > >> >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman >> >> > wrote: >> >> > > Hi, >> >> > > >> >> > > I would like to start a discussion on the overlap between tools >> >> > > we >> >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. >> >> > > >> >> > > Several months ago, we worked on one common framework for >> >> > > deploying >> >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it >> >> > > didn't work out well, which eventually led each group to focus on >> >> > > developing other existing/new tools. >> >> > > >> >> > > What we have right now for deploying and testing >> >> > > -------------------------------------------------------- >> >> > > === Component CI, Gating === >> >> > > I'll start with the projects we created, I think that's only fair >> >> > > :) >> >> > > >> >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB >> >> > > project. >> >> > > >> >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >> >> > > release. >> >> > > >> >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, >> >> > > tempest, >> >> > > csit) + Patching RPMs with submitted code. >> >> > > >> >> > > === Automation, QE === >> >> > > * InfraRed[4] - provision install and test. Pluggable and >> >> > > modular, >> >> > > allows you to create your own provisioner, installer and tester. >> >> > > >> >> > > As far as I know, the groups is working now on different >> >> > > structure of >> >> > > one main project and three sub projects (provision, install and >> >> > > test). >> >> > > >> >> > > === RDO === >> >> > > I didn't use RDO tools, so I apologize if I got something wrong: >> >> > > >> >> > > * About ~25 micro independent Ansible roles[5]. You can either >> >> > > choose >> >> > > to use one of them or several together. They are used for >> >> > > provisioning, installing and testing Tripleo. >> >> > > >> >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying >> >> > > tripleo >> >> > > and test it. >> >> > > >> >> > > As I said, I didn't use the tools, so feel free to add more >> >> > > information you think is relevant. >> >> > > >> >> > > === More? === >> >> > > I hope not. Let us know if are familiar with more tools. >> >> > > >> >> > > Conclusion >> >> > > -------------- >> >> > > So as you can see, there are several projects that eventually >> >> > > overlap >> >> > > in many areas. Each group is basically using the same tasks >> >> > > (provision >> >> > > resources, build/import overcloud images, run tempest, collect >> >> > > logs, >> >> > > etc.) >> >> > > >> >> > > Personally, I think it's a waste of resources. 
For each task >> >> > > there is >> >> > > at least two people from different groups who work on exactly the >> >> > > same >> >> > > task. The most recent example I can give is OVB. As far as I >> >> > > know, >> >> > > both groups are working on implementing it in their set of tools >> >> > > right >> >> > > now. >> >> > > >> >> > > On the other hand, you can always claim: "we already tried to >> >> > > work on >> >> > > the same framework, we failed to do it successfully" - right, but >> >> > > maybe with better ground rules we can manage it. We would >> >> > > defiantly >> >> > > benefit a lot from doing that. >> >> > > >> >> > > What's next? >> >> > > ---------------- >> >> > > So first of all, I would like to hear from you if you think that >> >> > > we >> >> > > can collaborate once again or is it actually better to keep it as >> >> > > it >> >> > > is now. >> >> > > >> >> > > If you agree that collaboration here makes sense, maybe you have >> >> > > ideas >> >> > > on how we can do it better this time. >> >> > > >> >> > > I think that setting up a meeting to discuss the right >> >> > > architecture >> >> > > for the project(s) and decide on good review/gating process, >> >> > > would be >> >> > > a good start. >> >> > > >> >> > > Please let me know what do you think and keep in mind that this >> >> > > is not >> >> > > about which tool is better!. As you can see I didn't mention the >> >> > > time >> >> > > it takes for each tool to deploy and test, and also not the full >> >> > > feature list it supports. >> >> > > If possible, we should keep it about collaborating and not >> >> > > choosing >> >> > > the best tool. Our solution could be the combination of two or >> >> > > more >> >> > > tools eventually (tripleo-red, infra-quickstart? :D ) >> >> > > >> >> > > "You may say I'm a dreamer, but I'm not the only one. I hope some >> >> > > day >> >> > > you'll join us and the infra will be as one" :) >> >> > > >> >> > > [1] https://github.com/redhat-openstack/ansible-ovb >> >> > > [2] https://github.com/redhat-openstack/ansible-rhosp >> >> > > [3] https://github.com/redhat-openstack/octario >> >> > > [4] https://github.com/rhosqeauto/InfraRed >> >> > > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi >> >> > > ble-role >> >> > > [6] https://github.com/openstack/tripleo-quickstart >> >> > > >> >> > > _______________________________________________ >> >> > > rdo-list mailing list >> >> > > rdo-list at redhat.com >> >> > > https://www.redhat.com/mailman/listinfo/rdo-list >> >> > > >> >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > _______________________________________________ >> >> > rdo-list mailing list >> >> > rdo-list at redhat.com >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> > -- >> > Regards, >> > >> > Christopher Brown >> > OpenStack Engineer >> > OCF plc >> > >> > Tel: +44 (0)114 257 2200 >> > Web: www.ocf.co.uk >> > Blog: blog.ocf.co.uk >> > Twitter: @ocfplc >> > >> > Please note, any emails relating to an OCF Support request must always >> > be sent to support at ocf.co.uk for a ticket number to be generated or >> > existing support ticket to be updated. Should this not be done then OCF >> > cannot be held responsible for requests not dealt with in a timely >> > manner. >> > >> > OCF plc is a company registered in England and Wales. Registered number >> > 4132533, VAT number GB 780 6803 14. 
Registered office address: OCF plc, >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 >> > 2PG. >> > >> > This message is private and confidential. If you have received this >> > message in error, please notify us immediately and remove it from your >> > system. >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> -- >> Arie Bregman >> Red Hat Israel >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > > -- Arie Bregman Red Hat Israel Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview From whayutin at redhat.com Tue Aug 2 18:14:39 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 2 Aug 2016 14:14:39 -0400 Subject: [rdo-list] rdo infra weekly scrum Message-ID: Greetings, links to our weekly meeting. Highlights: * Paul Belanger walked through his goals w/ oooq and openstack infra. Thanks Paul * ansible logging improvements * disabling TripleO HA from ci.centos and use internal hardware * Gates for baremetal https://review.rdoproject.org/etherpad/p/rdo-infra-scrum https://bluejeans.com/s/a5WO/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Aug 2 18:52:55 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 2 Aug 2016 14:52:55 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: On Tue, Aug 2, 2016 at 1:51 PM, Arie Bregman wrote: > On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin > wrote: > > > > > > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman > wrote: > >> > >> It became a discussion around the official installer and how to > >> improve it. While it's an important discussion, no doubt, I actually > >> want to focus on our automation and CI tools. > >> > >> Since I see there is an agreement that collaboration does make sense > >> here, let's move to the hard questions :) > >> > >> Wes, Tal - there is huge difference right now between infrared and > >> tripleo-quickstart in their structure. One is all-in-one project and > >> the other one is multiple micro projects managed by one project. Do > >> you think there is a way to consolidate or move to a different model > >> which will make sense for both RDO and RHOSP? something that both > >> groups can work on. > > > > > > I am happy to be part of the discussion, and I am also very willing to > help > > and try to drive suggestions to the tripleo-quickstart community. > > I need to make a point clear though, just to make sure we're on the same > > page.. I do not own oooq, I am not a core on oooq. > > I can help facilitate a discussion but oooq is an upstream tripleo tool > that > > replaces instack-virt-setup [1]. > > It also happens to be a great tool for easily deploying TripleO end to > end > > [3] > > > > What I *can* do is show everyone how to manipulate tripleo-quickstart and > > customize it with composable ansible roles, templates, settings etc.. > > This would allow any upstream or downstream project to override the > native > > oooq roles and *any* step that does not work for another group w/ 3rd > party > > roles [2]. > > These 3rd party roles can be free and opensource or internal only, it > works > > either way. 
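
As a concrete picture of the third-party-role idea above: a team can keep its own roles in any git repository and layer them on top of the quickstart-provided ones with plain Ansible tooling. This is a generic sketch, not tripleo-quickstart's own requirements mechanism; the internal role URL and the playbook name are invented for illustration.

$ cat > thirdparty-requirements.yml <<'EOF'
# one public role from the redhat-openstack role library ...
- src: git+https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha
  name: tripleo-overcloud-validate-ha
# ... plus one internal-only role (hypothetical URL)
- src: git+https://git.example.com/my-team/ansible-role-my-overcloud-checks
  name: my-overcloud-checks
EOF
$ ansible-galaxy install -r thirdparty-requirements.yml -p roles/
$ ansible-playbook -i hosts validate-overcloud.yml   # a playbook that lists both roles
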
> > This was discussed in depth as part of the production chain meetings, > the > > message may have been lost unfortunately. > > > > I hope this resets your expectations of what I can and can not do as > part of > > these discussions. > > Let me know where and when and I'm happy to be part of the discussion. > > Thanks for clarifying :) > > Next reasonable step would probably be to propose some sort of > blueprint for tripleo-quickstart to include some of InfraRed features > and by that have one tool driven by upstream development that can be > either cloned downstream or used as it is with an internal data > project. > Sure.. a blueprint would help everyone understand the feature and the motivation. You could also just plug in the feature you are looking for to oooq and see if it meets your requirements. See below. > > OR > > have InfraRed pushed into tripleo/openstack namespace and expose it to > the RDO community (without internal data of course). Personally, I > really like the pluggable[1] structure (which allows it to actually > consume tripleo-quickstart) so I'm not sure if it can be really merged > with tripleo-quickstart as proposed in the first option. > The way oooq is built one can plugin or override any part at run time with custom playbooks, roles, and config. There isn't anything that needs to be checked in directly to oooq to use it. It's designed such that third parties can make their own decisions to use something native to quickstart, something from our role library, or something completely independent. This allows teams, individuals or whom ever to do what they need to with out having to fork or re-roll the entire framework. The important step is to note that these 3rd party roles or (oooq-extras) incubate, mature and then graduate to github/openstack. The upstream openstack community should lead, evaluate, and via blueprints vote on the canonical CI tool set. We can record a demonstration if required, but there is nothing stopping anyone right now from doing this today. I'm just browsing the role library for an example, I had no idea [1] existed. Looks like Raoul had a requirement and just made it work. Justin, from the browbeat project has graciously created some documentation regarding 3rd party roles. It has yet to merge, but it should help illustrate how these roles are used. [2] Thanks Arie for leading the discussion. [1] https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha [2] https://review.openstack.org/#/c/346733/ > > I like the second option, although it still forces us to have two > tools, but after a period of time, I believe it will be clear what the > community prefers, which will allow us to remove one of the projects > eventually. > > So, unless there are other ideas, I think the next move should be made by > Tal. > > Tal, I'm willing to help with whatever is needed. > > [1] http://infrared.readthedocs.io/en/latest > > > > > Thanks > > > > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart > > [2] > > > https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo > > [3[ https://www.rdoproject.org/tripleo/ > > > > > >> > >> > >> Raoul - I totally agree with you, especially with "difficult for > >> anyone to start contributing and collaborate". This is exactly why > >> this discussion started. If we can agree on one set of tools, it will > >> make everyone's life easier - current groups, new contributors, folks > >> that just want to deploy TripleO quickly. 
But I'm afraid some > >> sacrifices need to be made by both groups. > >> > >> David - I thought WeiRDO is used only for packstack, so I apologize I > >> didn't include it. It does sound like an anther testing project, is > >> there a place to merge it with another existing testing project? like > >> Octario for example or one of TripleO testing projects. Or does it > >> make sense to keep it a standalone project? > >> > >> > >> > >> > >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown > >> wrote: > >> > Hello RDOistas (I think that is the expression?), > >> > > >> > Another year, another OpenStack deployment tool. :) > >> > > >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: > >> >> If we are talking about tools, I would also want to add something > >> >> with regards to user interface of these tools. This is based on my > >> >> own experience: > >> >> > >> >> I started trying to deploy Openstack with Staypuft and The Foreman. > >> >> The UI of The Foreman was intuitive enough for the discovery and > >> >> provisioning of the servers. The OpenStack portion, not so much. > >> > > >> > This is exactly mine also. I think this works really well in very > large > >> > enterprise environments where you need to split out services over more > >> > than three controllers. You do need good in-house puppet skills though > >> > so better for enterprise with a good sysadmin team. > >> > > >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I > >> >> believe) that allowed you to graphically build your Openstack cloud. > >> >> That was a reasonable good GUI for Openstack. > >> > > >> > Well, I found it barely usable. It was only ever good as a graphical > >> > representiation of what the build was doing. Interacting with it was > >> > not great. > >> > > >> >> Following that, TripleO become a script based installer, that > >> >> required experience in Heat templates. I know I didn?t have it and > >> >> had to ask in the mailing list about how to present this or change > >> >> that. I got a couple of installs working with this setup. > >> > > >> > Works well now that I understand all the foibles and have invested > time > >> > into understanding heat templates and puppet modules. Its good in that > >> > it forces you to learn about orchestration which is such an important > >> > end-goal of cloud environments. > >> > > >> >> In the last session in Austin, my goal was to obtain information on > >> >> how others were installing Openstack. I was pointed to Fuel as an > >> >> alternative. I tried it up, and it just worked. It had the > >> >> discovering capability from The Foreman, and the configuration > >> >> options from TripleO. I understand that is based in Ansible and > >> >> because of that, it is not fully CentOS ready for all the nodes (at > >> >> least not in version 9 that I tried). In any case, as a deployer and > >> >> installer, it is the most well rounded tool that I found. > >> > > >> > This is interesting to know. I've heard of Fuel of course but there > are > >> > some politics involved - it still has the team:single-vendor tag but > >> > from what I see Mirantis are very keen for it to become the default > >> > OpenStack installer. I don't think being Ansible-based should be a > >> > problem - we are deploying OpenShift on OpenStack which uses > Openshift- > >> > ansible - this recently moved to Ansible 2.1 without too much > >> > disruption. 
> >> > > >> >> I?d love to see RDO moving into that direction, and having an easy to > >> >> use, end user ready deployer tool. > >> > > >> > If its as good as you say its definitely worth evaluating. From our > >> > point of view, we want to be able to add services to the pacemaker > >> > cluster with some ease - for example Magnum and Sahara - and it looks > >> > like there are steps being taken with regards to composable roles and > >> > simplification of the pacemaker cluster to just core services. > >> > > >> > But if someone can explain that better I would appreciate it. > >> > > >> > Regards > >> > > >> >> IB > >> >> > >> >> > >> >> __ > >> >> Ignacio Bravo > >> >> LTG Federal, Inc > >> >> www.ltgfederal.com > >> >> > >> >> > >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard > >> >> > wrote: > >> >> > > >> >> > The vast majority of RDO's CI relies on using upstream > >> >> > installation/deployment projects in order to test installation of > >> >> > RDO > >> >> > packages in different ways and configurations. > >> >> > > >> >> > Unless I'm mistaken, TripleO Quickstart was originally created as a > >> >> > mean to "easily" install TripleO in different topologies without > >> >> > requiring a massive amount of hardware. > >> >> > This project allows us to test TripleO in virtual deployments on > >> >> > just > >> >> > one server instead of, say, 6. > >> >> > > >> >> > There's also WeIRDO [1] which was left out of your list. > >> >> > WeIRDO is super simple and simply aims to run upstream gate jobs > >> >> > (such > >> >> > as puppet-openstack-integration [2][3] and packstack [4][5]) > >> >> > outside > >> >> > of the gate. > >> >> > It'll install dependencies that are expected to be there (i.e, > >> >> > usually > >> >> > set up by the openstack-infra gate preparation jobs), set up the > >> >> > trunk > >> >> > repositories we're interested in testing and the rest is handled by > >> >> > the upstream project testing framework. > >> >> > > >> >> > The WeIRDO project is /very/ low maintenance and brings an > >> >> > exceptional > >> >> > amount of coverage and value. > >> >> > This coverage is important because RDO provides OpenStack packages > >> >> > or > >> >> > projects that are not necessarily used by TripleO and the reality > >> >> > is > >> >> > that not everyone deploying OpenStack on CentOS with RDO will be > >> >> > using > >> >> > TripleO. > >> >> > > >> >> > Anyway, sorry for sidetracking but back to the topic, thanks for > >> >> > opening the discussion. > >> >> > > >> >> > What honestly perplexes me is the situation of CI in RDO and OSP, > >> >> > especially around TripleO/Director, is the amount of work that is > >> >> > spent downstream. > >> >> > And by downstream, here, I mean anything that isn't in TripleO > >> >> > proper. > >> >> > > >> >> > I keep dreaming about how awesome upstream TripleO CI would be if > >> >> > all > >> >> > that effort was spent directly there instead -- and then that all > >> >> > work > >> >> > could bear fruit and trickle down downstream for free. > >> >> > Exactly like how we keep improving the testing coverage in > >> >> > puppet-openstack-integration, it's automatically pulled in RDO CI > >> >> > through WeIRDO for free. > >> >> > We make the upstream better and we benefit from it simultaneously: > >> >> > everyone wins. 
> >> >> > > >> >> > [1]: https://github.com/rdo-infra/weirdo > >> >> > [2]: > https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst > >> >> > ack > >> >> > [3]: > https://github.com/openstack/puppet-openstack-integration#desc > >> >> > ription > >> >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> >> > [5]: > https://github.com/openstack/packstack#packstack-integration-t > >> >> > ests > >> >> > > >> >> > David Moreau Simard > >> >> > Senior Software Engineer | Openstack RDO > >> >> > > >> >> > dmsimard = [irc, github, twitter] > >> >> > > >> >> > David Moreau Simard > >> >> > Senior Software Engineer | Openstack RDO > >> >> > > >> >> > dmsimard = [irc, github, twitter] > >> >> > > >> >> > > >> >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > > >> >> > wrote: > >> >> > > Hi, > >> >> > > > >> >> > > I would like to start a discussion on the overlap between tools > >> >> > > we > >> >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. > >> >> > > > >> >> > > Several months ago, we worked on one common framework for > >> >> > > deploying > >> >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > >> >> > > didn't work out well, which eventually led each group to focus on > >> >> > > developing other existing/new tools. > >> >> > > > >> >> > > What we have right now for deploying and testing > >> >> > > -------------------------------------------------------- > >> >> > > === Component CI, Gating === > >> >> > > I'll start with the projects we created, I think that's only fair > >> >> > > :) > >> >> > > > >> >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB > >> >> > > project. > >> >> > > > >> >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per > >> >> > > release. > >> >> > > > >> >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, > >> >> > > tempest, > >> >> > > csit) + Patching RPMs with submitted code. > >> >> > > > >> >> > > === Automation, QE === > >> >> > > * InfraRed[4] - provision install and test. Pluggable and > >> >> > > modular, > >> >> > > allows you to create your own provisioner, installer and tester. > >> >> > > > >> >> > > As far as I know, the groups is working now on different > >> >> > > structure of > >> >> > > one main project and three sub projects (provision, install and > >> >> > > test). > >> >> > > > >> >> > > === RDO === > >> >> > > I didn't use RDO tools, so I apologize if I got something wrong: > >> >> > > > >> >> > > * About ~25 micro independent Ansible roles[5]. You can either > >> >> > > choose > >> >> > > to use one of them or several together. They are used for > >> >> > > provisioning, installing and testing Tripleo. > >> >> > > > >> >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying > >> >> > > tripleo > >> >> > > and test it. > >> >> > > > >> >> > > As I said, I didn't use the tools, so feel free to add more > >> >> > > information you think is relevant. > >> >> > > > >> >> > > === More? === > >> >> > > I hope not. Let us know if are familiar with more tools. > >> >> > > > >> >> > > Conclusion > >> >> > > -------------- > >> >> > > So as you can see, there are several projects that eventually > >> >> > > overlap > >> >> > > in many areas. Each group is basically using the same tasks > >> >> > > (provision > >> >> > > resources, build/import overcloud images, run tempest, collect > >> >> > > logs, > >> >> > > etc.) > >> >> > > > >> >> > > Personally, I think it's a waste of resources. 
For each task > >> >> > > there is > >> >> > > at least two people from different groups who work on exactly the > >> >> > > same > >> >> > > task. The most recent example I can give is OVB. As far as I > >> >> > > know, > >> >> > > both groups are working on implementing it in their set of tools > >> >> > > right > >> >> > > now. > >> >> > > > >> >> > > On the other hand, you can always claim: "we already tried to > >> >> > > work on > >> >> > > the same framework, we failed to do it successfully" - right, but > >> >> > > maybe with better ground rules we can manage it. We would > >> >> > > defiantly > >> >> > > benefit a lot from doing that. > >> >> > > > >> >> > > What's next? > >> >> > > ---------------- > >> >> > > So first of all, I would like to hear from you if you think that > >> >> > > we > >> >> > > can collaborate once again or is it actually better to keep it as > >> >> > > it > >> >> > > is now. > >> >> > > > >> >> > > If you agree that collaboration here makes sense, maybe you have > >> >> > > ideas > >> >> > > on how we can do it better this time. > >> >> > > > >> >> > > I think that setting up a meeting to discuss the right > >> >> > > architecture > >> >> > > for the project(s) and decide on good review/gating process, > >> >> > > would be > >> >> > > a good start. > >> >> > > > >> >> > > Please let me know what do you think and keep in mind that this > >> >> > > is not > >> >> > > about which tool is better!. As you can see I didn't mention the > >> >> > > time > >> >> > > it takes for each tool to deploy and test, and also not the full > >> >> > > feature list it supports. > >> >> > > If possible, we should keep it about collaborating and not > >> >> > > choosing > >> >> > > the best tool. Our solution could be the combination of two or > >> >> > > more > >> >> > > tools eventually (tripleo-red, infra-quickstart? :D ) > >> >> > > > >> >> > > "You may say I'm a dreamer, but I'm not the only one. I hope some > >> >> > > day > >> >> > > you'll join us and the infra will be as one" :) > >> >> > > > >> >> > > [1] https://github.com/redhat-openstack/ansible-ovb > >> >> > > [2] https://github.com/redhat-openstack/ansible-rhosp > >> >> > > [3] https://github.com/redhat-openstack/octario > >> >> > > [4] https://github.com/rhosqeauto/InfraRed > >> >> > > [5] > https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi > >> >> > > ble-role > >> >> > > [6] https://github.com/openstack/tripleo-quickstart > >> >> > > > >> >> > > _______________________________________________ > >> >> > > rdo-list mailing list > >> >> > > rdo-list at redhat.com > >> >> > > https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > > > >> >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> > _______________________________________________ > >> >> > rdo-list mailing list > >> >> > rdo-list at redhat.com > >> >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> > >> > -- > >> > Regards, > >> > > >> > Christopher Brown > >> > OpenStack Engineer > >> > OCF plc > >> > > >> > Tel: +44 (0)114 257 2200 > >> > Web: www.ocf.co.uk > >> > Blog: blog.ocf.co.uk > >> > Twitter: @ocfplc > >> > > >> > Please note, any emails relating to an OCF Support request must always > >> > be sent to support at ocf.co.uk for a ticket number to be generated or > >> > existing support ticket to be updated. Should this not be done then > OCF > >> > cannot be held responsible for requests not dealt with in a timely > >> > manner. 
> >> > > >> > OCF plc is a company registered in England and Wales. Registered > number > >> > 4132533, VAT number GB 780 6803 14. Registered office address: OCF > plc, > >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > >> > 2PG. > >> > > >> > This message is private and confidential. If you have received this > >> > message in error, please notify us immediately and remove it from your > >> > system. > >> > > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >> > >> > >> -- > >> Arie Bregman > >> Red Hat Israel > >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > > > > > > > > -- > Arie Bregman > Red Hat Israel > Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michele at acksyn.org Wed Aug 3 04:36:25 2016 From: michele at acksyn.org (Michele Baldessari) Date: Wed, 3 Aug 2016 06:36:25 +0200 Subject: [rdo-list] [tripleo] Troubles deploying Libery with HA setup In-Reply-To: References: Message-ID: <20160803043625.GA2440@palahniuk.int.rhx> Hi Luca, On Tue, Aug 02, 2016 at 05:59:00PM +0200, Luca 'remix_tj' Lorenzetto wrote: > If i go in depth with heat deployment-show i see that all resources > report this deploy_stderr: > > Error: Could not prefetch mysql_user provider 'mysql': Execution of > '/usr/bin/mysql -NBe SELECT CONCAT(User, '@',Host) AS User FROM > mysql.user' returned 1: ERROR 2002 (HY000): Can't connect to local > MySQL server through socket '/var/lib/mysql/mysql.sock' (2) > Error: Could not prefetch mysql_database provider 'mysql': Execution > of '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 > (HY000): Can't connect to local MySQL server through socket > '/var/lib/mysql/mysql.sock' (2)\ > Error: Command exceeded timeout > Error: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns: > change from notrun to 0 failed: Command exceeded timeout > Warning: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster > tripleo_cluster]: Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster > tripleo_cluster]: Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: > Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: > Skipping because of failed dependencies > Warning: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: > Skipping because of failed dependencies", > > I see nodes are stuck on puppet step > "auth-successful-across-all-nodes" (defined in > /etc/puppet/modules/pacemaker/manifests/corosync.pp) > > /usr/bin/python2 /usr/sbin/pcs cluster auth opsctrl0 opsctrl1 opsctrl2 > -u hacluster -p PASSWORD --force > > I suppose that the problem is due to corosync service not yet started. > But as far as i can see corosync will never start because > /etc/corosync/corosync.conf file is missing. What happens at this step is that "pcs cluster auth opsctrl0 opsctrl1 opsctrl2..." will set up a secret key between the three nodes and then configure corosync (/etc/corosync/corosync.conf) and pacemaker and then start both services on all three nodes. What you need to verify is why this command is stuck. 
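
Since the step above boils down to every controller being able to reach pcsd on its peers (corosync.conf does not exist yet at that point), a few quick checks are worth running on each controller while the command hangs. A sketch, using the node names from this deployment; 2224/tcp is pcsd and 5404-5405/udp are the usual corosync ports:

$ systemctl status pcsd                      # is pcsd running on this node?
$ ss -tnlp | grep 2224                       # and listening on 2224?
# can this controller reach pcsd on its peers? (pcsd answers HTTPS on 2224)
$ curl -k -o /dev/null -w '%{http_code}\n' https://opsctrl1:2224/
$ curl -k -o /dev/null -w '%{http_code}\n' https://opsctrl2:2224/
$ iptables -nL | grep -E '2224|5404|5405'    # anything filtering those ports?
# and see what the hung auth call is actually waiting on
$ strace -f -e trace=network -p $(pgrep -f 'pcs cluster auth' | head -1)
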
It is likely either due to networking issues, dns issues or firewalling issues. You can quickly try and strace the pcs process and see on which network connections it waits for replies that never arrive. cheers, Michele -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From tkammer at redhat.com Wed Aug 3 07:28:54 2016 From: tkammer at redhat.com (Tal Kammer) Date: Wed, 3 Aug 2016 10:28:54 +0300 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: Thanks Arie for starting this discussion! (and sorry for joining in late) Some detailed explanations for those not familiar with InfraRed and would like to get the highlights: (Feel free to skip to my inline comments if you are familiar / too long of a post :) ). The InfraRed project is an Ansible based project comprised from three distinct tools: (currently under the InfraRed[1] project, and being split into their own standalone project soon). 1. ir-provisioner - responsible for building up a custom environment - you can affect the memory, CPU, HDD size, number of HDD each node has + number of networks (with/without DHCP) so for example one can deploy the following topology: (only an example to show the versatile options) 1 undercloud on RHEL 7 with 16GB of ram + 60GB of HDD with 3 network interfaces. 3 controllers on Centos 7 with 8GB of ram + 40GB of HDD 2 compute on Centos 7 with 6GB of ram + 60GB HDD 3 ceph nodes on RHEL 7 with 4GB of ram + 2 HDD one with 20GB + one with 40GB Example usage: (setting up the above VMs with four different HW specs) ir-provisioner virsh --host-address= --host-key= --topology-nodes=undercloud:1,controller:3,compute:2,ceph:3 *Note: while it is written "controller/compute/ceph" this is just setting up VMs, the names act more as a reference to the user of what is the role of each node. The installation of Openstack is done with a dedicated tool called `ir-installer` (next) 2. ir-installer - responsible for installing the product - supports "quickstart" mode (setting up a working environment in ~30 minutes) or E2E mode which does a full installation in ~1h. The installation process is completely customized. You can supply your own heat templates / overcloud invocation / undercloud.conf file to use / etc. You can also just run a specific task (using ansible --tags), so if you have a deployment ready and just need to run say, the introspection phase, you can fully choose what to run and what to skip even. 3. ir-tester - responsible for installing / configuring / running the tests - this project is meant to hold all testing tools we use so a user will be able to run any testing utility he would like without the need to "dive in". we supply a simple interface requesting simple to choose the testing tool and the set of tests one wishes to run and we'll do the work for him :) More info about InfraRed can be found here[1] (though I must admit that we still need some "love" around our docs) [1] - http://infrared.readthedocs.io/en/latest/ On Tue, Aug 2, 2016 at 9:52 PM, Wesley Hayutin wrote: > > > On Tue, Aug 2, 2016 at 1:51 PM, Arie Bregman wrote: > >> On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin >> wrote: >> > >> > >> > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman >> wrote: >> >> >> >> It became a discussion around the official installer and how to >> >> improve it. 
While it's an important discussion, no doubt, I actually >> >> want to focus on our automation and CI tools. >> >> >> >> Since I see there is an agreement that collaboration does make sense >> >> here, let's move to the hard questions :) >> >> >> >> Wes, Tal - there is huge difference right now between infrared and >> >> tripleo-quickstart in their structure. One is all-in-one project and >> >> the other one is multiple micro projects managed by one project. Do >> >> you think there is a way to consolidate or move to a different model >> >> which will make sense for both RDO and RHOSP? something that both >> >> groups can work on. >> > >> > >> > I am happy to be part of the discussion, and I am also very willing to >> help >> > and try to drive suggestions to the tripleo-quickstart community. >> > I need to make a point clear though, just to make sure we're on the same >> > page.. I do not own oooq, I am not a core on oooq. >> > I can help facilitate a discussion but oooq is an upstream tripleo tool >> that >> > replaces instack-virt-setup [1]. >> > It also happens to be a great tool for easily deploying TripleO end to >> end >> > [3] >> > >> > What I *can* do is show everyone how to manipulate tripleo-quickstart >> and >> > customize it with composable ansible roles, templates, settings etc.. >> > This would allow any upstream or downstream project to override the >> native >> > oooq roles and *any* step that does not work for another group w/ 3rd >> party >> > roles [2]. >> > These 3rd party roles can be free and opensource or internal only, it >> works >> > either way. >> > This was discussed in depth as part of the production chain meetings, >> the >> > message may have been lost unfortunately. >> > >> > I hope this resets your expectations of what I can and can not do as >> part of >> > these discussions. >> > Let me know where and when and I'm happy to be part of the discussion. >> >> Thanks for clarifying :) >> >> Next reasonable step would probably be to propose some sort of >> blueprint for tripleo-quickstart to include some of InfraRed features >> and by that have one tool driven by upstream development that can be >> either cloned downstream or used as it is with an internal data >> project. >> > > Sure.. a blueprint would help everyone understand the feature and the > motivation. > You could also just plug in the feature you are looking for to oooq and > see if it meets > your requirements. See below. > While I think a blueprint is a good starting point, I'm afraid that our approach for provisioning machines is completely different so I'm not sure how to propose such a blueprint as it will probably require quite the design change from today's approach. > > >> >> OR >> >> have InfraRed pushed into tripleo/openstack namespace and expose it to >> the RDO community (without internal data of course). Personally, I >> really like the pluggable[1] structure (which allows it to actually >> consume tripleo-quickstart) so I'm not sure if it can be really merged >> with tripleo-quickstart as proposed in the first option. >> > I must admit that I like this option better as it introduces a tool to upstream and let the community drive it further / get more feedback on how to improve. A second benefit might be that by introducing a new concept / design we can take what is best from two worlds and improve. I would love to see an open discussion on the tool upstream and how we can improve the overall process. 
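
Putting the three InfraRed stages described earlier in Tal's mail side by side, an end-to-end run chains them in order. The ir-provisioner invocation is the one quoted above (the host address and SSH key values are placeholders); ir-installer and ir-tester take plugin-specific options, so rather than guess at flags the sketch only points at their --help:

# 1. provision the virtual topology; host address and key are placeholders
$ ir-provisioner virsh --host-address=hypervisor.example.com \
      --host-key=$HOME/.ssh/id_rsa \
      --topology-nodes=undercloud:1,controller:3,compute:2,ceph:3
$ ir-installer --help      # 2. install OpenStack on the provisioned nodes
$ ir-tester --help         # 3. configure and run the chosen test tool
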
> The way oooq is built one can plugin or override any part at run time > with custom playbooks, roles, and config. There isn't anything that needs > to be > checked in directly to oooq to use it. > > It's designed such that third parties can make their own decisions to use > something > native to quickstart, something from our role library, or something > completely independent. > This allows teams, individuals or whom ever to do what they need to with > out having to fork or re-roll the entire framework. > > The important step is to note that these 3rd party roles or (oooq-extras) > incubate, mature and then graduate to github/openstack. > The upstream openstack community should lead, evaluate, and via blueprints > vote on the canonical CI tool set. > > We can record a demonstration if required, but there is nothing stopping > anyone right now from > doing this today. I'm just browsing the role library for an example, I > had no idea [1] existed. > Looks like Raoul had a requirement and just made it work. > > Justin, from the browbeat project has graciously created some > documentation regarding 3rd party roles. > It has yet to merge, but it should help illustrate how these roles are > used. [2] > > Thanks Arie for leading the discussion. > > [1] > https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha > [2] https://review.openstack.org/#/c/346733/ > > > > > > >> >> I like the second option, although it still forces us to have two >> tools, but after a period of time, I believe it will be clear what the >> community prefers, which will allow us to remove one of the projects >> eventually. >> >> So, unless there are other ideas, I think the next move should be made by >> Tal. >> >> Tal, I'm willing to help with whatever is needed. >> > Thanks Arie for starting this discussion again, I believe we have still much work ahead of us but this is definitely a step in the right direction. > >> [1] http://infrared.readthedocs.io/en/latest >> >> > >> > Thanks >> > >> > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart >> > [2] >> > >> https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo >> > [3[ https://www.rdoproject.org/tripleo/ >> > >> > >> >> >> >> >> >> Raoul - I totally agree with you, especially with "difficult for >> >> anyone to start contributing and collaborate". This is exactly why >> >> this discussion started. If we can agree on one set of tools, it will >> >> make everyone's life easier - current groups, new contributors, folks >> >> that just want to deploy TripleO quickly. But I'm afraid some >> >> sacrifices need to be made by both groups. >> >> >> >> David - I thought WeiRDO is used only for packstack, so I apologize I >> >> didn't include it. It does sound like an anther testing project, is >> >> there a place to merge it with another existing testing project? like >> >> Octario for example or one of TripleO testing projects. Or does it >> >> make sense to keep it a standalone project? >> >> >> >> >> >> >> >> >> >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown >> >> wrote: >> >> > Hello RDOistas (I think that is the expression?), >> >> > >> >> > Another year, another OpenStack deployment tool. :) >> >> > >> >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: >> >> >> If we are talking about tools, I would also want to add something >> >> >> with regards to user interface of these tools. 
This is based on my >> >> >> own experience: >> >> >> >> >> >> I started trying to deploy Openstack with Staypuft and The Foreman. >> >> >> The UI of The Foreman was intuitive enough for the discovery and >> >> >> provisioning of the servers. The OpenStack portion, not so much. >> >> > >> >> > This is exactly mine also. I think this works really well in very >> large >> >> > enterprise environments where you need to split out services over >> more >> >> > than three controllers. You do need good in-house puppet skills >> though >> >> > so better for enterprise with a good sysadmin team. >> >> > >> >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I >> >> >> believe) that allowed you to graphically build your Openstack cloud. >> >> >> That was a reasonable good GUI for Openstack. >> >> > >> >> > Well, I found it barely usable. It was only ever good as a graphical >> >> > representiation of what the build was doing. Interacting with it was >> >> > not great. >> >> > >> >> >> Following that, TripleO become a script based installer, that >> >> >> required experience in Heat templates. I know I didn?t have it and >> >> >> had to ask in the mailing list about how to present this or change >> >> >> that. I got a couple of installs working with this setup. >> >> > >> >> > Works well now that I understand all the foibles and have invested >> time >> >> > into understanding heat templates and puppet modules. Its good in >> that >> >> > it forces you to learn about orchestration which is such an important >> >> > end-goal of cloud environments. >> >> > >> >> >> In the last session in Austin, my goal was to obtain information on >> >> >> how others were installing Openstack. I was pointed to Fuel as an >> >> >> alternative. I tried it up, and it just worked. It had the >> >> >> discovering capability from The Foreman, and the configuration >> >> >> options from TripleO. I understand that is based in Ansible and >> >> >> because of that, it is not fully CentOS ready for all the nodes (at >> >> >> least not in version 9 that I tried). In any case, as a deployer and >> >> >> installer, it is the most well rounded tool that I found. >> >> > >> >> > This is interesting to know. I've heard of Fuel of course but there >> are >> >> > some politics involved - it still has the team:single-vendor tag but >> >> > from what I see Mirantis are very keen for it to become the default >> >> > OpenStack installer. I don't think being Ansible-based should be a >> >> > problem - we are deploying OpenShift on OpenStack which uses >> Openshift- >> >> > ansible - this recently moved to Ansible 2.1 without too much >> >> > disruption. >> >> > >> >> >> I?d love to see RDO moving into that direction, and having an easy >> to >> >> >> use, end user ready deployer tool. >> >> > >> >> > If its as good as you say its definitely worth evaluating. From our >> >> > point of view, we want to be able to add services to the pacemaker >> >> > cluster with some ease - for example Magnum and Sahara - and it looks >> >> > like there are steps being taken with regards to composable roles and >> >> > simplification of the pacemaker cluster to just core services. >> >> > >> >> > But if someone can explain that better I would appreciate it. 
>> >> > >> >> > Regards >> >> > >> >> >> IB >> >> >> >> >> >> >> >> >> __ >> >> >> Ignacio Bravo >> >> >> LTG Federal, Inc >> >> >> www.ltgfederal.com >> >> >> >> >> >> >> >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard >> >> >> > wrote: >> >> >> > >> >> >> > The vast majority of RDO's CI relies on using upstream >> >> >> > installation/deployment projects in order to test installation of >> >> >> > RDO >> >> >> > packages in different ways and configurations. >> >> >> > >> >> >> > Unless I'm mistaken, TripleO Quickstart was originally created as >> a >> >> >> > mean to "easily" install TripleO in different topologies without >> >> >> > requiring a massive amount of hardware. >> >> >> > This project allows us to test TripleO in virtual deployments on >> >> >> > just >> >> >> > one server instead of, say, 6. >> >> >> > >> >> >> > There's also WeIRDO [1] which was left out of your list. >> >> >> > WeIRDO is super simple and simply aims to run upstream gate jobs >> >> >> > (such >> >> >> > as puppet-openstack-integration [2][3] and packstack [4][5]) >> >> >> > outside >> >> >> > of the gate. >> >> >> > It'll install dependencies that are expected to be there (i.e, >> >> >> > usually >> >> >> > set up by the openstack-infra gate preparation jobs), set up the >> >> >> > trunk >> >> >> > repositories we're interested in testing and the rest is handled >> by >> >> >> > the upstream project testing framework. >> >> >> > >> >> >> > The WeIRDO project is /very/ low maintenance and brings an >> >> >> > exceptional >> >> >> > amount of coverage and value. >> >> >> > This coverage is important because RDO provides OpenStack packages >> >> >> > or >> >> >> > projects that are not necessarily used by TripleO and the reality >> >> >> > is >> >> >> > that not everyone deploying OpenStack on CentOS with RDO will be >> >> >> > using >> >> >> > TripleO. >> >> >> > >> >> >> > Anyway, sorry for sidetracking but back to the topic, thanks for >> >> >> > opening the discussion. >> >> >> > >> >> >> > What honestly perplexes me is the situation of CI in RDO and OSP, >> >> >> > especially around TripleO/Director, is the amount of work that is >> >> >> > spent downstream. >> >> >> > And by downstream, here, I mean anything that isn't in TripleO >> >> >> > proper. >> >> >> > >> >> >> > I keep dreaming about how awesome upstream TripleO CI would be if >> >> >> > all >> >> >> > that effort was spent directly there instead -- and then that all >> >> >> > work >> >> >> > could bear fruit and trickle down downstream for free. >> >> >> > Exactly like how we keep improving the testing coverage in >> >> >> > puppet-openstack-integration, it's automatically pulled in RDO CI >> >> >> > through WeIRDO for free. >> >> >> > We make the upstream better and we benefit from it simultaneously: >> >> >> > everyone wins. 
>> >> >> > >> >> >> > [1]: https://github.com/rdo-infra/weirdo >> >> >> > [2]: >> https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst >> >> >> > ack >> >> >> > [3]: >> https://github.com/openstack/puppet-openstack-integration#desc >> >> >> > ription >> >> >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >> >> >> > [5]: >> https://github.com/openstack/packstack#packstack-integration-t >> >> >> > ests >> >> >> > >> >> >> > David Moreau Simard >> >> >> > Senior Software Engineer | Openstack RDO >> >> >> > >> >> >> > dmsimard = [irc, github, twitter] >> >> >> > >> >> >> > David Moreau Simard >> >> >> > Senior Software Engineer | Openstack RDO >> >> >> > >> >> >> > dmsimard = [irc, github, twitter] >> >> >> > >> >> >> > >> >> >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman < >> abregman at redhat.com> >> >> >> > wrote: >> >> >> > > Hi, >> >> >> > > >> >> >> > > I would like to start a discussion on the overlap between tools >> >> >> > > we >> >> >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. >> >> >> > > >> >> >> > > Several months ago, we worked on one common framework for >> >> >> > > deploying >> >> >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say >> it >> >> >> > > didn't work out well, which eventually led each group to focus >> on >> >> >> > > developing other existing/new tools. >> >> >> > > >> >> >> > > What we have right now for deploying and testing >> >> >> > > -------------------------------------------------------- >> >> >> > > === Component CI, Gating === >> >> >> > > I'll start with the projects we created, I think that's only >> fair >> >> >> > > :) >> >> >> > > >> >> >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the >> OVB >> >> >> > > project. >> >> >> > > >> >> >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >> >> >> > > release. >> >> >> > > >> >> >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, >> >> >> > > tempest, >> >> >> > > csit) + Patching RPMs with submitted code. >> >> >> > > >> >> >> > > === Automation, QE === >> >> >> > > * InfraRed[4] - provision install and test. Pluggable and >> >> >> > > modular, >> >> >> > > allows you to create your own provisioner, installer and tester. >> >> >> > > >> >> >> > > As far as I know, the groups is working now on different >> >> >> > > structure of >> >> >> > > one main project and three sub projects (provision, install and >> >> >> > > test). >> >> >> > > >> >> >> > > === RDO === >> >> >> > > I didn't use RDO tools, so I apologize if I got something wrong: >> >> >> > > >> >> >> > > * About ~25 micro independent Ansible roles[5]. You can either >> >> >> > > choose >> >> >> > > to use one of them or several together. They are used for >> >> >> > > provisioning, installing and testing Tripleo. >> >> >> > > >> >> >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying >> >> >> > > tripleo >> >> >> > > and test it. >> >> >> > > >> >> >> > > As I said, I didn't use the tools, so feel free to add more >> >> >> > > information you think is relevant. >> >> >> > > >> >> >> > > === More? === >> >> >> > > I hope not. Let us know if are familiar with more tools. >> >> >> > > >> >> >> > > Conclusion >> >> >> > > -------------- >> >> >> > > So as you can see, there are several projects that eventually >> >> >> > > overlap >> >> >> > > in many areas. 
Each group is basically using the same tasks >> >> >> > > (provision >> >> >> > > resources, build/import overcloud images, run tempest, collect >> >> >> > > logs, >> >> >> > > etc.) >> >> >> > > >> >> >> > > Personally, I think it's a waste of resources. For each task >> >> >> > > there is >> >> >> > > at least two people from different groups who work on exactly >> the >> >> >> > > same >> >> >> > > task. The most recent example I can give is OVB. As far as I >> >> >> > > know, >> >> >> > > both groups are working on implementing it in their set of tools >> >> >> > > right >> >> >> > > now. >> >> >> > > >> >> >> > > On the other hand, you can always claim: "we already tried to >> >> >> > > work on >> >> >> > > the same framework, we failed to do it successfully" - right, >> but >> >> >> > > maybe with better ground rules we can manage it. We would >> >> >> > > defiantly >> >> >> > > benefit a lot from doing that. >> >> >> > > >> >> >> > > What's next? >> >> >> > > ---------------- >> >> >> > > So first of all, I would like to hear from you if you think that >> >> >> > > we >> >> >> > > can collaborate once again or is it actually better to keep it >> as >> >> >> > > it >> >> >> > > is now. >> >> >> > > >> >> >> > > If you agree that collaboration here makes sense, maybe you have >> >> >> > > ideas >> >> >> > > on how we can do it better this time. >> >> >> > > >> >> >> > > I think that setting up a meeting to discuss the right >> >> >> > > architecture >> >> >> > > for the project(s) and decide on good review/gating process, >> >> >> > > would be >> >> >> > > a good start. >> >> >> > > >> >> >> > > Please let me know what do you think and keep in mind that this >> >> >> > > is not >> >> >> > > about which tool is better!. As you can see I didn't mention the >> >> >> > > time >> >> >> > > it takes for each tool to deploy and test, and also not the full >> >> >> > > feature list it supports. >> >> >> > > If possible, we should keep it about collaborating and not >> >> >> > > choosing >> >> >> > > the best tool. Our solution could be the combination of two or >> >> >> > > more >> >> >> > > tools eventually (tripleo-red, infra-quickstart? :D ) >> >> >> > > >> >> >> > > "You may say I'm a dreamer, but I'm not the only one. 
I hope >> some >> >> >> > > day >> >> >> > > you'll join us and the infra will be as one" :) >> >> >> > > >> >> >> > > [1] https://github.com/redhat-openstack/ansible-ovb >> >> >> > > [2] https://github.com/redhat-openstack/ansible-rhosp >> >> >> > > [3] https://github.com/redhat-openstack/octario >> >> >> > > [4] https://github.com/rhosqeauto/InfraRed >> >> >> > > [5] >> https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi >> >> >> > > ble-role >> >> >> > > [6] https://github.com/openstack/tripleo-quickstart >> >> >> > > >> >> >> > > _______________________________________________ >> >> >> > > rdo-list mailing list >> >> >> > > rdo-list at redhat.com >> >> >> > > https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> > > >> >> >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> > _______________________________________________ >> >> >> > rdo-list mailing list >> >> >> > rdo-list at redhat.com >> >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> > >> >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> >> > -- >> >> > Regards, >> >> > >> >> > Christopher Brown >> >> > OpenStack Engineer >> >> > OCF plc >> >> > >> >> > Tel: +44 (0)114 257 2200 >> >> > Web: www.ocf.co.uk >> >> > Blog: blog.ocf.co.uk >> >> > Twitter: @ocfplc >> >> > >> >> > Please note, any emails relating to an OCF Support request must >> always >> >> > be sent to support at ocf.co.uk for a ticket number to be generated or >> >> > existing support ticket to be updated. Should this not be done then >> OCF >> >> > cannot be held responsible for requests not dealt with in a timely >> >> > manner. >> >> > >> >> > OCF plc is a company registered in England and Wales. Registered >> number >> >> > 4132533, VAT number GB 780 6803 14. Registered office address: OCF >> plc, >> >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield >> S35 >> >> > 2PG. >> >> > >> >> > This message is private and confidential. If you have received this >> >> > message in error, please notify us immediately and remove it from >> your >> >> > system. >> >> > >> >> > _______________________________________________ >> >> > rdo-list mailing list >> >> > rdo-list at redhat.com >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> >> >> >> >> -- >> >> Arie Bregman >> >> Red Hat Israel >> >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview >> > >> > >> >> >> >> -- >> Arie Bregman >> Red Hat Israel >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview >> > > -- Tal Kammer Associate manager, automation and infrastracture, Openstack platform. Red Hat Israel Automation group mojo: https://mojo.redhat.com/docs/DOC-1011659 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Wed Aug 3 08:17:47 2016 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Wed, 3 Aug 2016 10:17:47 +0200 Subject: [rdo-list] [tripleo] Troubles deploying Libery with HA setup In-Reply-To: <20160803043625.GA2440@palahniuk.int.rhx> References: <20160803043625.GA2440@palahniuk.int.rhx> Message-ID: On Wed, Aug 3, 2016 at 6:36 AM, Michele Baldessari wrote: > Hi Luca, [cut] Hi Michele, > What happens at this step is that "pcs cluster auth opsctrl0 opsctrl1 > opsctrl2..." 
will set up a secret key between the three nodes and then > configure corosync (/etc/corosync/corosync.conf) and pacemaker and then > start both services on all three nodes. Thank you for the explanation. > What you need to verify is why > this command is stuck. It is likely either due to networking issues, dns > issues or firewalling issues. You can quickly try and strace the pcs > process and see on which network connections it waits for replies that > never arrive. I see this on netstat: [heat-admin at opsctrl0 ~]$ sudo netstat -alptn | grep 2224 tcp 0 0 172.25.122.13:44378 172.25.122.12:2224 ESTABLISHED 26473/ruby tcp 0 0 172.25.122.13:55286 172.25.122.13:2224 ESTABLISHED 26473/ruby tcp 0 0 172.25.122.13:43808 172.25.122.14:2224 ESTABLISHED 26473/ruby tcp6 2 0 :::2224 :::* LISTEN 26276/ruby tcp6 200 0 172.25.122.13:2224 172.25.122.13:55286 ESTABLISHED - tcp6 200 0 172.25.122.13:2224 172.25.122.12:59155 ESTABLISHED - tcp6 0 1457 172.25.122.13:2224 172.25.122.14:39918 ESTABLISHED 26276/ruby (similar on the other 3 nodes) PID 26276 is /usr/bin/ruby -I/usr/lib/pcsd /usr/lib/pcsd/ssl.rb PID 26473 is /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb auth which is a child process of /usr/bin/python2 /usr/sbin/pcs cluster auth opsctrl0 opsctrl1 opsctrl2 -u hacluster -p password --force Stracing 26276 i continue to get: select(11, [10], NULL, NULL, {2, 0}) = 0 (Timeout) With lsof i see that there are these fds open: ruby 26276 root 0r CHR 1,3 0t0 1028 /dev/null ruby 26276 root 1u REG 8,2 32181 20889837 /var/log/pcsd/pcsd.log ruby 26276 root 2u REG 8,2 32181 20889837 /var/log/pcsd/pcsd.log ruby 26276 root 3r FIFO 0,8 0t0 72146 pipe ruby 26276 root 4w FIFO 0,8 0t0 72146 pipe ruby 26276 root 5r FIFO 0,8 0t0 72147 pipe ruby 26276 root 6w FIFO 0,8 0t0 72147 pipe ruby 26276 root 7u sock 0,6 0t0 64861 protocol: TCPv6 ruby 26276 root 9w REG 8,2 32181 20889837 /var/log/pcsd/pcsd.log ruby 26276 root 10u IPv6 23096 0t0 TCP *:efi-mg (LISTEN) i suppose that the one involved is the last one. Additionally, since i've seen that a log file is open, i can report that every minute the log get this lines: I, [2016-08-03T04:14:05.213788 #26276] INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name I, [2016-08-03T04:14:05.214140 #26276] INFO -- : CIB USER: hacluster, groups: I, [2016-08-03T04:14:05.219656 #26276] INFO -- : Return Value: 1 W, [2016-08-03T04:14:05.219834 #26276] WARN -- : Cannot read config 'corosync.conf' from '/etc/corosync/corosync.conf': No such file or directory - /etc/corosync/corosync.conf I'm not able to understand what's happening. -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From lorenzetto.luca at gmail.com Wed Aug 3 08:27:05 2016 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Wed, 3 Aug 2016 10:27:05 +0200 Subject: [rdo-list] [tripleo] Troubles deploying Libery with HA setup In-Reply-To: <1470155660.2497.36.camel@ocf.co.uk> References: <1470155660.2497.36.camel@ocf.co.uk> Message-ID: On Tue, Aug 2, 2016 at 6:34 PM, Christopher Brown wrote: > Hi Luca, Hi Christopher, > > It rings a bell but to be honest I would try the following: > > 1. 
Check hosts file on controllers - I wonder what are the content > there? # HEAT_HOSTS_START - Do not edit manually within this section! 172.25.122.16 opskvmsvil0.company.it opskvmsvil0 172.25.122.15 opskvmsvil1.company.it opskvmsvil1 172.25.122.17 opskvmsvil2.company.it opskvmsvil2 172.25.122.13 opsctrl0.company.it opsctrl0 overcloud 172.25.122.12 opsctrl1.company.it opsctrl1 overcloud 172.25.122.14 opsctrl2.company.it opsctrl2 overcloud # HEAT_HOSTS_END Seems fine to me. This is exactly what i intended. > 2. Revert hostname change and see if it is happy I'll try. Seems a good solution, but can't understand why this will impact if hosts file is correct and hosts can talk each other using the assigned name. > 3. Definitely try and compose from latest stable CentOS SIG Liberty > packages (not delorean or DLRN or whatever it is these days) I already tried this. I had a strange issue while deploying computing nodes (puppet errors, don't remember exactly what). That's why i returned to my previous mirror since i had successfully deployed a 1+3 setup without issues. > 4. Try Mitaka perhaps and follow Graeme's instructions here: > > https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html > > You can miss step 4 and 6 as these have been resolved. I'd like to complete the setup with Liberty. We are playing also with other distributions (mainly commercial) that are proposing Liberty and we'd like to have everything aligned. Additionally we'd try a forward jump in place, to make a funny release-upgrade test and see what happens. > > I tend to work back and reduce things down in complexity. I agree, i'll try. > > You can specify hostnames as per: > > http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/no > de_placement.html > > but perhaps this is what you are doing in your ~/hostname-nostri.yaml > file? That file has simply this inside: parameter_defaults: ControllerHostnameFormat: 'opsctrl%index%' ComputeHostnameFormat: 'opskvmsvil%index%' That is simply because overcloud-novacompute-0 was too long name Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From pgsousa at gmail.com Wed Aug 3 08:34:12 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Wed, 3 Aug 2016 09:34:12 +0100 Subject: [rdo-list] [tripleo] Troubles deploying Libery with HA setup In-Reply-To: References: Message-ID: Hi, If you are trying to update the stack from 1 to 3 controllers, it will not work, its not supported. You will have to delete the stack and start again with 3 controllers. I also advise you to try mitaka. Regards Em 02/08/2016 17:00, "Luca 'remix_tj' Lorenzetto" escreveu: Hello, I'm deploying Liberty on a set of server using tripleo. I imported and correctly introspected 6 nodes. First i did a setup with 1 controller node and 3 computing nodes. No issues during this deploy, everything went fine. On this deployment i used custom hostname formats (defined via yaml environment file) and custom dns_domain (changed in /etc/nova/nova.conf and /etc/neutron/dhcp_agent.ini and all /usr/share/openstack-tripleo-heat-templates/*.yaml files that contains localdomain) Now I'm working to have a 3 controller setup with HA. 
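Side note for anyone reproducing the dns_domain change described above: the template part boils down to a bulk search-and-replace, roughly along these lines (example.com is only a placeholder for the real domain):

sudo grep -rl --include='*.yaml' localdomain /usr/share/openstack-tripleo-heat-templates \
    | xargs -r sudo sed -i 's/localdomain/example.com/g'

plus the matching domain options in /etc/nova/nova.conf and /etc/neutron/dhcp_agent.ini.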
I'm running this command to deploy: openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/puppet-ceph-external.yaml -e ~/hostname-nostri.yaml --neutron-bridge-mappings datacentre:br-ex,storage-pub:br-stg-pub -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server timesrv1 --control-scale 3 --compute-scale 3 --ceph-storage-scale 0 --control-flavor controlhp --compute-flavor computehp --neutron-network-type vxlan --neutron-tunnel-types vxlan --verbose --debug --log-file overcloud_$(date +%T).log I see that stack creation starts and correctly deploy os on the nodes. Nodes are named according to ControllerHostnameFormat and ComputeHostnameFormat that i specified into ~/hostname-nostri.yaml. Everything goes well until HA configuration starts. I see this stack creation failing: overcloud-ControllerNodesPostDeployment-bbp3c47jgau2-ControllerLoadBalancerDeployment_Step1-sla6ce7n2arq The error message of the deployment command is: Stack failed with status: Resource CREATE failed: resources.ControllerLoadBalancerDeployment_Step1: resources.ControllerNodesPostDeployment.Error: resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 Heat Stack create failed. If i go in depth with heat deployment-show i see that all resources report this deploy_stderr: Error: Could not prefetch mysql_user provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) Error: Could not prefetch mysql_database provider 'mysql': Execution of '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)\ Error: Command exceeded timeout Error: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns: change from notrun to 0 failed: Command exceeded timeout Warning: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Skipping because of failed dependencies Warning: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Skipping because of failed dependencies", I see nodes are stuck on puppet step "auth-successful-across-all-nodes" (defined in /etc/puppet/modules/pacemaker/manifests/corosync.pp) /usr/bin/python2 /usr/sbin/pcs cluster auth opsctrl0 opsctrl1 opsctrl2 -u hacluster -p PASSWORD --force I suppose that the problem is due to corosync service not yet started. But as far as i can see corosync will never start because /etc/corosync/corosync.conf file is missing. in /etc/hosts opsctrl{0-2} are correctly defined and each host can talk with the others. I'm stuck and i don't know what to do. Anyone had similar issues? I'm missing something that i have to enter in the configuration? I'm using a in-house mirror of rdo-release made on April 20, and custom images build from rhel image and this repository. 
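One more basic thing that can be ruled out on the controllers is plain pcsd reachability between the nodes: the step that times out is just pcs contacting pcsd on tcp/2224 of every peer, and the missing /etc/corosync/corosync.conf is expected for as long as that auth never completes, because the file is only written by the "Create Cluster tripleo_cluster" step that gets skipped above. A rough check from each controller, using only bash and the names already in /etc/hosts, could be:

for node in opsctrl0 opsctrl1 opsctrl2; do
    timeout 3 bash -c "echo > /dev/tcp/$node/2224" \
        && echo "$node: pcsd port 2224 reachable" \
        || echo "$node: pcsd port 2224 NOT reachable"
done
sudo systemctl status pcsd
sudo iptables -nL | grep 2224    # is 2224 accepted before any REJECT/DROP rule?

If one of the three never answers, the problem is more likely firewalling or networking than the package set.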
A week ago i tried with latest repository but had the same error (and also others, that's why i returned to this older mirror that was working). This is the list of packages installed from rdo-release repository: crudini-0.7-1.el7.noarch dib-utils-0.0.9-1.el7.noarch dibbler-client-1.0.1-0.RC1.2.el7.x86_64 diskimage-builder-1.4.0-1.el7.noarch erlang-asn1-R16B-03.16.el7.x86_64 erlang-compiler-R16B-03.16.el7.x86_64 erlang-crypto-R16B-03.16.el7.x86_64 erlang-erts-R16B-03.16.el7.x86_64 erlang-hipe-R16B-03.16.el7.x86_64 erlang-inets-R16B-03.16.el7.x86_64 erlang-kernel-R16B-03.16.el7.x86_64 erlang-mnesia-R16B-03.16.el7.x86_64 erlang-os_mon-R16B-03.16.el7.x86_64 erlang-otp_mibs-R16B-03.16.el7.x86_64 erlang-public_key-R16B-03.16.el7.x86_64 erlang-runtime_tools-R16B-03.16.el7.x86_64 erlang-sasl-R16B-03.16.el7.x86_64 erlang-sd_notify-0.1-1.el7.x86_64 erlang-snmp-R16B-03.16.el7.x86_64 erlang-ssl-R16B-03.16.el7.x86_64 erlang-stdlib-R16B-03.16.el7.x86_64 erlang-syntax_tools-R16B-03.16.el7.x86_64 erlang-tools-R16B-03.16.el7.x86_64 erlang-xmerl-R16B-03.16.el7.x86_64 hiera-1.3.4-1.el7.noarch instack-0.0.8-1.el7.noarch instack-undercloud-2.1.3-1.el7.noarch jq-1.3-2.el7.x86_64 liberasurecode-1.1.1-1.el7.x86_64 libnetfilter_queue-1.0.2-2.el7.x86_64 memcached-1.4.25-1.el7.x86_64 1:openstack-ceilometer-alarm-5.0.2-1.el7.noarch 1:openstack-ceilometer-api-5.0.2-1.el7.noarch 1:openstack-ceilometer-central-5.0.2-1.el7.noarch 1:openstack-ceilometer-collector-5.0.2-1.el7.noarch 1:openstack-ceilometer-common-5.0.2-1.el7.noarch 1:openstack-ceilometer-notification-5.0.2-1.el7.noarch 1:openstack-ceilometer-polling-5.0.2-1.el7.noarch 1:openstack-glance-11.0.1-2.el7.noarch 1:openstack-heat-api-5.0.0-1.el7.noarch 1:openstack-heat-api-cfn-5.0.0-1.el7.noarch 1:openstack-heat-api-cloudwatch-5.0.0-1.el7.noarch 1:openstack-heat-common-5.0.0-1.el7.noarch 1:openstack-heat-engine-5.0.0-1.el7.noarch openstack-heat-templates-0-0.1.20151019.el7.noarch 1:openstack-ironic-api-4.2.2-1.el7.noarch 1:openstack-ironic-common-4.2.2-1.el7.noarch 1:openstack-ironic-conductor-4.2.2-1.el7.noarch openstack-ironic-inspector-2.2.2-1.el7.noarch 1:openstack-keystone-8.0.1-1.el7.noarch 1:openstack-neutron-7.0.3-1.el7.noarch 1:openstack-neutron-common-7.0.3-1.el7.noarch 1:openstack-neutron-ml2-7.0.3-1.el7.noarch 1:openstack-neutron-openvswitch-7.0.3-1.el7.noarch 1:openstack-nova-api-12.0.1-1.el7.noarch 1:openstack-nova-cert-12.0.1-1.el7.noarch 1:openstack-nova-common-12.0.1-1.el7.noarch 1:openstack-nova-compute-12.0.1-1.el7.noarch 1:openstack-nova-conductor-12.0.1-1.el7.noarch 1:openstack-nova-scheduler-12.0.1-1.el7.noarch 1:openstack-puppet-modules-7.0.1-1.el7.noarch openstack-selinux-0.6.41-1.el7.noarch openstack-swift-2.5.0-1.el7.noarch openstack-swift-account-2.5.0-1.el7.noarch openstack-swift-container-2.5.0-1.el7.noarch openstack-swift-object-2.5.0-1.el7.noarch openstack-swift-plugin-swift3-1.7-4.el7.noarch openstack-swift-proxy-2.5.0-1.el7.noarch openstack-tripleo-0.0.6-1.el7.noarch openstack-tripleo-heat-templates-0.8.7-1.el7.noarch openstack-tripleo-image-elements-0.9.7-1.el7.noarch openstack-tripleo-puppet-elements-0.0.2-1.el7.noarch openstack-utils-2015.2-1.el7.noarch openvswitch-2.4.0-1.el7.x86_64 os-apply-config-0.1.32-3.el7.noarch os-cloud-config-0.2.10-2.el7.noarch os-collect-config-0.1.36-4.el7.noarch os-net-config-0.1.5-3.el7.noarch os-refresh-config-0.1.11-2.el7.noarch puppet-3.6.2-3.el7.noarch pyOpenSSL-0.15.1-1.el7.noarch pyparsing-2.0.3-1.el7.noarch pysendfile-2.0.0-5.el7.x86_64 pysnmp-4.2.5-2.el7.noarch pystache-0.5.3-2.el7.noarch 
python-alembic-0.8.3-3.el7.noarch python-amqp-1.4.6-1.el7.noarch python-anyjson-0.3.3-3.el7.noarch python-automaton-0.7.0-1.el7.noarch python-babel-1.3-6.el7.noarch python-bson-3.0.3-1.el7.x86_64 python-cachetools-1.0.3-2.el7.noarch 1:python-ceilometer-5.0.2-1.el7.noarch python-ceilometerclient-1.5.0-1.el7.noarch python-cinderclient-1.4.0-1.el7.noarch python-cliff-1.15.0-1.el7.noarch python-cliff-tablib-1.1-3.el7.noarch python-cmd2-0.6.8-3.el7.noarch python-contextlib2-0.4.0-1.el7.noarch python-croniter-0.3.4-2.el7.noarch python-dogpile-cache-0.5.7-3.el7.noarch python-dogpile-core-0.4.1-2.el7.noarch python-ecdsa-0.11-3.el7.noarch python-editor-0.4-4.el7.noarch python-elasticsearch-1.4.0-2.el7.noarch python-extras-0.0.3-2.el7.noarch python-fixtures-1.4.0-2.el7.noarch python-futures-3.0.3-1.el7.noarch 1:python-glance-11.0.1-2.el7.noarch python-glance-store-0.9.1-1.el7.noarch 1:python-glanceclient-1.1.0-1.el7.noarch python-heatclient-0.8.0-1.el7.noarch python-httplib2-0.9.2-1.el7.noarch python-idna-2.0-1.el7.noarch python-ipaddress-1.0.7-4.el7.noarch python-ironicclient-0.8.1-1.el7.noarch python-jsonpatch-1.2-2.el7.noarch python-jsonpath-rw-1.2.3-2.el7.noarch python-jsonschema-2.3.0-1.el7.noarch python-keyring-5.0-4.el7.noarch 1:python-keystone-8.0.1-1.el7.noarch 1:python-keystoneclient-1.7.2-1.el7.noarch python-keystonemiddleware-2.3.1-1.el7.noarch 1:python-kombu-3.0.32-1.el7.noarch python-ldappool-1.0-4.el7.noarch python-linecache2-1.0.0-1.el7.noarch python-logutils-0.3.3-3.el7.noarch python-memcached-1.54-3.el7.noarch python-migrate-0.10.0-1.el7.noarch python-mimeparse-0.1.4-1.el7.noarch python-monotonic-0.3-1.el7.noarch python-ncclient-0.4.2-2.el7.noarch python-netaddr-0.7.18-1.el7.noarch python-netifaces-0.10.4-1.el7.x86_64 python-networkx-core-1.10-1.el7.noarch 1:python-neutron-7.0.3-1.el7.noarch python-neutronclient-3.1.0-1.el7.noarch python-nose-1.3.7-7.el7.noarch 1:python-nova-12.0.1-1.el7.noarch 1:python-novaclient-2.30.1-1.el7.noarch python-oauthlib-0.7.2-5.20150520git514cad7.el7.noarch python-openstackclient-1.7.2-1.el7.noarch python-openvswitch-2.4.0-1.el7.noarch python-oslo-cache-0.7.0-1.el7.noarch python-oslo-concurrency-2.6.0-1.el7.noarch python-oslo-db-2.6.0-3.el7.noarch python-oslo-log-1.10.0-1.el7.noarch python-oslo-messaging-2.5.0-1.el7.noarch python-oslo-middleware-2.8.0-1.el7.noarch python-oslo-policy-0.11.0-1.el7.noarch python-oslo-rootwrap-2.3.0-1.el7.noarch python-oslo-service-0.9.0-1.el7.noarch python-oslo-versionedobjects-0.10.0-1.el7.noarch python-oslo-vmware-1.21.0-1.el7.noarch python-osprofiler-0.3.0-1.el7.noarch python-paramiko-1.15.1-1.el7.noarch python-paste-deploy-1.5.2-6.el7.noarch python-pbr-1.8.1-2.el7.noarch python-posix_ipc-0.9.8-1.el7.x86_64 python-prettytable-0.7.2-1.el7.noarch python-proliantutils-2.1.7-1.el7.noarch python-psutil-1.2.1-1.el7.x86_64 python-pycadf-1.1.0-1.el7.noarch python-pyeclib-1.2.0-1.el7.x86_64 python-pyghmi-0.8.0-2.el7.noarch python-pygments-2.0.2-4.el7.noarch python-pymongo-3.0.3-1.el7.x86_64 python-pysaml2-3.0.2-1.el7.noarch python-qpid-0.30-1.el7.noarch python-qpid-common-0.30-1.el7.noarch python-repoze-lru-0.4-3.el7.noarch python-repoze-who-2.1-1.el7.noarch python-requests-2.9.1-2.el7.noarch python-retrying-1.2.3-4.el7.noarch python-rfc3986-0.2.0-1.el7.noarch python-routes-1.13-2.el7.noarch python-saharaclient-0.11.1-1.el7.noarch python-semantic_version-2.4.2-1.el7.noarch python-simplegeneric-0.8-7.el7.noarch python-simplejson-3.5.3-5.el7.x86_64 python-sqlalchemy-1.0.11-1.el7.x86_64 python-sqlparse-0.1.18-5.el7.noarch 
python-stevedore-1.8.0-1.el7.noarch python-swiftclient-2.6.0-1.el7.noarch python-tablib-0.10.0-1.el7.noarch python-taskflow-1.21.0-1.el7.noarch python-tempita-0.5.1-8.el7.noarch python-testtools-1.8.0-2.el7.noarch python-tooz-1.24.0-1.el7.noarch python-traceback2-1.4.0-2.el7.noarch python-tripleoclient-0.0.11-3.el7.noarch python-troveclient-1.3.0-1.el7.noarch python-unicodecsv-0.14.1-1.el7.noarch python-unittest2-1.0.1-1.el7.noarch python-urllib3-1.13.1-3.el7.noarch python-warlock-1.0.1-1.el7.noarch python-webob-1.4.1-2.el7.noarch python-websockify-0.6.0-2.el7.noarch python-wrapt-1.10.5-3.el7.x86_64 python2-PyMySQL-0.6.7-2.el7.noarch python2-appdirs-1.4.0-4.el7.noarch python2-castellan-0.3.1-1.el7.noarch python2-cffi-1.5.2-1.el7.x86_64 python2-cryptography-1.2.1-3.el7.x86_64 python2-debtcollector-0.8.0-1.el7.noarch python2-eventlet-0.17.4-4.el7.noarch python2-fasteners-0.14.1-4.el7.noarch python2-funcsigs-0.4-2.el7.noarch python2-futurist-0.5.0-1.el7.noarch python2-greenlet-0.4.9-1.el7.x86_64 python2-ironic-inspector-client-1.2.0-2.el7.noarch python2-iso8601-0.1.11-1.el7.noarch python2-jsonpath-rw-ext-0.1.7-1.1.el7.noarch python2-mock-1.3.0-1.el7.noarch python2-os-brick-0.5.0-1.el7.noarch python2-os-client-config-1.7.4-1.el7.noarch 2:python2-oslo-config-2.4.0-1.el7.noarch python2-oslo-context-0.6.0-1.el7.noarch python2-oslo-i18n-2.6.0-1.el7.noarch python2-oslo-reports-0.5.0-1.el7.noarch python2-oslo-serialization-1.9.0-1.el7.noarch python2-oslo-utils-2.5.0-1.el7.noarch python2-passlib-1.6.5-1.el7.noarch python2-pecan-1.0.2-2.el7.noarch python2-pyasn1-0.1.9-6.el7.1.noarch python2-rsa-3.3-2.el7.noarch python2-singledispatch-3.4.0.3-4.el7.noarch python2-suds-0.7-0.1.94664ddd46a6.el7.noarch python2-wsme-0.7.0-2.el7.noarch rabbitmq-server-3.3.5-17.el7.noarch ruby-augeas-0.5.0-1.el7.x86_64 ruby-shadow-1.4.1-23.el7.x86_64 rubygem-rgen-0.6.6-2.el7.noarch tripleo-common-0.0.1-2.el7.noarch Thank you, Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < lorenzetto.luca at gmail.com> _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Wed Aug 3 08:40:36 2016 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Wed, 3 Aug 2016 10:40:36 +0200 Subject: [rdo-list] [tripleo] Troubles deploying Libery with HA setup In-Reply-To: References: Message-ID: On Wed, Aug 3, 2016 at 10:34 AM, Pedro Sousa wrote: > Hi, > > If you are trying to update the stack from 1 to 3 controllers, it will not > work, its not supported. You will have to delete the stack and start again > with 3 controllers. I also advise you to try mitaka. > Hi Pedro, I know. it's a new deployment (stack-delete + overcloud deploy). I'd try mitaka, but not now. 
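To be precise about what "new deployment" means here: every attempt is wiped and re-created from scratch, roughly

heat stack-delete overcloud
heat stack-list        # wait until the overcloud stack is really gone
openstack overcloud deploy --templates ...    # same command and environment files as before

so there is no 1 -> 3 controller scale-up anywhere in the process.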
I'd like to complete with Liberty (other details on my reply to Christopher) Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From fzdarsky at redhat.com Wed Aug 3 09:23:53 2016 From: fzdarsky at redhat.com (Frank Zdarsky) Date: Wed, 3 Aug 2016 11:23:53 +0200 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: On Wed, Aug 3, 2016 at 9:28 AM, Tal Kammer wrote: > Thanks Arie for starting this discussion! (and sorry for joining in late) > > Some detailed explanations for those not familiar with InfraRed and would > like to get the highlights: (Feel free to skip to my inline comments if > you are familiar / too long of a post :) ). > > The InfraRed project is an Ansible based project comprised from three > distinct tools: (currently under the InfraRed[1] project, and being split > into their own standalone project soon). > > 1. ir-provisioner - responsible for building up a custom environment - you > can affect the memory, CPU, HDD size, number of HDD each node has + number > of networks (with/without DHCP) so for example one can deploy the following > topology: (only an example to show the versatile options) > 1 undercloud on RHEL 7 with 16GB of ram + 60GB of HDD with 3 network > interfaces. > 3 controllers on Centos 7 with 8GB of ram + 40GB of HDD > 2 compute on Centos 7 with 6GB of ram + 60GB HDD > 3 ceph nodes on RHEL 7 with 4GB of ram + 2 HDD one with 20GB + one with > 40GB > > Example usage: (setting up the above VMs with four different HW specs) > ir-provisioner virsh --host-address= > --host-key= > --topology-nodes=undercloud:1,controller:3,compute:2,ceph:3 > > *Note: while it is written "controller/compute/ceph" this is just setting > up VMs, the names act more as a reference to the user of what is the role > of each node. > The installation of Openstack is done with a dedicated tool called > `ir-installer` (next) > > 2. ir-installer - responsible for installing the product - supports > "quickstart" mode (setting up a working environment in ~30 minutes) or E2E > mode which does a full installation in ~1h. > The installation process is completely customized. You can supply your own > heat templates / overcloud invocation / undercloud.conf file to use / etc. > You can also just run a specific task (using ansible --tags), so if you > have a deployment ready and just need to run say, the introspection phase, > you can fully choose what to run and what to skip even. > > 3. ir-tester - responsible for installing / configuring / running the > tests - this project is meant to hold all testing tools we use so a user > will be able to run any testing utility he would like without the need to > "dive in". 
we supply a simple interface requesting simple to choose the > testing tool and the set of tests one wishes to run and we'll do the work > for him :) > > More info about InfraRed can be found here[1] (though I must admit that we > still need some "love" around our docs) > > [1] - http://infrared.readthedocs.io/en/latest/ > > I agree cleanly separating and modularizing infrastructure provisioining (for some use cases like NFV ideally with mixed VM and baremetal environments), OS installation and testing is a good approach. In that context, what happened to Khaleesi [0]? Haven't seen that mentioned on the original list. Nor Apex [1], the installer from OPNFV that does not only address OpenStack itself, but also integration with complementary projects like ODL? [0] https://github.com/redhat-openstack/khaleesi [1] https://wiki.opnfv.org/display/apex/Apex > > On Tue, Aug 2, 2016 at 9:52 PM, Wesley Hayutin > wrote: > >> >> >> On Tue, Aug 2, 2016 at 1:51 PM, Arie Bregman wrote: >> >>> On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin >>> wrote: >>> > >>> > >>> > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman >>> wrote: >>> >> >>> >> It became a discussion around the official installer and how to >>> >> improve it. While it's an important discussion, no doubt, I actually >>> >> want to focus on our automation and CI tools. >>> >> >>> >> Since I see there is an agreement that collaboration does make sense >>> >> here, let's move to the hard questions :) >>> >> >>> >> Wes, Tal - there is huge difference right now between infrared and >>> >> tripleo-quickstart in their structure. One is all-in-one project and >>> >> the other one is multiple micro projects managed by one project. Do >>> >> you think there is a way to consolidate or move to a different model >>> >> which will make sense for both RDO and RHOSP? something that both >>> >> groups can work on. >>> > >>> > >>> > I am happy to be part of the discussion, and I am also very willing to >>> help >>> > and try to drive suggestions to the tripleo-quickstart community. >>> > I need to make a point clear though, just to make sure we're on the >>> same >>> > page.. I do not own oooq, I am not a core on oooq. >>> > I can help facilitate a discussion but oooq is an upstream tripleo >>> tool that >>> > replaces instack-virt-setup [1]. >>> > It also happens to be a great tool for easily deploying TripleO end to >>> end >>> > [3] >>> > >>> > What I *can* do is show everyone how to manipulate tripleo-quickstart >>> and >>> > customize it with composable ansible roles, templates, settings etc.. >>> > This would allow any upstream or downstream project to override the >>> native >>> > oooq roles and *any* step that does not work for another group w/ 3rd >>> party >>> > roles [2]. >>> > These 3rd party roles can be free and opensource or internal only, it >>> works >>> > either way. >>> > This was discussed in depth as part of the production chain meetings, >>> the >>> > message may have been lost unfortunately. >>> > >>> > I hope this resets your expectations of what I can and can not do as >>> part of >>> > these discussions. >>> > Let me know where and when and I'm happy to be part of the discussion. >>> >>> Thanks for clarifying :) >>> >>> Next reasonable step would probably be to propose some sort of >>> blueprint for tripleo-quickstart to include some of InfraRed features >>> and by that have one tool driven by upstream development that can be >>> either cloned downstream or used as it is with an internal data >>> project. >>> >> >> Sure.. 
a blueprint would help everyone understand the feature and the >> motivation. >> You could also just plug in the feature you are looking for to oooq and >> see if it meets >> your requirements. See below. >> > > While I think a blueprint is a good starting point, I'm afraid that our > approach for provisioning machines is completely different so I'm not sure > how to propose such a blueprint as it will probably require quite the > design change from today's approach. > > >> >> >>> >>> OR >>> >>> have InfraRed pushed into tripleo/openstack namespace and expose it to >>> the RDO community (without internal data of course). Personally, I >>> really like the pluggable[1] structure (which allows it to actually >>> consume tripleo-quickstart) so I'm not sure if it can be really merged >>> with tripleo-quickstart as proposed in the first option. >>> >> > I must admit that I like this option better as it introduces a tool to > upstream and let the community drive it further / get more feedback on how > to improve. > A second benefit might be that by introducing a new concept / design we > can take what is best from two worlds and improve. > I would love to see an open discussion on the tool upstream and how we can > improve the overall process. > > >> The way oooq is built one can plugin or override any part at run time >> with custom playbooks, roles, and config. There isn't anything that >> needs to be >> checked in directly to oooq to use it. >> >> It's designed such that third parties can make their own decisions to use >> something >> native to quickstart, something from our role library, or something >> completely independent. >> This allows teams, individuals or whom ever to do what they need to with >> out having to fork or re-roll the entire framework. >> >> The important step is to note that these 3rd party roles or (oooq-extras) >> incubate, mature and then graduate to github/openstack. >> The upstream openstack community should lead, evaluate, and via >> blueprints vote on the canonical CI tool set. >> >> We can record a demonstration if required, but there is nothing stopping >> anyone right now from >> doing this today. I'm just browsing the role library for an example, I >> had no idea [1] existed. >> Looks like Raoul had a requirement and just made it work. >> >> Justin, from the browbeat project has graciously created some >> documentation regarding 3rd party roles. >> It has yet to merge, but it should help illustrate how these roles are >> used. [2] >> >> Thanks Arie for leading the discussion. >> >> [1] >> https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha >> [2] https://review.openstack.org/#/c/346733/ >> >> >> >> >> >> >>> >>> I like the second option, although it still forces us to have two >>> tools, but after a period of time, I believe it will be clear what the >>> community prefers, which will allow us to remove one of the projects >>> eventually. >>> >>> So, unless there are other ideas, I think the next move should be made >>> by Tal. >>> >>> Tal, I'm willing to help with whatever is needed. >>> >> > Thanks Arie for starting this discussion again, I believe we have still > much work ahead of us but this is definitely a step in the right direction. 
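Just to make the out-of-tree role mechanism concrete for anyone following along: an external role such as ansible-role-tripleo-overcloud-validate-ha mentioned earlier in the thread needs nothing more than stock Ansible to be run outside the oooq tree. A purely illustrative sketch (the role path, inventory file and wrapper playbook below are placeholders, and the role's required variables depend on the local environment):

mkdir -p ~/my-roles
git clone https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha \
    ~/my-roles/tripleo-overcloud-validate-ha
export ANSIBLE_ROLES_PATH=~/my-roles:$ANSIBLE_ROLES_PATH
cat > validate-ha.yml <<'EOF'
---
# hypothetical wrapper playbook
- hosts: undercloud
  roles:
    - tripleo-overcloud-validate-ha
EOF
ansible-playbook -i hosts validate-ha.yml

The point being that nothing has to be merged into oooq itself for a team to maintain and run its own roles on top of it.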
> > >> >>> [1] http://infrared.readthedocs.io/en/latest >>> >>> > >>> > Thanks >>> > >>> > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart >>> > [2] >>> > >>> https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo >>> > [3[ https://www.rdoproject.org/tripleo/ >>> > >>> > >>> >> >>> >> >>> >> Raoul - I totally agree with you, especially with "difficult for >>> >> anyone to start contributing and collaborate". This is exactly why >>> >> this discussion started. If we can agree on one set of tools, it will >>> >> make everyone's life easier - current groups, new contributors, folks >>> >> that just want to deploy TripleO quickly. But I'm afraid some >>> >> sacrifices need to be made by both groups. >>> >> >>> >> David - I thought WeiRDO is used only for packstack, so I apologize I >>> >> didn't include it. It does sound like an anther testing project, is >>> >> there a place to merge it with another existing testing project? like >>> >> Octario for example or one of TripleO testing projects. Or does it >>> >> make sense to keep it a standalone project? >>> >> >>> >> >>> >> >>> >> >>> >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown >> > >>> >> wrote: >>> >> > Hello RDOistas (I think that is the expression?), >>> >> > >>> >> > Another year, another OpenStack deployment tool. :) >>> >> > >>> >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: >>> >> >> If we are talking about tools, I would also want to add something >>> >> >> with regards to user interface of these tools. This is based on my >>> >> >> own experience: >>> >> >> >>> >> >> I started trying to deploy Openstack with Staypuft and The Foreman. >>> >> >> The UI of The Foreman was intuitive enough for the discovery and >>> >> >> provisioning of the servers. The OpenStack portion, not so much. >>> >> > >>> >> > This is exactly mine also. I think this works really well in very >>> large >>> >> > enterprise environments where you need to split out services over >>> more >>> >> > than three controllers. You do need good in-house puppet skills >>> though >>> >> > so better for enterprise with a good sysadmin team. >>> >> > >>> >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I >>> >> >> believe) that allowed you to graphically build your Openstack >>> cloud. >>> >> >> That was a reasonable good GUI for Openstack. >>> >> > >>> >> > Well, I found it barely usable. It was only ever good as a graphical >>> >> > representiation of what the build was doing. Interacting with it was >>> >> > not great. >>> >> > >>> >> >> Following that, TripleO become a script based installer, that >>> >> >> required experience in Heat templates. I know I didn?t have it and >>> >> >> had to ask in the mailing list about how to present this or change >>> >> >> that. I got a couple of installs working with this setup. >>> >> > >>> >> > Works well now that I understand all the foibles and have invested >>> time >>> >> > into understanding heat templates and puppet modules. Its good in >>> that >>> >> > it forces you to learn about orchestration which is such an >>> important >>> >> > end-goal of cloud environments. >>> >> > >>> >> >> In the last session in Austin, my goal was to obtain information on >>> >> >> how others were installing Openstack. I was pointed to Fuel as an >>> >> >> alternative. I tried it up, and it just worked. It had the >>> >> >> discovering capability from The Foreman, and the configuration >>> >> >> options from TripleO. 
I understand that is based in Ansible and >>> >> >> because of that, it is not fully CentOS ready for all the nodes (at >>> >> >> least not in version 9 that I tried). In any case, as a deployer >>> and >>> >> >> installer, it is the most well rounded tool that I found. >>> >> > >>> >> > This is interesting to know. I've heard of Fuel of course but there >>> are >>> >> > some politics involved - it still has the team:single-vendor tag but >>> >> > from what I see Mirantis are very keen for it to become the default >>> >> > OpenStack installer. I don't think being Ansible-based should be a >>> >> > problem - we are deploying OpenShift on OpenStack which uses >>> Openshift- >>> >> > ansible - this recently moved to Ansible 2.1 without too much >>> >> > disruption. >>> >> > >>> >> >> I?d love to see RDO moving into that direction, and having an easy >>> to >>> >> >> use, end user ready deployer tool. >>> >> > >>> >> > If its as good as you say its definitely worth evaluating. From our >>> >> > point of view, we want to be able to add services to the pacemaker >>> >> > cluster with some ease - for example Magnum and Sahara - and it >>> looks >>> >> > like there are steps being taken with regards to composable roles >>> and >>> >> > simplification of the pacemaker cluster to just core services. >>> >> > >>> >> > But if someone can explain that better I would appreciate it. >>> >> > >>> >> > Regards >>> >> > >>> >> >> IB >>> >> >> >>> >> >> >>> >> >> __ >>> >> >> Ignacio Bravo >>> >> >> LTG Federal, Inc >>> >> >> www.ltgfederal.com >>> >> >> >>> >> >> >>> >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard >>> >> >> > wrote: >>> >> >> > >>> >> >> > The vast majority of RDO's CI relies on using upstream >>> >> >> > installation/deployment projects in order to test installation of >>> >> >> > RDO >>> >> >> > packages in different ways and configurations. >>> >> >> > >>> >> >> > Unless I'm mistaken, TripleO Quickstart was originally created >>> as a >>> >> >> > mean to "easily" install TripleO in different topologies without >>> >> >> > requiring a massive amount of hardware. >>> >> >> > This project allows us to test TripleO in virtual deployments on >>> >> >> > just >>> >> >> > one server instead of, say, 6. >>> >> >> > >>> >> >> > There's also WeIRDO [1] which was left out of your list. >>> >> >> > WeIRDO is super simple and simply aims to run upstream gate jobs >>> >> >> > (such >>> >> >> > as puppet-openstack-integration [2][3] and packstack [4][5]) >>> >> >> > outside >>> >> >> > of the gate. >>> >> >> > It'll install dependencies that are expected to be there (i.e, >>> >> >> > usually >>> >> >> > set up by the openstack-infra gate preparation jobs), set up the >>> >> >> > trunk >>> >> >> > repositories we're interested in testing and the rest is handled >>> by >>> >> >> > the upstream project testing framework. >>> >> >> > >>> >> >> > The WeIRDO project is /very/ low maintenance and brings an >>> >> >> > exceptional >>> >> >> > amount of coverage and value. >>> >> >> > This coverage is important because RDO provides OpenStack >>> packages >>> >> >> > or >>> >> >> > projects that are not necessarily used by TripleO and the reality >>> >> >> > is >>> >> >> > that not everyone deploying OpenStack on CentOS with RDO will be >>> >> >> > using >>> >> >> > TripleO. >>> >> >> > >>> >> >> > Anyway, sorry for sidetracking but back to the topic, thanks for >>> >> >> > opening the discussion. 
>>> >> >> > >>> >> >> > What honestly perplexes me is the situation of CI in RDO and OSP, >>> >> >> > especially around TripleO/Director, is the amount of work that is >>> >> >> > spent downstream. >>> >> >> > And by downstream, here, I mean anything that isn't in TripleO >>> >> >> > proper. >>> >> >> > >>> >> >> > I keep dreaming about how awesome upstream TripleO CI would be if >>> >> >> > all >>> >> >> > that effort was spent directly there instead -- and then that all >>> >> >> > work >>> >> >> > could bear fruit and trickle down downstream for free. >>> >> >> > Exactly like how we keep improving the testing coverage in >>> >> >> > puppet-openstack-integration, it's automatically pulled in RDO CI >>> >> >> > through WeIRDO for free. >>> >> >> > We make the upstream better and we benefit from it >>> simultaneously: >>> >> >> > everyone wins. >>> >> >> > >>> >> >> > [1]: https://github.com/rdo-infra/weirdo >>> >> >> > [2]: >>> https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst >>> >> >> > ack >>> >> >> > [3]: >>> https://github.com/openstack/puppet-openstack-integration#desc >>> >> >> > ription >>> >> >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >>> >> >> > [5]: >>> https://github.com/openstack/packstack#packstack-integration-t >>> >> >> > ests >>> >> >> > >>> >> >> > David Moreau Simard >>> >> >> > Senior Software Engineer | Openstack RDO >>> >> >> > >>> >> >> > dmsimard = [irc, github, twitter] >>> >> >> > >>> >> >> > David Moreau Simard >>> >> >> > Senior Software Engineer | Openstack RDO >>> >> >> > >>> >> >> > dmsimard = [irc, github, twitter] >>> >> >> > >>> >> >> > >>> >> >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman < >>> abregman at redhat.com> >>> >> >> > wrote: >>> >> >> > > Hi, >>> >> >> > > >>> >> >> > > I would like to start a discussion on the overlap between tools >>> >> >> > > we >>> >> >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. >>> >> >> > > >>> >> >> > > Several months ago, we worked on one common framework for >>> >> >> > > deploying >>> >> >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say >>> it >>> >> >> > > didn't work out well, which eventually led each group to focus >>> on >>> >> >> > > developing other existing/new tools. >>> >> >> > > >>> >> >> > > What we have right now for deploying and testing >>> >> >> > > -------------------------------------------------------- >>> >> >> > > === Component CI, Gating === >>> >> >> > > I'll start with the projects we created, I think that's only >>> fair >>> >> >> > > :) >>> >> >> > > >>> >> >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the >>> OVB >>> >> >> > > project. >>> >> >> > > >>> >> >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >>> >> >> > > release. >>> >> >> > > >>> >> >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, >>> >> >> > > tempest, >>> >> >> > > csit) + Patching RPMs with submitted code. >>> >> >> > > >>> >> >> > > === Automation, QE === >>> >> >> > > * InfraRed[4] - provision install and test. Pluggable and >>> >> >> > > modular, >>> >> >> > > allows you to create your own provisioner, installer and >>> tester. >>> >> >> > > >>> >> >> > > As far as I know, the groups is working now on different >>> >> >> > > structure of >>> >> >> > > one main project and three sub projects (provision, install and >>> >> >> > > test). 
>>> >> >> > > >>> >> >> > > === RDO === >>> >> >> > > I didn't use RDO tools, so I apologize if I got something >>> wrong: >>> >> >> > > >>> >> >> > > * About ~25 micro independent Ansible roles[5]. You can either >>> >> >> > > choose >>> >> >> > > to use one of them or several together. They are used for >>> >> >> > > provisioning, installing and testing Tripleo. >>> >> >> > > >>> >> >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying >>> >> >> > > tripleo >>> >> >> > > and test it. >>> >> >> > > >>> >> >> > > As I said, I didn't use the tools, so feel free to add more >>> >> >> > > information you think is relevant. >>> >> >> > > >>> >> >> > > === More? === >>> >> >> > > I hope not. Let us know if are familiar with more tools. >>> >> >> > > >>> >> >> > > Conclusion >>> >> >> > > -------------- >>> >> >> > > So as you can see, there are several projects that eventually >>> >> >> > > overlap >>> >> >> > > in many areas. Each group is basically using the same tasks >>> >> >> > > (provision >>> >> >> > > resources, build/import overcloud images, run tempest, collect >>> >> >> > > logs, >>> >> >> > > etc.) >>> >> >> > > >>> >> >> > > Personally, I think it's a waste of resources. For each task >>> >> >> > > there is >>> >> >> > > at least two people from different groups who work on exactly >>> the >>> >> >> > > same >>> >> >> > > task. The most recent example I can give is OVB. As far as I >>> >> >> > > know, >>> >> >> > > both groups are working on implementing it in their set of >>> tools >>> >> >> > > right >>> >> >> > > now. >>> >> >> > > >>> >> >> > > On the other hand, you can always claim: "we already tried to >>> >> >> > > work on >>> >> >> > > the same framework, we failed to do it successfully" - right, >>> but >>> >> >> > > maybe with better ground rules we can manage it. We would >>> >> >> > > defiantly >>> >> >> > > benefit a lot from doing that. >>> >> >> > > >>> >> >> > > What's next? >>> >> >> > > ---------------- >>> >> >> > > So first of all, I would like to hear from you if you think >>> that >>> >> >> > > we >>> >> >> > > can collaborate once again or is it actually better to keep it >>> as >>> >> >> > > it >>> >> >> > > is now. >>> >> >> > > >>> >> >> > > If you agree that collaboration here makes sense, maybe you >>> have >>> >> >> > > ideas >>> >> >> > > on how we can do it better this time. >>> >> >> > > >>> >> >> > > I think that setting up a meeting to discuss the right >>> >> >> > > architecture >>> >> >> > > for the project(s) and decide on good review/gating process, >>> >> >> > > would be >>> >> >> > > a good start. >>> >> >> > > >>> >> >> > > Please let me know what do you think and keep in mind that this >>> >> >> > > is not >>> >> >> > > about which tool is better!. As you can see I didn't mention >>> the >>> >> >> > > time >>> >> >> > > it takes for each tool to deploy and test, and also not the >>> full >>> >> >> > > feature list it supports. >>> >> >> > > If possible, we should keep it about collaborating and not >>> >> >> > > choosing >>> >> >> > > the best tool. Our solution could be the combination of two or >>> >> >> > > more >>> >> >> > > tools eventually (tripleo-red, infra-quickstart? :D ) >>> >> >> > > >>> >> >> > > "You may say I'm a dreamer, but I'm not the only one. 
I hope >>> some >>> >> >> > > day >>> >> >> > > you'll join us and the infra will be as one" :) >>> >> >> > > >>> >> >> > > [1] https://github.com/redhat-openstack/ansible-ovb >>> >> >> > > [2] https://github.com/redhat-openstack/ansible-rhosp >>> >> >> > > [3] https://github.com/redhat-openstack/octario >>> >> >> > > [4] https://github.com/rhosqeauto/InfraRed >>> >> >> > > [5] >>> https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi >>> >> >> > > ble-role >>> >> >> > > [6] https://github.com/openstack/tripleo-quickstart >>> >> >> > > >>> >> >> > > _______________________________________________ >>> >> >> > > rdo-list mailing list >>> >> >> > > rdo-list at redhat.com >>> >> >> > > https://www.redhat.com/mailman/listinfo/rdo-list >>> >> >> > > >>> >> >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> > _______________________________________________ >>> >> >> > rdo-list mailing list >>> >> >> > rdo-list at redhat.com >>> >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >>> >> >> > >>> >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> >>> >> > -- >>> >> > Regards, >>> >> > >>> >> > Christopher Brown >>> >> > OpenStack Engineer >>> >> > OCF plc >>> >> > >>> >> > Tel: +44 (0)114 257 2200 >>> >> > Web: www.ocf.co.uk >>> >> > Blog: blog.ocf.co.uk >>> >> > Twitter: @ocfplc >>> >> > >>> >> > Please note, any emails relating to an OCF Support request must >>> always >>> >> > be sent to support at ocf.co.uk for a ticket number to be generated or >>> >> > existing support ticket to be updated. Should this not be done then >>> OCF >>> >> > cannot be held responsible for requests not dealt with in a timely >>> >> > manner. >>> >> > >>> >> > OCF plc is a company registered in England and Wales. Registered >>> number >>> >> > 4132533, VAT number GB 780 6803 14. Registered office address: OCF >>> plc, >>> >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield >>> S35 >>> >> > 2PG. >>> >> > >>> >> > This message is private and confidential. If you have received this >>> >> > message in error, please notify us immediately and remove it from >>> your >>> >> > system. >>> >> > >>> >> > _______________________________________________ >>> >> > rdo-list mailing list >>> >> > rdo-list at redhat.com >>> >> > https://www.redhat.com/mailman/listinfo/rdo-list >>> >> > >>> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >>> >> >>> >> >>> >> -- >>> >> Arie Bregman >>> >> Red Hat Israel >>> >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview >>> > >>> > >>> >>> >>> >>> -- >>> Arie Bregman >>> Red Hat Israel >>> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview >>> >> >> > > > -- > Tal Kammer > Associate manager, automation and infrastracture, Openstack platform. > Red Hat Israel > Automation group mojo: https://mojo.redhat.com/docs/DOC-1011659 > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- {Kind regards | Mit besten Gr??en}, Frank ________________________________ Frank Zdarsky | NFV Partner Engineering | Office of Technology | Red Hat e: fzdarsky at redhat.com | irc: fzdarsky @freenode | m: +49 175 82 11 64 4 | t: +49 711 96 43 70 02 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From goneri at redhat.com Wed Aug 3 10:08:07 2016 From: goneri at redhat.com (=?utf-8?Q?Gon=C3=A9ri?= Le Bouder) Date: Wed, 03 Aug 2016 12:08:07 +0200 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: <87mvkuqi60.fsf@redhat.com> Yet another tool, Python based this time :D Some month ago, we started python-tripleo-helper[0]. The idea was to be able to script TripleO deployment like if it was a little Python script. We did this for the Distributed-CI project[1]. We were already providing a Python lib to interact with our server. By using Python everywhere, we can communicate with our server and interact with our nodes directely from the same script. This also give use the ability to use unit-test to validate the behaviour of our scripts. It's also nice to be able to run async task, for example, this is a module that create network issues (Chaos Monkey)[2]. [0]: https://github.com/redhat-openstack/python-tripleo-helper [1]: https://docs.distributed-ci.io/ [2]: https://github.com/redhat-openstack/python-tripleo-helper/blob/master/tripleohelper/chaos_monkey.py -- Gon?ri Le Bouder -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From cbrown2 at ocf.co.uk Wed Aug 3 10:10:24 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Wed, 3 Aug 2016 11:10:24 +0100 Subject: [rdo-list] [tripleo] Troubles deploying Libery with HA setup In-Reply-To: References: <1470155660.2497.36.camel@ocf.co.uk> Message-ID: <1470219024.2497.72.camel@ocf.co.uk> Hi Luca, On Wed, 2016-08-03 at 09:27 +0100, Luca 'remix_tj' Lorenzetto wrote: > On Tue, Aug 2, 2016 at 6:34 PM, Christopher Brown > wrote: > > > > Hi Luca, > > Hi Christopher, > > > > > > > It rings a bell but to be honest I would try the following: > > > > 1. Check hosts file on controllers - I wonder what are the content > > there? > > # HEAT_HOSTS_START - Do not edit manually within this section! > 172.25.122.16 opskvmsvil0.company.it opskvmsvil0 > 172.25.122.15 opskvmsvil1.company.it opskvmsvil1 > 172.25.122.17 opskvmsvil2.company.it opskvmsvil2 > 172.25.122.13 opsctrl0.company.it opsctrl0 overcloud > 172.25.122.12 opsctrl1.company.it opsctrl1 overcloud > 172.25.122.14 opsctrl2.company.it opsctrl2 overcloud > # HEAT_HOSTS_END > > Seems fine to me. This is exactly what i intended. I'm not so sure, you seem to have "overcloud" at the end of all three controller entries? > > > > 2. Revert hostname change and see if it is happy > > I'll try. Seems a good solution, but can't understand why this will > impact if hosts file is correct and hosts can talk each other using > the assigned name. Sure but you specifically acknowledge it is one of the things you have changed so I would be inclined to revert that change and see if the deploy works. > > > > 3. Definitely try and compose from latest stable CentOS SIG Liberty > > packages (not delorean or DLRN or whatever it is these days) > > I already tried this. I had a strange issue while deploying computing > nodes (puppet errors, don't remember exactly what). That's why i > returned to my previous mirror since i had successfully deployed a > 1+3 > setup without issues. > > > > > > 4. 
Try Mitaka perhaps and follow Graeme's instructions here: > > > > https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html > > > > You can miss step 4 and 6 as these have been resolved. > > I'd like to complete the setup with Liberty. We are playing also with > other distributions (mainly commercial) that are proposing Liberty > and > we'd like to have everything aligned. Additionally we'd try a forward > jump in place, to make a funny release-upgrade test and see what > happens. I like the jump release idea but I really would be inclined to run the newer code as you can always put the upgrade testing into the Newton release. In a few months Liberty will no longer be supported and I'd be inclined not to carry the technical debt from an earlier release into a production deployment, unless you are intending this for testing only? > > > > > > I tend to work back and reduce things down in complexity. > > I agree, i'll try. > > > > > > You can specify hostnames as per: > > > > http://docs.openstack.org/developer/tripleo-docs/advanced_deploymen > > t/no > > de_placement.html > > > > but perhaps this is what you are doing in your ~/hostname- > > nostri.yaml > > file? > > That file has simply this inside: > > parameter_defaults: > ControllerHostnameFormat: 'opsctrl%index%' > ComputeHostnameFormat: 'opskvmsvil%index%' We have more success with scheduler hints. So in our instackenv.json we have: "capabilities": "node:controller-1,boot_option:local", and then we have an environment file: parameter_defaults: ControllerSchedulerHints: 'capabilities:node': 'controller-%index%' NovaComputeSchedulerHints: 'capabilities:node': 'compute-%index%' HostnameMap: overcloud-controller-0: overcloud-ctrl-customer-0 overcloud-controller-1: overcloud-ctrl-customer-1 overcloud-controller-2: overcloud-ctrl-customer-2 overcloud-novacompute-0: overcloud-comp-customer-0 overcloud-novacompute-1: overcloud-comp-customer-1 > > That is simply because overcloud-novacompute-0 was too long name Not sure I understand this bit. Assuming an internal requirement rather than a technical one? > Luca > > > -- > "E' assurdo impiegare gli uomini di intelligenza eccellente per fare > calcoli che potrebbero essere affidati a chiunque se si usassero > delle > macchine" > Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) > > "Internet ? la pi? grande biblioteca del mondo. > Ma il problema ? che i libri sono tutti sparsi sul pavimento" > John Allen Paulos, Matematico (1945-vivente) > > Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , > -- Regards, Christopher From rasca at redhat.com Wed Aug 3 11:10:57 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Wed, 3 Aug 2016 13:10:57 +0200 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: <26bfe572-265b-0654-b3ec-42605de429e1@redhat.com> On 02/08/2016 20:52, Wesley Hayutin wrote: > > > On Tue, Aug 2, 2016 at 1:51 PM, Arie Bregman > wrote: > > On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin > wrote: > > > > > > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman > wrote: > >> > >> It became a discussion around the official installer and how to > >> improve it. While it's an important discussion, no doubt, I actually > >> want to focus on our automation and CI tools. 
> >> > >> Since I see there is an agreement that collaboration does make sense > >> here, let's move to the hard questions :) > >> > >> Wes, Tal - there is huge difference right now between infrared and > >> tripleo-quickstart in their structure. One is all-in-one project and > >> the other one is multiple micro projects managed by one project. Do > >> you think there is a way to consolidate or move to a different model > >> which will make sense for both RDO and RHOSP? something that both > >> groups can work on. > > > > > > I am happy to be part of the discussion, and I am also very > willing to help > > and try to drive suggestions to the tripleo-quickstart community. > > I need to make a point clear though, just to make sure we're on > the same > > page.. I do not own oooq, I am not a core on oooq. > > I can help facilitate a discussion but oooq is an upstream tripleo > tool that > > replaces instack-virt-setup [1]. > > It also happens to be a great tool for easily deploying TripleO > end to end > > [3] > > > > What I *can* do is show everyone how to manipulate > tripleo-quickstart and > > customize it with composable ansible roles, templates, settings etc.. > > This would allow any upstream or downstream project to override > the native > > oooq roles and *any* step that does not work for another group w/ > 3rd party > > roles [2]. > > These 3rd party roles can be free and opensource or internal only, > it works > > either way. > > This was discussed in depth as part of the production chain > meetings, the > > message may have been lost unfortunately. > > > > I hope this resets your expectations of what I can and can not do > as part of > > these discussions. > > Let me know where and when and I'm happy to be part of the discussion. > > Thanks for clarifying :) > > Next reasonable step would probably be to propose some sort of > blueprint for tripleo-quickstart to include some of InfraRed features > and by that have one tool driven by upstream development that can be > either cloned downstream or used as it is with an internal data > project. > > > Sure.. a blueprint would help everyone understand the feature and the > motivation. > You could also just plug in the feature you are looking for to oooq and > see if it meets > your requirements. See below. > > > > OR > > have InfraRed pushed into tripleo/openstack namespace and expose it to > the RDO community (without internal data of course). Personally, I > really like the pluggable[1] structure (which allows it to actually > consume tripleo-quickstart) so I'm not sure if it can be really merged > with tripleo-quickstart as proposed in the first option. > > > The way oooq is built one can plugin or override any part at run time > with custom playbooks, roles, and config. There isn't anything that > needs to be > checked in directly to oooq to use it. > > It's designed such that third parties can make their own decisions to > use something > native to quickstart, something from our role library, or something > completely independent. > This allows teams, individuals or whom ever to do what they need to with > out having to fork or re-roll the entire framework. > > The important step is to note that these 3rd party roles or > (oooq-extras) incubate, mature and then graduate to github/openstack. > The upstream openstack community should lead, evaluate, and via > blueprints vote on the canonical CI tool set. > > We can record a demonstration if required, but there is nothing stopping > anyone right now from > doing this today. 
I'm just browsing the role library for an example, I > had no idea [1] existed. > Looks like Raoul had a requirement and just made it work. Yes, this is working on quickstart and it's part of the CI process we're using to test HA stuff. But another example that can be made is the ansible-role-tripleo-baremetal-undercloud role, which basically is another thing we created to make our requirement (baremetal) satisfied. >From this point of view I find quickstart (in truth everything for me started with C.A.T.) very open in terms of potential contribution one could make. If you guys think it can be useful I can fill up a document in which I explain how one can add something to quickstart to satisfy a requirement, I just don't know if this is what we are looking for in this discussion. -- Raoul Scarazzini rasca at redhat.com > Justin, from the browbeat project has graciously created some > documentation regarding 3rd party roles. > It has yet to merge, but it should help illustrate how these roles are > used. [2] > > Thanks Arie for leading the discussion. > > [1] https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha > [2] https://review.openstack.org/#/c/346733/ > > > > > > > > I like the second option, although it still forces us to have two > tools, but after a period of time, I believe it will be clear what the > community prefers, which will allow us to remove one of the projects > eventually. > > So, unless there are other ideas, I think the next move should be > made by Tal. > > Tal, I'm willing to help with whatever is needed. > > [1] http://infrared.readthedocs.io/en/latest > > > > > Thanks > > > > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart > > [2] > > > https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo > > [3[ https://www.rdoproject.org/tripleo/ > > > > > >> > >> > >> Raoul - I totally agree with you, especially with "difficult for > >> anyone to start contributing and collaborate". This is exactly why > >> this discussion started. If we can agree on one set of tools, it will > >> make everyone's life easier - current groups, new contributors, folks > >> that just want to deploy TripleO quickly. But I'm afraid some > >> sacrifices need to be made by both groups. > >> > >> David - I thought WeiRDO is used only for packstack, so I apologize I > >> didn't include it. It does sound like an anther testing project, is > >> there a place to merge it with another existing testing project? like > >> Octario for example or one of TripleO testing projects. Or does it > >> make sense to keep it a standalone project? > >> > >> > >> > >> > >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown > > > >> wrote: > >> > Hello RDOistas (I think that is the expression?), > >> > > >> > Another year, another OpenStack deployment tool. :) > >> > > >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: > >> >> If we are talking about tools, I would also want to add something > >> >> with regards to user interface of these tools. This is based on my > >> >> own experience: > >> >> > >> >> I started trying to deploy Openstack with Staypuft and The > Foreman. > >> >> The UI of The Foreman was intuitive enough for the discovery and > >> >> provisioning of the servers. The OpenStack portion, not so much. > >> > > >> > This is exactly mine also. I think this works really well in > very large > >> > enterprise environments where you need to split out services > over more > >> > than three controllers. 
You do need good in-house puppet skills > though > >> > so better for enterprise with a good sysadmin team. > >> > > >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I > >> >> believe) that allowed you to graphically build your Openstack > cloud. > >> >> That was a reasonably good GUI for Openstack. > >> > > >> > Well, I found it barely usable. It was only ever good as a > graphical > >> > representation of what the build was doing. Interacting with > it was > >> > not great. > >> > > >> >> Following that, TripleO became a script-based installer that > >> >> required experience in Heat templates. I know I didn't have it and > >> >> had to ask on the mailing list about how to present this or change > >> >> that. I got a couple of installs working with this setup. > >> > > >> > Works well now that I understand all the foibles and have > invested time > >> > into understanding heat templates and puppet modules. It's good > in that > >> > it forces you to learn about orchestration, which is such an > important > >> > end-goal of cloud environments. > >> > > >> >> In the last session in Austin, my goal was to obtain > information on > >> >> how others were installing Openstack. I was pointed to Fuel as an > >> >> alternative. I tried it out, and it just worked. It had the > >> >> discovery capability from The Foreman, and the configuration > >> >> options from TripleO. I understand that it is based on Ansible and > >> >> because of that, it is not fully CentOS ready for all the > nodes (at > >> >> least not in version 9 that I tried). In any case, as a > deployer and > >> >> installer, it is the most well-rounded tool that I found. > >> > > >> > This is interesting to know. I've heard of Fuel of course but > there are > >> > some politics involved - it still has the team:single-vendor > tag but > >> > from what I see Mirantis are very keen for it to become the default > >> > OpenStack installer. I don't think being Ansible-based should be a > >> > problem - we are deploying OpenShift on OpenStack which uses > Openshift- > >> > ansible - this recently moved to Ansible 2.1 without too much > >> > disruption. > >> > > >> >> I'd love to see RDO moving in that direction, and having an > easy to > >> >> use, end-user-ready deployer tool. > >> > > >> > If it's as good as you say it's definitely worth evaluating. From our > >> > point of view, we want to be able to add services to the pacemaker > >> > cluster with some ease - for example Magnum and Sahara - and it > looks > >> > like there are steps being taken with regards to composable > roles and > >> > simplification of the pacemaker cluster to just core services. > >> > > >> > But if someone can explain that better I would appreciate it. > >> > > >> > Regards > >> > > >> >> IB > >> >> > >> >> > >> >> __ > >> >> Ignacio Bravo > >> >> LTG Federal, Inc > >> >> www.ltgfederal.com > >> >> > >> >> > >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard > > > >> >> > wrote: > >> >> > > >> >> > The vast majority of RDO's CI relies on using upstream > >> >> > installation/deployment projects in order to test > installation of > >> >> > RDO > >> >> > packages in different ways and configurations. > >> >> > > >> >> > Unless I'm mistaken, TripleO Quickstart was originally > created as a > >> >> > means to "easily" install TripleO in different topologies without > >> >> > requiring a massive amount of hardware. > >> >> > This project allows us to test TripleO in virtual deployments on > >> >> > just > >> >> > one server instead of, say, 6.
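To make the "whole topology on one box" idea above concrete: tools in this space describe the virtual nodes as data and let libvirt carry them all on a single host. The snippet below is purely illustrative -- it is not tripleo-quickstart's actual configuration schema, only the general shape of such a node definition.

# Illustrative node layout for a single-host virtual deployment (hypothetical schema)
undercloud:
  memory_mb: 16384
  vcpus: 4
overcloud_nodes:
  - name: control_0
    flavor: control    # e.g. 8 GB RAM, 2 vCPUs
  - name: compute_0
    flavor: compute    # e.g. 6 GB RAM, 2 vCPUs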
> >> >> > > >> >> > There's also WeIRDO [1] which was left out of your list. > >> >> > WeIRDO is super simple and simply aims to run upstream gate jobs > >> >> > (such > >> >> > as puppet-openstack-integration [2][3] and packstack [4][5]) > >> >> > outside > >> >> > of the gate. > >> >> > It'll install dependencies that are expected to be there (i.e, > >> >> > usually > >> >> > set up by the openstack-infra gate preparation jobs), set up the > >> >> > trunk > >> >> > repositories we're interested in testing and the rest is > handled by > >> >> > the upstream project testing framework. > >> >> > > >> >> > The WeIRDO project is /very/ low maintenance and brings an > >> >> > exceptional > >> >> > amount of coverage and value. > >> >> > This coverage is important because RDO provides OpenStack > packages > >> >> > or > >> >> > projects that are not necessarily used by TripleO and the > reality > >> >> > is > >> >> > that not everyone deploying OpenStack on CentOS with RDO will be > >> >> > using > >> >> > TripleO. > >> >> > > >> >> > Anyway, sorry for sidetracking but back to the topic, thanks for > >> >> > opening the discussion. > >> >> > > >> >> > What honestly perplexes me is the situation of CI in RDO and > OSP, > >> >> > especially around TripleO/Director, is the amount of work > that is > >> >> > spent downstream. > >> >> > And by downstream, here, I mean anything that isn't in TripleO > >> >> > proper. > >> >> > > >> >> > I keep dreaming about how awesome upstream TripleO CI would > be if > >> >> > all > >> >> > that effort was spent directly there instead -- and then > that all > >> >> > work > >> >> > could bear fruit and trickle down downstream for free. > >> >> > Exactly like how we keep improving the testing coverage in > >> >> > puppet-openstack-integration, it's automatically pulled in > RDO CI > >> >> > through WeIRDO for free. > >> >> > We make the upstream better and we benefit from it > simultaneously: > >> >> > everyone wins. > >> >> > > >> >> > [1]: https://github.com/rdo-infra/weirdo > >> >> > [2]: > https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst > >> >> > ack > >> >> > [3]: > https://github.com/openstack/puppet-openstack-integration#desc > >> >> > ription > >> >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> >> > [5]: > https://github.com/openstack/packstack#packstack-integration-t > >> >> > ests > >> >> > > >> >> > David Moreau Simard > >> >> > Senior Software Engineer | Openstack RDO > >> >> > > >> >> > dmsimard = [irc, github, twitter] > >> >> > > >> >> > David Moreau Simard > >> >> > Senior Software Engineer | Openstack RDO > >> >> > > >> >> > dmsimard = [irc, github, twitter] > >> >> > > >> >> > > >> >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman > > > >> >> > wrote: > >> >> > > Hi, > >> >> > > > >> >> > > I would like to start a discussion on the overlap between > tools > >> >> > > we > >> >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. > >> >> > > > >> >> > > Several months ago, we worked on one common framework for > >> >> > > deploying > >> >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can > say it > >> >> > > didn't work out well, which eventually led each group to > focus on > >> >> > > developing other existing/new tools. 
> >> >> > > > >> >> > > What we have right now for deploying and testing > >> >> > > -------------------------------------------------------- > >> >> > > === Component CI, Gating === > >> >> > > I'll start with the projects we created, I think that's > only fair > >> >> > > :) > >> >> > > > >> >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using > the OVB > >> >> > > project. > >> >> > > > >> >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per > >> >> > > release. > >> >> > > > >> >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, > >> >> > > tempest, > >> >> > > csit) + Patching RPMs with submitted code. > >> >> > > > >> >> > > === Automation, QE === > >> >> > > * InfraRed[4] - provision install and test. Pluggable and > >> >> > > modular, > >> >> > > allows you to create your own provisioner, installer and > tester. > >> >> > > > >> >> > > As far as I know, the groups is working now on different > >> >> > > structure of > >> >> > > one main project and three sub projects (provision, > install and > >> >> > > test). > >> >> > > > >> >> > > === RDO === > >> >> > > I didn't use RDO tools, so I apologize if I got something > wrong: > >> >> > > > >> >> > > * About ~25 micro independent Ansible roles[5]. You can either > >> >> > > choose > >> >> > > to use one of them or several together. They are used for > >> >> > > provisioning, installing and testing Tripleo. > >> >> > > > >> >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying > >> >> > > tripleo > >> >> > > and test it. > >> >> > > > >> >> > > As I said, I didn't use the tools, so feel free to add more > >> >> > > information you think is relevant. > >> >> > > > >> >> > > === More? === > >> >> > > I hope not. Let us know if are familiar with more tools. > >> >> > > > >> >> > > Conclusion > >> >> > > -------------- > >> >> > > So as you can see, there are several projects that eventually > >> >> > > overlap > >> >> > > in many areas. Each group is basically using the same tasks > >> >> > > (provision > >> >> > > resources, build/import overcloud images, run tempest, collect > >> >> > > logs, > >> >> > > etc.) > >> >> > > > >> >> > > Personally, I think it's a waste of resources. For each task > >> >> > > there is > >> >> > > at least two people from different groups who work on > exactly the > >> >> > > same > >> >> > > task. The most recent example I can give is OVB. As far as I > >> >> > > know, > >> >> > > both groups are working on implementing it in their set of > tools > >> >> > > right > >> >> > > now. > >> >> > > > >> >> > > On the other hand, you can always claim: "we already tried to > >> >> > > work on > >> >> > > the same framework, we failed to do it successfully" - > right, but > >> >> > > maybe with better ground rules we can manage it. We would > >> >> > > defiantly > >> >> > > benefit a lot from doing that. > >> >> > > > >> >> > > What's next? > >> >> > > ---------------- > >> >> > > So first of all, I would like to hear from you if you > think that > >> >> > > we > >> >> > > can collaborate once again or is it actually better to > keep it as > >> >> > > it > >> >> > > is now. > >> >> > > > >> >> > > If you agree that collaboration here makes sense, maybe > you have > >> >> > > ideas > >> >> > > on how we can do it better this time. > >> >> > > > >> >> > > I think that setting up a meeting to discuss the right > >> >> > > architecture > >> >> > > for the project(s) and decide on good review/gating process, > >> >> > > would be > >> >> > > a good start. 
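The "~25 micro independent Ansible roles" described above compose in the usual Ansible way: a thin top-level playbook strings the provision, install and test stages together, and any stage can be swapped for another role. A minimal sketch follows; the role names are hypothetical placeholders, not the actual roles from the redhat-openstack namespace linked as [5] below.

# deploy-and-test.yml -- hypothetical composition of provision/install/validate roles
- name: Provision the virtual environment
  hosts: virthost
  roles:
    - role: example.provision-libvirt      # placeholder name

- name: Install the undercloud and deploy the overcloud
  hosts: undercloud
  roles:
    - role: example.tripleo-undercloud     # placeholder name
    - role: example.tripleo-overcloud      # placeholder name

- name: Validate the result
  hosts: undercloud
  roles:
    - role: example.validate-tempest       # placeholder name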
> >> >> > > > >> >> > > Please let me know what do you think and keep in mind that > this > >> >> > > is not > >> >> > > about which tool is better!. As you can see I didn't > mention the > >> >> > > time > >> >> > > it takes for each tool to deploy and test, and also not > the full > >> >> > > feature list it supports. > >> >> > > If possible, we should keep it about collaborating and not > >> >> > > choosing > >> >> > > the best tool. Our solution could be the combination of two or > >> >> > > more > >> >> > > tools eventually (tripleo-red, infra-quickstart? :D ) > >> >> > > > >> >> > > "You may say I'm a dreamer, but I'm not the only one. I > hope some > >> >> > > day > >> >> > > you'll join us and the infra will be as one" :) > >> >> > > > >> >> > > [1] https://github.com/redhat-openstack/ansible-ovb > >> >> > > [2] https://github.com/redhat-openstack/ansible-rhosp > >> >> > > [3] https://github.com/redhat-openstack/octario > >> >> > > [4] https://github.com/rhosqeauto/InfraRed > >> >> > > [5] > https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi > >> >> > > ble-role > >> >> > > [6] https://github.com/openstack/tripleo-quickstart > >> >> > > > >> >> > > _______________________________________________ > >> >> > > rdo-list mailing list > >> >> > > rdo-list at redhat.com > >> >> > > https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > > > >> >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > >> >> > _______________________________________________ > >> >> > rdo-list mailing list > >> >> > rdo-list at redhat.com > >> >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > > >> >> > >> > -- > >> > Regards, > >> > > >> > Christopher Brown > >> > OpenStack Engineer > >> > OCF plc > >> > > >> > Tel: +44 (0)114 257 2200 > >> > Web: www.ocf.co.uk > >> > Blog: blog.ocf.co.uk > >> > Twitter: @ocfplc > >> > > >> > Please note, any emails relating to an OCF Support request must > always > >> > be sent to support at ocf.co.uk for a > ticket number to be generated or > >> > existing support ticket to be updated. Should this not be done > then OCF > >> > cannot be held responsible for requests not dealt with in a timely > >> > manner. > >> > > >> > OCF plc is a company registered in England and Wales. > Registered number > >> > 4132533, VAT number GB 780 6803 14. Registered office address: > OCF plc, > >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > Sheffield S35 > >> > 2PG. > >> > > >> > This message is private and confidential. If you have received this > >> > message in error, please notify us immediately and remove it > from your > >> > system. 
> >> > > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > > >> > >> > >> > >> -- > >> Arie Bregman > >> Red Hat Israel > >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > > > > > > > > -- > Arie Bregman > Red Hat Israel > Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From whayutin at redhat.com Wed Aug 3 13:00:46 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 3 Aug 2016 09:00:46 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: On Wed, Aug 3, 2016 at 5:23 AM, Frank Zdarsky wrote: > On Wed, Aug 3, 2016 at 9:28 AM, Tal Kammer wrote: > >> Thanks Arie for starting this discussion! (and sorry for joining in late) >> >> Some detailed explanations for those not familiar with InfraRed and would >> like to get the highlights: (Feel free to skip to my inline comments if >> you are familiar / too long of a post :) ). >> >> The InfraRed project is an Ansible based project comprised from three >> distinct tools: (currently under the InfraRed[1] project, and being split >> into their own standalone project soon). >> >> 1. ir-provisioner - responsible for building up a custom environment - >> you can affect the memory, CPU, HDD size, number of HDD each node has + >> number of networks (with/without DHCP) so for example one can deploy the >> following topology: (only an example to show the versatile options) >> 1 undercloud on RHEL 7 with 16GB of ram + 60GB of HDD with 3 network >> interfaces. >> 3 controllers on Centos 7 with 8GB of ram + 40GB of HDD >> 2 compute on Centos 7 with 6GB of ram + 60GB HDD >> 3 ceph nodes on RHEL 7 with 4GB of ram + 2 HDD one with 20GB + one with >> 40GB >> >> Example usage: (setting up the above VMs with four different HW specs) >> ir-provisioner virsh --host-address= >> --host-key= >> --topology-nodes=undercloud:1,controller:3,compute:2,ceph:3 >> >> *Note: while it is written "controller/compute/ceph" this is just setting >> up VMs, the names act more as a reference to the user of what is the role >> of each node. >> The installation of Openstack is done with a dedicated tool called >> `ir-installer` (next) >> >> 2. ir-installer - responsible for installing the product - supports >> "quickstart" mode (setting up a working environment in ~30 minutes) or E2E >> mode which does a full installation in ~1h. >> The installation process is completely customized. You can supply your >> own heat templates / overcloud invocation / undercloud.conf file to use / >> etc. You can also just run a specific task (using ansible --tags), so if >> you have a deployment ready and just need to run say, the introspection >> phase, you can fully choose what to run and what to skip even. >> >> 3. ir-tester - responsible for installing / configuring / running the >> tests - this project is meant to hold all testing tools we use so a user >> will be able to run any testing utility he would like without the need to >> "dive in". 
we supply a simple interface requesting simple to choose the >> testing tool and the set of tests one wishes to run and we'll do the work >> for him :) >> >> More info about InfraRed can be found here[1] (though I must admit that >> we still need some "love" around our docs) >> >> [1] - http://infrared.readthedocs.io/en/latest/ >> >> > I agree cleanly separating and modularizing infrastructure provisioining > (for some use cases like NFV ideally with mixed VM and baremetal > environments), OS installation and testing is a good approach. In that > context, what happened to Khaleesi [0]? Haven't seen that mentioned on the > original list. Nor Apex [1], the installer from OPNFV that does not only > address OpenStack itself, but also integration with complementary projects > like ODL? > I consider provisioning baremetal for either libvirt hosts or baremetal installs is a separate project and conversation from the CI tools used to deploy tripleo. To be a little more clear, provisioning imho should be completely uncoupled from the CI code that deploys tripleo. Any team should be able to use provisioner tools that best fit their needs. If there is interest in collaborating on provisioners maybe we can do that in another thread. Why in another thread? Again my thought is they should be uncoupled completely from our CI source code. The rdo infra team is using two provisioners, both completely uncoupled from tripleo-quickstart. 1. Ansible extra heat provisioner: for ovb, written by Mathieu Bultel https://github.com/ansible/ansible-modules-extras/commit/c6a45234e094922786de78a83db3028804cc9141 2. cico, the official client for ci.centos written by David Simard http://python-cicoclient.readthedocs.io/en/latest/cli_usage.html Our baremetal deployments do not require any provisioning outside of what is done w/ ironic and tripleo. Upstream tripleo ci provisioning is done via node pool and ovb. FYI.. khaleesi has been deprecated and is no longer used by RDO infra. Thanks > > [0] https://github.com/redhat-openstack/khaleesi > [1] https://wiki.opnfv.org/display/apex/Apex > > >> >> On Tue, Aug 2, 2016 at 9:52 PM, Wesley Hayutin >> wrote: >> >>> >>> >>> On Tue, Aug 2, 2016 at 1:51 PM, Arie Bregman >>> wrote: >>> >>>> On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin >>>> wrote: >>>> > >>>> > >>>> > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman >>>> wrote: >>>> >> >>>> >> It became a discussion around the official installer and how to >>>> >> improve it. While it's an important discussion, no doubt, I actually >>>> >> want to focus on our automation and CI tools. >>>> >> >>>> >> Since I see there is an agreement that collaboration does make sense >>>> >> here, let's move to the hard questions :) >>>> >> >>>> >> Wes, Tal - there is huge difference right now between infrared and >>>> >> tripleo-quickstart in their structure. One is all-in-one project and >>>> >> the other one is multiple micro projects managed by one project. Do >>>> >> you think there is a way to consolidate or move to a different model >>>> >> which will make sense for both RDO and RHOSP? something that both >>>> >> groups can work on. >>>> > >>>> > >>>> > I am happy to be part of the discussion, and I am also very willing >>>> to help >>>> > and try to drive suggestions to the tripleo-quickstart community. >>>> > I need to make a point clear though, just to make sure we're on the >>>> same >>>> > page.. I do not own oooq, I am not a core on oooq. 
>>>> > I can help facilitate a discussion but oooq is an upstream tripleo >>>> tool that >>>> > replaces instack-virt-setup [1]. >>>> > It also happens to be a great tool for easily deploying TripleO end >>>> to end >>>> > [3] >>>> > >>>> > What I *can* do is show everyone how to manipulate tripleo-quickstart >>>> and >>>> > customize it with composable ansible roles, templates, settings etc.. >>>> > This would allow any upstream or downstream project to override the >>>> native >>>> > oooq roles and *any* step that does not work for another group w/ 3rd >>>> party >>>> > roles [2]. >>>> > These 3rd party roles can be free and opensource or internal only, it >>>> works >>>> > either way. >>>> > This was discussed in depth as part of the production chain >>>> meetings, the >>>> > message may have been lost unfortunately. >>>> > >>>> > I hope this resets your expectations of what I can and can not do as >>>> part of >>>> > these discussions. >>>> > Let me know where and when and I'm happy to be part of the discussion. >>>> >>>> Thanks for clarifying :) >>>> >>>> Next reasonable step would probably be to propose some sort of >>>> blueprint for tripleo-quickstart to include some of InfraRed features >>>> and by that have one tool driven by upstream development that can be >>>> either cloned downstream or used as it is with an internal data >>>> project. >>>> >>> >>> Sure.. a blueprint would help everyone understand the feature and the >>> motivation. >>> You could also just plug in the feature you are looking for to oooq and >>> see if it meets >>> your requirements. See below. >>> >> >> While I think a blueprint is a good starting point, I'm afraid that our >> approach for provisioning machines is completely different so I'm not sure >> how to propose such a blueprint as it will probably require quite the >> design change from today's approach. >> >> >>> >>> >>>> >>>> OR >>>> >>>> have InfraRed pushed into tripleo/openstack namespace and expose it to >>>> the RDO community (without internal data of course). Personally, I >>>> really like the pluggable[1] structure (which allows it to actually >>>> consume tripleo-quickstart) so I'm not sure if it can be really merged >>>> with tripleo-quickstart as proposed in the first option. >>>> >>> >> I must admit that I like this option better as it introduces a tool to >> upstream and let the community drive it further / get more feedback on how >> to improve. >> A second benefit might be that by introducing a new concept / design we >> can take what is best from two worlds and improve. >> I would love to see an open discussion on the tool upstream and how we >> can improve the overall process. >> >> >>> The way oooq is built one can plugin or override any part at run time >>> with custom playbooks, roles, and config. There isn't anything that >>> needs to be >>> checked in directly to oooq to use it. >>> >>> It's designed such that third parties can make their own decisions to >>> use something >>> native to quickstart, something from our role library, or something >>> completely independent. >>> This allows teams, individuals or whom ever to do what they need to with >>> out having to fork or re-roll the entire framework. >>> >>> The important step is to note that these 3rd party roles or >>> (oooq-extras) incubate, mature and then graduate to github/openstack. >>> The upstream openstack community should lead, evaluate, and via >>> blueprints vote on the canonical CI tool set. 
>>> >>> We can record a demonstration if required, but there is nothing stopping >>> anyone right now from >>> doing this today. I'm just browsing the role library for an example, I >>> had no idea [1] existed. >>> Looks like Raoul had a requirement and just made it work. >>> >>> Justin, from the browbeat project has graciously created some >>> documentation regarding 3rd party roles. >>> It has yet to merge, but it should help illustrate how these roles are >>> used. [2] >>> >>> Thanks Arie for leading the discussion. >>> >>> [1] >>> https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha >>> [2] https://review.openstack.org/#/c/346733/ >>> >>> >>> >>> >>> >>> >>>> >>>> I like the second option, although it still forces us to have two >>>> tools, but after a period of time, I believe it will be clear what the >>>> community prefers, which will allow us to remove one of the projects >>>> eventually. >>>> >>>> So, unless there are other ideas, I think the next move should be made >>>> by Tal. >>>> >>>> Tal, I'm willing to help with whatever is needed. >>>> >>> >> Thanks Arie for starting this discussion again, I believe we have still >> much work ahead of us but this is definitely a step in the right direction. >> >> >>> >>>> [1] http://infrared.readthedocs.io/en/latest >>>> >>>> > >>>> > Thanks >>>> > >>>> > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart >>>> > [2] >>>> > >>>> https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo >>>> > [3[ https://www.rdoproject.org/tripleo/ >>>> > >>>> > >>>> >> >>>> >> >>>> >> Raoul - I totally agree with you, especially with "difficult for >>>> >> anyone to start contributing and collaborate". This is exactly why >>>> >> this discussion started. If we can agree on one set of tools, it will >>>> >> make everyone's life easier - current groups, new contributors, folks >>>> >> that just want to deploy TripleO quickly. But I'm afraid some >>>> >> sacrifices need to be made by both groups. >>>> >> >>>> >> David - I thought WeiRDO is used only for packstack, so I apologize I >>>> >> didn't include it. It does sound like an anther testing project, is >>>> >> there a place to merge it with another existing testing project? like >>>> >> Octario for example or one of TripleO testing projects. Or does it >>>> >> make sense to keep it a standalone project? >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown < >>>> cbrown2 at ocf.co.uk> >>>> >> wrote: >>>> >> > Hello RDOistas (I think that is the expression?), >>>> >> > >>>> >> > Another year, another OpenStack deployment tool. :) >>>> >> > >>>> >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: >>>> >> >> If we are talking about tools, I would also want to add something >>>> >> >> with regards to user interface of these tools. This is based on my >>>> >> >> own experience: >>>> >> >> >>>> >> >> I started trying to deploy Openstack with Staypuft and The >>>> Foreman. >>>> >> >> The UI of The Foreman was intuitive enough for the discovery and >>>> >> >> provisioning of the servers. The OpenStack portion, not so much. >>>> >> > >>>> >> > This is exactly mine also. I think this works really well in very >>>> large >>>> >> > enterprise environments where you need to split out services over >>>> more >>>> >> > than three controllers. You do need good in-house puppet skills >>>> though >>>> >> > so better for enterprise with a good sysadmin team. 
>> --
>> Tal Kammer
>> Associate manager, automation and infrastructure, Openstack platform.
>> Red Hat Israel
>> Automation group mojo: https://mojo.redhat.com/docs/DOC-1011659
>>
>> _______________________________________________
>> rdo-list mailing list
>> rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> --
> {Kind regards | Mit besten Grüßen},
>
> Frank
>
> ________________________________
> Frank Zdarsky | NFV Partner Engineering | Office of Technology | Red Hat
> e: fzdarsky at redhat.com | irc: fzdarsky @freenode | m: +49 175 82 11 64 4 | t: +49 711 96 43 70 02
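Wes's "Ansible extra heat provisioner" above drives an OVB-style test environment as a plain Heat stack from Ansible. Assuming the module in question is os_stack (or something equivalent from the linked ansible-modules-extras change), a minimal sketch of the idea looks like this; the template path and parameters are placeholders, not the actual OVB templates, and the play needs the shade library that Ansible's os_* modules rely on.

# provision-ovb-env.yml -- create a Heat stack that carries the test nodes
- name: Provision an OVB-style environment as a Heat stack (sketch)
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create or update the stack
      os_stack:
        name: ovb-test-env
        state: present
        template: templates/ovb-env.yaml     # placeholder template
        parameters:                          # placeholder parameters
          node_count: 3
          key_name: my-keypair
      register: provision

    - name: Show what Heat returned
      debug:
        var: provision

Tearing the environment down is the same task with state: absent, which is what makes this style of provisioning easy to keep uncoupled from the CI code that deploys TripleO.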
From lorenzetto.luca at gmail.com Wed Aug 3 13:50:16 2016 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Wed, 3 Aug 2016 15:50:16 +0200 Subject: [rdo-list] [tripleo] Troubles deploying Liberty with HA setup In-Reply-To: <1470219024.2497.72.camel@ocf.co.uk> References: <1470155660.2497.36.camel@ocf.co.uk> <1470219024.2497.72.camel@ocf.co.uk> Message-ID: On Wed, Aug 3, 2016 at 12:10 PM, Christopher Brown wrote: > Hi Luca, [cut] Hi Christopher > I'm not so sure, you seem to have "overcloud" at the end of all three > controller entries? Yes, they did. > >> > >> > 2. Revert hostname change and see if it is happy >> >> I'll try. It seems a good solution, but I can't understand why this would >> have an impact if the hosts file is correct and the hosts can talk to each >> other using the assigned names. > > Sure but you specifically acknowledge it is one of the things you have > changed so I would be inclined to revert that change and see if the > deploy works. > I did this. I removed dns_domain from neutron.conf and set dhcp_domain to localdomain in dhcp_agent.ini and nova.conf. I restored the stack files to the originals (where I had put my.domain in place of localdomain). Then the deployment went well. A good result, but now I'm destroying everything and retrying with ControllerHostnameFormat and ComputeHostnameFormat, which were OK. Then I'll try your suggestion. >> > >> > 3. Definitely try and compose from latest stable CentOS SIG Liberty >> > packages (not delorean or DLRN or whatever it is these days) And then this. Thanks, Luca -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, philosopher and mathematician (1646-1716) "The Internet is the largest library in the world. The problem is that the books are all scattered on the floor" John Allen Paulos, mathematician (1945-present) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From rlandy at redhat.com Wed Aug 3 13:57:02 2016 From: rlandy at redhat.com (Ronelle Landy) Date: Wed, 3 Aug 2016 09:57:02 -0400 (EDT) Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: Message-ID: <771145306.13316401.1470232622085.JavaMail.zimbra@redhat.com> > From: "Wesley Hayutin" > To: "Frank Zdarsky" > Cc: "Raoul Scarazzini" , rdo-list at redhat.com > Sent: Wednesday, August 3, 2016 9:00:46 AM > Subject: Re: [rdo-list] Multiple tools for deploying and testing TripleO > > > > On Wed, Aug 3, 2016 at 5:23 AM, Frank Zdarsky < fzdarsky at redhat.com > wrote: > > > > On Wed, Aug 3, 2016 at 9:28 AM, Tal Kammer < tkammer at redhat.com > wrote: > > > > Thanks Arie for starting this discussion! (and sorry for joining in late) > > Some detailed explanations for those not familiar with InfraRed and would > like to get the highlights: (Feel free to skip to my inline comments if you > are familiar / too long of a post :) ). > > The InfraRed project is an Ansible based project comprised from three > distinct tools: (currently under the InfraRed[1] project, and being split > into their own standalone project soon). > > 1. ir-provisioner - responsible for building up a custom environment - you > can affect the memory, CPU, HDD size, number of HDD each node has + number > of networks (with/without DHCP) so for example one can deploy the following > topology: (only an example to show the versatile options) > 1 undercloud on RHEL 7 with 16GB of ram + 60GB of HDD with 3 network > interfaces.
> 3 controllers on Centos 7 with 8GB of ram + 40GB of HDD > 2 compute on Centos 7 with 6GB of ram + 60GB HDD > 3 ceph nodes on RHEL 7 with 4GB of ram + 2 HDD one with 20GB + one with 40GB > > Example usage: (setting up the above VMs with four different HW specs) > ir-provisioner virsh --host-address= > --host-key= > --topology-nodes=undercloud:1,controller:3,compute:2,ceph:3 > > *Note: while it is written "controller/compute/ceph" this is just setting up > VMs, the names act more as a reference to the user of what is the role of > each node. > The installation of Openstack is done with a dedicated tool called > `ir-installer` (next) > > 2. ir-installer - responsible for installing the product - supports > "quickstart" mode (setting up a working environment in ~30 minutes) or E2E > mode which does a full installation in ~1h. > The installation process is completely customized. You can supply your own > heat templates / overcloud invocation / undercloud.conf file to use / etc. > You can also just run a specific task (using ansible --tags), so if you have > a deployment ready and just need to run say, the introspection phase, you > can fully choose what to run and what to skip even. > > 3. ir-tester - responsible for installing / configuring / running the tests - > this project is meant to hold all testing tools we use so a user will be > able to run any testing utility he would like without the need to "dive in". > we supply a simple interface requesting simple to choose the testing tool > and the set of tests one wishes to run and we'll do the work for him :) > > More info about InfraRed can be found here[1] (though I must admit that we > still need some "love" around our docs) > > [1] - http://infrared.readthedocs.io/en/latest/ > > > I agree cleanly separating and modularizing infrastructure provisioining (for > some use cases like NFV ideally with mixed VM and baremetal environments), > OS installation and testing is a good approach. In that context, what > happened to Khaleesi [0]? Haven't seen that mentioned on the original list. > Nor Apex [1], the installer from OPNFV that does not only address OpenStack > itself, but also integration with complementary projects like ODL? > > I consider provisioning baremetal for either libvirt hosts or baremetal > installs is a separate project and conversation from the CI tools used to > deploy tripleo. > To be a little more clear, provisioning imho should be completely uncoupled > from the CI code that deploys tripleo. Any team should be able to use > provisioner tools that best fit their needs. > If there is interest in collaborating on provisioners maybe we can do that in > another thread. Why in another thread? Again my thought is they should be > uncoupled completely from our CI source code. > > The rdo infra team is using two provisioners, both completely uncoupled from > tripleo-quickstart. > > 1. Ansible extra heat provisioner: for ovb, written by Mathieu Bultel > https://github.com/ansible/ansible-modules-extras/commit/c6a45234e094922786de78a83db3028804cc9141 Link to the Ansible role for Tripleo-Quickstart that uses the heat provisioner: https://github.com/redhat-openstack/ansible-role-tripleo-provision-heat (was forked originally from https://github.com/redhat-openstack/ansible-ovb) I am also looking into implementing Goneri's solution to run OVB on public clouds. > > 2. 
cico, the official client for ci.centos written by David Simard > http://python-cicoclient.readthedocs.io/en/latest/cli_usage.html > > Our baremetal deployments do not require any provisioning outside of what is > done w/ ironic and tripleo. Our current CI baremetal jobs do not require hardware provisioning as we are using a virt undercloud (as we were advised this is pretty standard practice for many users). This role preps the baremetal host machine to run a VM to serve as the undercloud: https://github.com/redhat-openstack/ansible-role-tripleo-baremetal-prep-virthost Other related roles to complete the baremetal workflow (includes some validations): https://github.com/redhat-openstack/ansible-role-tripleo-validate-ipmi https://github.com/redhat-openstack/ansible-role-tripleo-baremetal-overcloud > > Upstream tripleo ci provisioning is done via node pool and ovb. I have these steps and am replicating this workflow on a standard host cloud for comparison and testing. > > FYI.. khaleesi has been deprecated and is no longer used by RDO infra. > > Thanks > > > > > [0] https://github.com/redhat-openstack/khaleesi > [1] https://wiki.opnfv.org/display/apex/Apex > > > > > On Tue, Aug 2, 2016 at 9:52 PM, Wesley Hayutin < whayutin at redhat.com > wrote: > > > > > > On Tue, Aug 2, 2016 at 1:51 PM, Arie Bregman < abregman at redhat.com > wrote: > > > > On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin < whayutin at redhat.com > wrote: > > > > > > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman < abregman at redhat.com > wrote: > >> > >> It became a discussion around the official installer and how to > >> improve it. While it's an important discussion, no doubt, I actually > >> want to focus on our automation and CI tools. > >> > >> Since I see there is an agreement that collaboration does make sense > >> here, let's move to the hard questions :) > >> > >> Wes, Tal - there is huge difference right now between infrared and > >> tripleo-quickstart in their structure. One is all-in-one project and > >> the other one is multiple micro projects managed by one project. Do > >> you think there is a way to consolidate or move to a different model > >> which will make sense for both RDO and RHOSP? something that both > >> groups can work on. > > > > > > I am happy to be part of the discussion, and I am also very willing to help > > and try to drive suggestions to the tripleo-quickstart community. > > I need to make a point clear though, just to make sure we're on the same > > page.. I do not own oooq, I am not a core on oooq. > > I can help facilitate a discussion but oooq is an upstream tripleo tool > > that > > replaces instack-virt-setup [1]. > > It also happens to be a great tool for easily deploying TripleO end to end > > [3] > > > > What I *can* do is show everyone how to manipulate tripleo-quickstart and > > customize it with composable ansible roles, templates, settings etc.. > > This would allow any upstream or downstream project to override the native > > oooq roles and *any* step that does not work for another group w/ 3rd party > > roles [2]. > > These 3rd party roles can be free and opensource or internal only, it works > > either way. > > This was discussed in depth as part of the production chain meetings, the > > message may have been lost unfortunately. > > > > I hope this resets your expectations of what I can and can not do as part > > of > > these discussions. > > Let me know where and when and I'm happy to be part of the discussion. 
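The ir-installer behaviour quoted above -- re-running only one phase of an existing deployment with ansible --tags -- is plain Ansible tagging. A minimal sketch of how such phases are usually wired up follows; the task names, tags and commands are illustrative, not InfraRed's actual playbooks.

# install.yml -- each deployment phase carries a tag so it can be run on its own
- name: Install the overcloud in selectable phases (sketch)
  hosts: undercloud
  gather_facts: false
  tasks:
    - name: Run introspection on the registered nodes
      command: openstack baremetal introspection bulk start
      tags: introspection

    - name: Deploy the overcloud
      command: openstack overcloud deploy --templates
      tags: overcloud-deploy

With a deployment already in place, running ansible-playbook install.yml --tags introspection repeats just the introspection phase and skips everything else.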
> > Thanks for clarifying :) > > Next reasonable step would probably be to propose some sort of > blueprint for tripleo-quickstart to include some of InfraRed features > and by that have one tool driven by upstream development that can be > either cloned downstream or used as it is with an internal data > project. > > Sure.. a blueprint would help everyone understand the feature and the > motivation. > You could also just plug in the feature you are looking for to oooq and see > if it meets > your requirements. See below. > > While I think a blueprint is a good starting point, I'm afraid that our > approach for provisioning machines is completely different so I'm not sure > how to propose such a blueprint as it will probably require quite the design > change from today's approach. > > > > > > > OR > > have InfraRed pushed into tripleo/openstack namespace and expose it to > the RDO community (without internal data of course). Personally, I > really like the pluggable[1] structure (which allows it to actually > consume tripleo-quickstart) so I'm not sure if it can be really merged > with tripleo-quickstart as proposed in the first option. > > I must admit that I like this option better as it introduces a tool to > upstream and let the community drive it further / get more feedback on how > to improve. > A second benefit might be that by introducing a new concept / design we can > take what is best from two worlds and improve. > I would love to see an open discussion on the tool upstream and how we can > improve the overall process. > > > > > > > > > The way oooq is built one can plugin or override any part at run time > with custom playbooks, roles, and config. There isn't anything that needs to > be > checked in directly to oooq to use it. > > It's designed such that third parties can make their own decisions to use > something > native to quickstart, something from our role library, or something > completely independent. > This allows teams, individuals or whom ever to do what they need to with out > having to fork or re-roll the entire framework. > > The important step is to note that these 3rd party roles or (oooq-extras) > incubate, mature and then graduate to github/openstack. > The upstream openstack community should lead, evaluate, and via blueprints > vote on the canonical CI tool set. > > We can record a demonstration if required, but there is nothing stopping > anyone right now from > doing this today. I'm just browsing the role library for an example, I had no > idea [1] existed. > Looks like Raoul had a requirement and just made it work. > > Justin, from the browbeat project has graciously created some documentation > regarding 3rd party roles. > It has yet to merge, but it should help illustrate how these roles are used. > [2] > > Thanks Arie for leading the discussion. > > [1] > https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha > [2] https://review.openstack.org/#/c/346733/ > > > > > > > > I like the second option, although it still forces us to have two > tools, but after a period of time, I believe it will be clear what the > community prefers, which will allow us to remove one of the projects > eventually. > > So, unless there are other ideas, I think the next move should be made by > Tal. > > Tal, I'm willing to help with whatever is needed. > > Thanks Arie for starting this discussion again, I believe we have still much > work ahead of us but this is definitely a step in the right direction. 
> > > > > > > [1] http://infrared.readthedocs.io/en/latest > > > > > Thanks > > > > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart > > [2] > > https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo > > [3[ https://www.rdoproject.org/tripleo/ > > > > > >> > >> > >> Raoul - I totally agree with you, especially with "difficult for > >> anyone to start contributing and collaborate". This is exactly why > >> this discussion started. If we can agree on one set of tools, it will > >> make everyone's life easier - current groups, new contributors, folks > >> that just want to deploy TripleO quickly. But I'm afraid some > >> sacrifices need to be made by both groups. > >> > >> David - I thought WeiRDO is used only for packstack, so I apologize I > >> didn't include it. It does sound like an anther testing project, is > >> there a place to merge it with another existing testing project? like > >> Octario for example or one of TripleO testing projects. Or does it > >> make sense to keep it a standalone project? > >> > >> > >> > >> > >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown < cbrown2 at ocf.co.uk > > >> wrote: > >> > Hello RDOistas (I think that is the expression?), > >> > > >> > Another year, another OpenStack deployment tool. :) > >> > > >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: > >> >> If we are talking about tools, I would also want to add something > >> >> with regards to user interface of these tools. This is based on my > >> >> own experience: > >> >> > >> >> I started trying to deploy Openstack with Staypuft and The Foreman. > >> >> The UI of The Foreman was intuitive enough for the discovery and > >> >> provisioning of the servers. The OpenStack portion, not so much. > >> > > >> > This is exactly mine also. I think this works really well in very large > >> > enterprise environments where you need to split out services over more > >> > than three controllers. You do need good in-house puppet skills though > >> > so better for enterprise with a good sysadmin team. > >> > > >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I > >> >> believe) that allowed you to graphically build your Openstack cloud. > >> >> That was a reasonable good GUI for Openstack. > >> > > >> > Well, I found it barely usable. It was only ever good as a graphical > >> > representiation of what the build was doing. Interacting with it was > >> > not great. > >> > > >> >> Following that, TripleO become a script based installer, that > >> >> required experience in Heat templates. I know I didn?t have it and > >> >> had to ask in the mailing list about how to present this or change > >> >> that. I got a couple of installs working with this setup. > >> > > >> > Works well now that I understand all the foibles and have invested time > >> > into understanding heat templates and puppet modules. Its good in that > >> > it forces you to learn about orchestration which is such an important > >> > end-goal of cloud environments. > >> > > >> >> In the last session in Austin, my goal was to obtain information on > >> >> how others were installing Openstack. I was pointed to Fuel as an > >> >> alternative. I tried it up, and it just worked. It had the > >> >> discovering capability from The Foreman, and the configuration > >> >> options from TripleO. I understand that is based in Ansible and > >> >> because of that, it is not fully CentOS ready for all the nodes (at > >> >> least not in version 9 that I tried). 
In any case, as a deployer and > >> >> installer, it is the most well rounded tool that I found. > >> > > >> > This is interesting to know. I've heard of Fuel of course but there are > >> > some politics involved - it still has the team:single-vendor tag but > >> > from what I see Mirantis are very keen for it to become the default > >> > OpenStack installer. I don't think being Ansible-based should be a > >> > problem - we are deploying OpenShift on OpenStack which uses Openshift- > >> > ansible - this recently moved to Ansible 2.1 without too much > >> > disruption. > >> > > >> >> I?d love to see RDO moving into that direction, and having an easy to > >> >> use, end user ready deployer tool. > >> > > >> > If its as good as you say its definitely worth evaluating. From our > >> > point of view, we want to be able to add services to the pacemaker > >> > cluster with some ease - for example Magnum and Sahara - and it looks > >> > like there are steps being taken with regards to composable roles and > >> > simplification of the pacemaker cluster to just core services. > >> > > >> > But if someone can explain that better I would appreciate it. > >> > > >> > Regards > >> > > >> >> IB > >> >> > >> >> > >> >> __ > >> >> Ignacio Bravo > >> >> LTG Federal, Inc > >> >> www.ltgfederal.com > >> >> > >> >> > >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard < dms at redhat.com > > >> >> > wrote: > >> >> > > >> >> > The vast majority of RDO's CI relies on using upstream > >> >> > installation/deployment projects in order to test installation of > >> >> > RDO > >> >> > packages in different ways and configurations. > >> >> > > >> >> > Unless I'm mistaken, TripleO Quickstart was originally created as a > >> >> > mean to "easily" install TripleO in different topologies without > >> >> > requiring a massive amount of hardware. > >> >> > This project allows us to test TripleO in virtual deployments on > >> >> > just > >> >> > one server instead of, say, 6. > >> >> > > >> >> > There's also WeIRDO [1] which was left out of your list. > >> >> > WeIRDO is super simple and simply aims to run upstream gate jobs > >> >> > (such > >> >> > as puppet-openstack-integration [2][3] and packstack [4][5]) > >> >> > outside > >> >> > of the gate. > >> >> > It'll install dependencies that are expected to be there (i.e, > >> >> > usually > >> >> > set up by the openstack-infra gate preparation jobs), set up the > >> >> > trunk > >> >> > repositories we're interested in testing and the rest is handled by > >> >> > the upstream project testing framework. > >> >> > > >> >> > The WeIRDO project is /very/ low maintenance and brings an > >> >> > exceptional > >> >> > amount of coverage and value. > >> >> > This coverage is important because RDO provides OpenStack packages > >> >> > or > >> >> > projects that are not necessarily used by TripleO and the reality > >> >> > is > >> >> > that not everyone deploying OpenStack on CentOS with RDO will be > >> >> > using > >> >> > TripleO. > >> >> > > >> >> > Anyway, sorry for sidetracking but back to the topic, thanks for > >> >> > opening the discussion. > >> >> > > >> >> > What honestly perplexes me is the situation of CI in RDO and OSP, > >> >> > especially around TripleO/Director, is the amount of work that is > >> >> > spent downstream. > >> >> > And by downstream, here, I mean anything that isn't in TripleO > >> >> > proper. 
> >> >> > > >> >> > I keep dreaming about how awesome upstream TripleO CI would be if > >> >> > all > >> >> > that effort was spent directly there instead -- and then that all > >> >> > work > >> >> > could bear fruit and trickle down downstream for free. > >> >> > Exactly like how we keep improving the testing coverage in > >> >> > puppet-openstack-integration, it's automatically pulled in RDO CI > >> >> > through WeIRDO for free. > >> >> > We make the upstream better and we benefit from it simultaneously: > >> >> > everyone wins. > >> >> > > >> >> > [1]: https://github.com/rdo-infra/weirdo > >> >> > [2]: https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst > >> >> > ack > >> >> > [3]: https://github.com/openstack/puppet-openstack-integration#desc > >> >> > ription > >> >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack > >> >> > [5]: https://github.com/openstack/packstack#packstack-integration-t > >> >> > ests > >> >> > > >> >> > David Moreau Simard > >> >> > Senior Software Engineer | Openstack RDO > >> >> > > >> >> > dmsimard = [irc, github, twitter] > >> >> > > >> >> > David Moreau Simard > >> >> > Senior Software Engineer | Openstack RDO > >> >> > > >> >> > dmsimard = [irc, github, twitter] > >> >> > > >> >> > > >> >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman < abregman at redhat.com > > >> >> > wrote: > >> >> > > Hi, > >> >> > > > >> >> > > I would like to start a discussion on the overlap between tools > >> >> > > we > >> >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. > >> >> > > > >> >> > > Several months ago, we worked on one common framework for > >> >> > > deploying > >> >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can say it > >> >> > > didn't work out well, which eventually led each group to focus on > >> >> > > developing other existing/new tools. > >> >> > > > >> >> > > What we have right now for deploying and testing > >> >> > > -------------------------------------------------------- > >> >> > > === Component CI, Gating === > >> >> > > I'll start with the projects we created, I think that's only fair > >> >> > > :) > >> >> > > > >> >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the OVB > >> >> > > project. > >> >> > > > >> >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per > >> >> > > release. > >> >> > > > >> >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, > >> >> > > tempest, > >> >> > > csit) + Patching RPMs with submitted code. > >> >> > > > >> >> > > === Automation, QE === > >> >> > > * InfraRed[4] - provision install and test. Pluggable and > >> >> > > modular, > >> >> > > allows you to create your own provisioner, installer and tester. > >> >> > > > >> >> > > As far as I know, the groups is working now on different > >> >> > > structure of > >> >> > > one main project and three sub projects (provision, install and > >> >> > > test). > >> >> > > > >> >> > > === RDO === > >> >> > > I didn't use RDO tools, so I apologize if I got something wrong: > >> >> > > > >> >> > > * About ~25 micro independent Ansible roles[5]. You can either > >> >> > > choose > >> >> > > to use one of them or several together. They are used for > >> >> > > provisioning, installing and testing Tripleo. > >> >> > > > >> >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying > >> >> > > tripleo > >> >> > > and test it. > >> >> > > > >> >> > > As I said, I didn't use the tools, so feel free to add more > >> >> > > information you think is relevant. 
> >> >> > > > >> >> > > === More? === > >> >> > > I hope not. Let us know if are familiar with more tools. > >> >> > > > >> >> > > Conclusion > >> >> > > -------------- > >> >> > > So as you can see, there are several projects that eventually > >> >> > > overlap > >> >> > > in many areas. Each group is basically using the same tasks > >> >> > > (provision > >> >> > > resources, build/import overcloud images, run tempest, collect > >> >> > > logs, > >> >> > > etc.) > >> >> > > > >> >> > > Personally, I think it's a waste of resources. For each task > >> >> > > there is > >> >> > > at least two people from different groups who work on exactly the > >> >> > > same > >> >> > > task. The most recent example I can give is OVB. As far as I > >> >> > > know, > >> >> > > both groups are working on implementing it in their set of tools > >> >> > > right > >> >> > > now. > >> >> > > > >> >> > > On the other hand, you can always claim: "we already tried to > >> >> > > work on > >> >> > > the same framework, we failed to do it successfully" - right, but > >> >> > > maybe with better ground rules we can manage it. We would > >> >> > > defiantly > >> >> > > benefit a lot from doing that. > >> >> > > > >> >> > > What's next? > >> >> > > ---------------- > >> >> > > So first of all, I would like to hear from you if you think that > >> >> > > we > >> >> > > can collaborate once again or is it actually better to keep it as > >> >> > > it > >> >> > > is now. > >> >> > > > >> >> > > If you agree that collaboration here makes sense, maybe you have > >> >> > > ideas > >> >> > > on how we can do it better this time. > >> >> > > > >> >> > > I think that setting up a meeting to discuss the right > >> >> > > architecture > >> >> > > for the project(s) and decide on good review/gating process, > >> >> > > would be > >> >> > > a good start. > >> >> > > > >> >> > > Please let me know what do you think and keep in mind that this > >> >> > > is not > >> >> > > about which tool is better!. As you can see I didn't mention the > >> >> > > time > >> >> > > it takes for each tool to deploy and test, and also not the full > >> >> > > feature list it supports. > >> >> > > If possible, we should keep it about collaborating and not > >> >> > > choosing > >> >> > > the best tool. Our solution could be the combination of two or > >> >> > > more > >> >> > > tools eventually (tripleo-red, infra-quickstart? :D ) > >> >> > > > >> >> > > "You may say I'm a dreamer, but I'm not the only one. 
I hope some > >> >> > > day > >> >> > > you'll join us and the infra will be as one" :) > >> >> > > > >> >> > > [1] https://github.com/redhat-openstack/ansible-ovb > >> >> > > [2] https://github.com/redhat-openstack/ansible-rhosp > >> >> > > [3] https://github.com/redhat-openstack/octario > >> >> > > [4] https://github.com/rhosqeauto/InfraRed > >> >> > > [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi > >> >> > > ble-role > >> >> > > [6] https://github.com/openstack/tripleo-quickstart > >> >> > > > >> >> > > _______________________________________________ > >> >> > > rdo-list mailing list > >> >> > > rdo-list at redhat.com > >> >> > > https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > > > >> >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> > _______________________________________________ > >> >> > rdo-list mailing list > >> >> > rdo-list at redhat.com > >> >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> > >> > -- > >> > Regards, > >> > > >> > Christopher Brown > >> > OpenStack Engineer > >> > OCF plc > >> > > >> > Tel: +44 (0)114 257 2200 > >> > Web: www.ocf.co.uk > >> > Blog: blog.ocf.co.uk > >> > Twitter: @ocfplc > >> > > >> > Please note, any emails relating to an OCF Support request must always > >> > be sent to support at ocf.co.uk for a ticket number to be generated or > >> > existing support ticket to be updated. Should this not be done then OCF > >> > cannot be held responsible for requests not dealt with in a timely > >> > manner. > >> > > >> > OCF plc is a company registered in England and Wales. Registered number > >> > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > >> > 2PG. > >> > > >> > This message is private and confidential. If you have received this > >> > message in error, please notify us immediately and remove it from your > >> > system. > >> > > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >> > >> > >> -- > >> Arie Bregman > >> Red Hat Israel > >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > > > > > > > > -- > Arie Bregman > Red Hat Israel > Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview > > > > > -- > Tal Kammer > Associate manager, automation and infrastracture, Openstack platform. 
> Red Hat Israel
> Automation group mojo: https://mojo.redhat.com/docs/DOC-1011659
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> --
> {Kind regards | Mit besten Grüßen},
>
> Frank
>
> ________________________________
> Frank Zdarsky | NFV Partner Engineering | Office of Technology | Red Hat
> e: fzdarsky at redhat.com | irc: fzdarsky @freenode | m: +49 175 82 11 64 4 | t: +49 711 96 43 70 02
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From dms at redhat.com Wed Aug 3 15:36:57 2016
From: dms at redhat.com (David Moreau Simard)
Date: Wed, 3 Aug 2016 11:36:57 -0400
Subject: [rdo-list] Multiple tools for deploying and testing TripleO
In-Reply-To:
References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk>
Message-ID:

Please hear me out. TL;DR, let's work upstream and make it awesome so that downstream can be awesome.

I've said this before, but I'm going to reiterate that I do not understand why there is so much effort spent around testing TripleO downstream. By downstream, I mean anything that isn't in TripleO or TripleO-CI proper. All this work should be done upstream to make TripleO and its CI super awesome, and this would trickle down for free downstream.

The RDO Trunk testing pipeline is composed of two tools today. The TripleO-Quickstart project [1] is a good example of an initiative that started downstream but always had the intention of being proposed upstream [2] after being "incubated" and fleshed out. The WeIRDO project [3] is nothing less than a simple and dumb package and repository installer and runs upstream gate jobs as-is.

I'd like to share an example of the benefits (for everyone involved) that shifting focus from downstream to upstream can accomplish. About a year ago, Packstack was installed by Khaleesi (now InfraRed, as I understand it?) to test RDO packages. Khaleesi is a tool that has knowledge of how to deploy multiple installers, where to deploy them, how to configure them and how to test them. This results in, objectively, a complex and complicated tool because it does not serve a single purpose - not the UNIX philosophy of "do one thing and do it well".

So, in the context of Packstack, Khaleesi could install it in a variety of ways. The important part of the story is that Khaleesi would also install, configure and run Tempest itself after the Packstack installation was complete. But Packstack already supported installing Tempest -- except it was honestly largely broken and missing features. So I went out and massively refactored the support for Tempest in Packstack. It was a lot of work; it took several months of on-and-off effort.

In the end, this Tempest support refactor enabled us to add integration gate jobs that test Packstack in various configurations [5] -- upstream. Before we added those integration jobs, we were mostly blind when merging commits to Packstack: trunk and users were (unfortunately) often broken. Today, these jobs are critical and relied on when merging commits, whether small or incredibly large refactors [6].
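To make that pattern concrete, here is roughly all a job of this style has to do on a test node. This is a sketch rather than actual WeIRDO code, and "run_tests.sh" stands in for whatever entry point the upstream project's gate actually invokes:

    # Point the node at the RDO trunk repositories under test
    # (the same repo files the RDO quickstart documentation uses).
    sudo yum -y install yum-plugin-priorities
    cd /etc/yum.repos.d/
    sudo wget http://trunk.rdoproject.org/centos7/delorean-deps.repo
    sudo wget http://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo

    # From here the upstream project drives everything: fetch it and hand
    # control to its own integration test entry point (placeholder name).
    cd ~ && git clone https://github.com/openstack/packstack
    cd packstack && ./run_tests.sh

Everything interesting lives in the upstream project's own testing framework, which is why the wrapper stays so low maintenance.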
And the best thing about that is that we're pulling in those three integration tests for free in RDO CI with WeIRDO just like the puppet-openstack-integration tests and their great coverage [7]. Back to my original point, I hope this example shows that by shifting our focus upstream, we can make upstream better and this will not only improve the project but it will also directly benefit downstream. I'm convinced we can all agree that if upstream is awesome, so will downstream be. Now, I'm not crazy. I realize that, in the end, Red Hat has a product and that product may have higher quality standards than the upstream equivalent. However, I still strongly believe that we should focus on tooling that is first adopted - and used - upstream, ensure that tooling is extendable and pluggable so that downstream can hook in their additional testing if they wish. [1]: https://github.com/openstack/tripleo-quickstart [2]: http://specs.openstack.org/openstack/tripleo-specs/specs/mitaka/tripleo-quickstart.html [3]: https://github.com/rdo-infra/weirdo [4]: https://github.com/openstack/packstack#packstack-integration-tests [5]: https://www.redhat.com/archives/rdo-list/2016-March/msg00007.html [6]: https://review.openstack.org/#/c/329505/ [7]: https://github.com/openstack/puppet-openstack-integration#description David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 3, 2016 at 9:00 AM, Wesley Hayutin wrote: > > > On Wed, Aug 3, 2016 at 5:23 AM, Frank Zdarsky wrote: >> >> On Wed, Aug 3, 2016 at 9:28 AM, Tal Kammer wrote: >>> >>> Thanks Arie for starting this discussion! (and sorry for joining in late) >>> >>> Some detailed explanations for those not familiar with InfraRed and would >>> like to get the highlights: (Feel free to skip to my inline comments if you >>> are familiar / too long of a post :) ). >>> >>> The InfraRed project is an Ansible based project comprised from three >>> distinct tools: (currently under the InfraRed[1] project, and being split >>> into their own standalone project soon). >>> >>> 1. ir-provisioner - responsible for building up a custom environment - >>> you can affect the memory, CPU, HDD size, number of HDD each node has + >>> number of networks (with/without DHCP) so for example one can deploy the >>> following topology: (only an example to show the versatile options) >>> 1 undercloud on RHEL 7 with 16GB of ram + 60GB of HDD with 3 network >>> interfaces. >>> 3 controllers on Centos 7 with 8GB of ram + 40GB of HDD >>> 2 compute on Centos 7 with 6GB of ram + 60GB HDD >>> 3 ceph nodes on RHEL 7 with 4GB of ram + 2 HDD one with 20GB + one with >>> 40GB >>> >>> Example usage: (setting up the above VMs with four different HW specs) >>> ir-provisioner virsh --host-address= >>> --host-key= >>> --topology-nodes=undercloud:1,controller:3,compute:2,ceph:3 >>> >>> *Note: while it is written "controller/compute/ceph" this is just setting >>> up VMs, the names act more as a reference to the user of what is the role of >>> each node. >>> The installation of Openstack is done with a dedicated tool called >>> `ir-installer` (next) >>> >>> 2. ir-installer - responsible for installing the product - supports >>> "quickstart" mode (setting up a working environment in ~30 minutes) or E2E >>> mode which does a full installation in ~1h. >>> The installation process is completely customized. You can supply your >>> own heat templates / overcloud invocation / undercloud.conf file to use / >>> etc. 
You can also just run a specific task (using ansible --tags), so if you >>> have a deployment ready and just need to run say, the introspection phase, >>> you can fully choose what to run and what to skip even. >>> >>> 3. ir-tester - responsible for installing / configuring / running the >>> tests - this project is meant to hold all testing tools we use so a user >>> will be able to run any testing utility he would like without the need to >>> "dive in". we supply a simple interface requesting simple to choose the >>> testing tool and the set of tests one wishes to run and we'll do the work >>> for him :) >>> >>> More info about InfraRed can be found here[1] (though I must admit that >>> we still need some "love" around our docs) >>> >>> [1] - http://infrared.readthedocs.io/en/latest/ >>> >> >> I agree cleanly separating and modularizing infrastructure provisioining >> (for some use cases like NFV ideally with mixed VM and baremetal >> environments), OS installation and testing is a good approach. In that >> context, what happened to Khaleesi [0]? Haven't seen that mentioned on the >> original list. Nor Apex [1], the installer from OPNFV that does not only >> address OpenStack itself, but also integration with complementary projects >> like ODL? > > > I consider provisioning baremetal for either libvirt hosts or baremetal > installs is a separate project and conversation from the CI tools used to > deploy tripleo. > To be a little more clear, provisioning imho should be completely uncoupled > from the CI code that deploys tripleo. Any team should be able to use > provisioner tools that best fit their needs. > If there is interest in collaborating on provisioners maybe we can do that > in another thread. Why in another thread? Again my thought is they should > be uncoupled completely from our CI source code. > > The rdo infra team is using two provisioners, both completely uncoupled from > tripleo-quickstart. > > 1. Ansible extra heat provisioner: for ovb, written by Mathieu Bultel > https://github.com/ansible/ansible-modules-extras/commit/c6a45234e094922786de78a83db3028804cc9141 > > 2. cico, the official client for ci.centos written by David Simard > http://python-cicoclient.readthedocs.io/en/latest/cli_usage.html > > Our baremetal deployments do not require any provisioning outside of what is > done w/ ironic and tripleo. > > Upstream tripleo ci provisioning is done via node pool and ovb. > > FYI.. khaleesi has been deprecated and is no longer used by RDO infra. > > Thanks > >> >> >> [0] https://github.com/redhat-openstack/khaleesi >> [1] https://wiki.opnfv.org/display/apex/Apex >> >>> >>> >>> On Tue, Aug 2, 2016 at 9:52 PM, Wesley Hayutin >>> wrote: >>>> >>>> >>>> >>>> On Tue, Aug 2, 2016 at 1:51 PM, Arie Bregman >>>> wrote: >>>>> >>>>> On Tue, Aug 2, 2016 at 3:53 PM, Wesley Hayutin >>>>> wrote: >>>>> > >>>>> > >>>>> > On Tue, Aug 2, 2016 at 4:58 AM, Arie Bregman >>>>> > wrote: >>>>> >> >>>>> >> It became a discussion around the official installer and how to >>>>> >> improve it. While it's an important discussion, no doubt, I actually >>>>> >> want to focus on our automation and CI tools. >>>>> >> >>>>> >> Since I see there is an agreement that collaboration does make sense >>>>> >> here, let's move to the hard questions :) >>>>> >> >>>>> >> Wes, Tal - there is huge difference right now between infrared and >>>>> >> tripleo-quickstart in their structure. One is all-in-one project and >>>>> >> the other one is multiple micro projects managed by one project. 
Do >>>>> >> you think there is a way to consolidate or move to a different model >>>>> >> which will make sense for both RDO and RHOSP? something that both >>>>> >> groups can work on. >>>>> > >>>>> > >>>>> > I am happy to be part of the discussion, and I am also very willing >>>>> > to help >>>>> > and try to drive suggestions to the tripleo-quickstart community. >>>>> > I need to make a point clear though, just to make sure we're on the >>>>> > same >>>>> > page.. I do not own oooq, I am not a core on oooq. >>>>> > I can help facilitate a discussion but oooq is an upstream tripleo >>>>> > tool that >>>>> > replaces instack-virt-setup [1]. >>>>> > It also happens to be a great tool for easily deploying TripleO end >>>>> > to end >>>>> > [3] >>>>> > >>>>> > What I *can* do is show everyone how to manipulate tripleo-quickstart >>>>> > and >>>>> > customize it with composable ansible roles, templates, settings etc.. >>>>> > This would allow any upstream or downstream project to override the >>>>> > native >>>>> > oooq roles and *any* step that does not work for another group w/ 3rd >>>>> > party >>>>> > roles [2]. >>>>> > These 3rd party roles can be free and opensource or internal only, it >>>>> > works >>>>> > either way. >>>>> > This was discussed in depth as part of the production chain meetings, >>>>> > the >>>>> > message may have been lost unfortunately. >>>>> > >>>>> > I hope this resets your expectations of what I can and can not do as >>>>> > part of >>>>> > these discussions. >>>>> > Let me know where and when and I'm happy to be part of the >>>>> > discussion. >>>>> >>>>> Thanks for clarifying :) >>>>> >>>>> Next reasonable step would probably be to propose some sort of >>>>> blueprint for tripleo-quickstart to include some of InfraRed features >>>>> and by that have one tool driven by upstream development that can be >>>>> either cloned downstream or used as it is with an internal data >>>>> project. >>>> >>>> >>>> Sure.. a blueprint would help everyone understand the feature and the >>>> motivation. >>>> You could also just plug in the feature you are looking for to oooq and >>>> see if it meets >>>> your requirements. See below. >>> >>> >>> While I think a blueprint is a good starting point, I'm afraid that our >>> approach for provisioning machines is completely different so I'm not sure >>> how to propose such a blueprint as it will probably require quite the design >>> change from today's approach. >>> >>>> >>>> >>>>> >>>>> >>>>> OR >>>>> >>>>> have InfraRed pushed into tripleo/openstack namespace and expose it to >>>>> the RDO community (without internal data of course). Personally, I >>>>> really like the pluggable[1] structure (which allows it to actually >>>>> consume tripleo-quickstart) so I'm not sure if it can be really merged >>>>> with tripleo-quickstart as proposed in the first option. >>> >>> >>> I must admit that I like this option better as it introduces a tool to >>> upstream and let the community drive it further / get more feedback on how >>> to improve. >>> A second benefit might be that by introducing a new concept / design we >>> can take what is best from two worlds and improve. >>> I would love to see an open discussion on the tool upstream and how we >>> can improve the overall process. >>> >>>> >>>> The way oooq is built one can plugin or override any part at run time >>>> with custom playbooks, roles, and config. There isn't anything that >>>> needs to be >>>> checked in directly to oooq to use it. 
>>>> >>>> It's designed such that third parties can make their own decisions to >>>> use something >>>> native to quickstart, something from our role library, or something >>>> completely independent. >>>> This allows teams, individuals or whom ever to do what they need to with >>>> out having to fork or re-roll the entire framework. >>>> >>>> The important step is to note that these 3rd party roles or >>>> (oooq-extras) incubate, mature and then graduate to github/openstack. >>>> The upstream openstack community should lead, evaluate, and via >>>> blueprints vote on the canonical CI tool set. >>>> >>>> We can record a demonstration if required, but there is nothing stopping >>>> anyone right now from >>>> doing this today. I'm just browsing the role library for an example, I >>>> had no idea [1] existed. >>>> Looks like Raoul had a requirement and just made it work. >>>> >>>> Justin, from the browbeat project has graciously created some >>>> documentation regarding 3rd party roles. >>>> It has yet to merge, but it should help illustrate how these roles are >>>> used. [2] >>>> >>>> Thanks Arie for leading the discussion. >>>> >>>> [1] >>>> https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate-ha >>>> [2] https://review.openstack.org/#/c/346733/ >>>> >>>> >>>> >>>> >>>> >>>>> >>>>> >>>>> I like the second option, although it still forces us to have two >>>>> tools, but after a period of time, I believe it will be clear what the >>>>> community prefers, which will allow us to remove one of the projects >>>>> eventually. >>>>> >>>>> So, unless there are other ideas, I think the next move should be made >>>>> by Tal. >>>>> >>>>> Tal, I'm willing to help with whatever is needed. >>> >>> >>> Thanks Arie for starting this discussion again, I believe we have still >>> much work ahead of us but this is definitely a step in the right direction. >>> >>>>> >>>>> >>>>> [1] http://infrared.readthedocs.io/en/latest >>>>> >>>>> > >>>>> > Thanks >>>>> > >>>>> > [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart >>>>> > [2] >>>>> > >>>>> > https://github.com/redhat-openstack/?utf8=%E2%9C%93&query=ansible-role-tripleo >>>>> > [3[ https://www.rdoproject.org/tripleo/ >>>>> > >>>>> > >>>>> >> >>>>> >> >>>>> >> Raoul - I totally agree with you, especially with "difficult for >>>>> >> anyone to start contributing and collaborate". This is exactly why >>>>> >> this discussion started. If we can agree on one set of tools, it >>>>> >> will >>>>> >> make everyone's life easier - current groups, new contributors, >>>>> >> folks >>>>> >> that just want to deploy TripleO quickly. But I'm afraid some >>>>> >> sacrifices need to be made by both groups. >>>>> >> >>>>> >> David - I thought WeiRDO is used only for packstack, so I apologize >>>>> >> I >>>>> >> didn't include it. It does sound like an anther testing project, is >>>>> >> there a place to merge it with another existing testing project? >>>>> >> like >>>>> >> Octario for example or one of TripleO testing projects. Or does it >>>>> >> make sense to keep it a standalone project? >>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>> >> On Tue, Aug 2, 2016 at 11:12 AM, Christopher Brown >>>>> >> >>>>> >> wrote: >>>>> >> > Hello RDOistas (I think that is the expression?), >>>>> >> > >>>>> >> > Another year, another OpenStack deployment tool. 
:) >>>>> >> > >>>>> >> > On Mon, 2016-08-01 at 18:59 +0100, Ignacio Bravo wrote: >>>>> >> >> If we are talking about tools, I would also want to add something >>>>> >> >> with regards to user interface of these tools. This is based on >>>>> >> >> my >>>>> >> >> own experience: >>>>> >> >> >>>>> >> >> I started trying to deploy Openstack with Staypuft and The >>>>> >> >> Foreman. >>>>> >> >> The UI of The Foreman was intuitive enough for the discovery and >>>>> >> >> provisioning of the servers. The OpenStack portion, not so much. >>>>> >> > >>>>> >> > This is exactly mine also. I think this works really well in very >>>>> >> > large >>>>> >> > enterprise environments where you need to split out services over >>>>> >> > more >>>>> >> > than three controllers. You do need good in-house puppet skills >>>>> >> > though >>>>> >> > so better for enterprise with a good sysadmin team. >>>>> >> > >>>>> >> >> Forward a couple of releases and we had a TripleO GUI (Tuskar, I >>>>> >> >> believe) that allowed you to graphically build your Openstack >>>>> >> >> cloud. >>>>> >> >> That was a reasonable good GUI for Openstack. >>>>> >> > >>>>> >> > Well, I found it barely usable. It was only ever good as a >>>>> >> > graphical >>>>> >> > representiation of what the build was doing. Interacting with it >>>>> >> > was >>>>> >> > not great. >>>>> >> > >>>>> >> >> Following that, TripleO become a script based installer, that >>>>> >> >> required experience in Heat templates. I know I didn?t have it >>>>> >> >> and >>>>> >> >> had to ask in the mailing list about how to present this or >>>>> >> >> change >>>>> >> >> that. I got a couple of installs working with this setup. >>>>> >> > >>>>> >> > Works well now that I understand all the foibles and have invested >>>>> >> > time >>>>> >> > into understanding heat templates and puppet modules. Its good in >>>>> >> > that >>>>> >> > it forces you to learn about orchestration which is such an >>>>> >> > important >>>>> >> > end-goal of cloud environments. >>>>> >> > >>>>> >> >> In the last session in Austin, my goal was to obtain information >>>>> >> >> on >>>>> >> >> how others were installing Openstack. I was pointed to Fuel as an >>>>> >> >> alternative. I tried it up, and it just worked. It had the >>>>> >> >> discovering capability from The Foreman, and the configuration >>>>> >> >> options from TripleO. I understand that is based in Ansible and >>>>> >> >> because of that, it is not fully CentOS ready for all the nodes >>>>> >> >> (at >>>>> >> >> least not in version 9 that I tried). In any case, as a deployer >>>>> >> >> and >>>>> >> >> installer, it is the most well rounded tool that I found. >>>>> >> > >>>>> >> > This is interesting to know. I've heard of Fuel of course but >>>>> >> > there are >>>>> >> > some politics involved - it still has the team:single-vendor tag >>>>> >> > but >>>>> >> > from what I see Mirantis are very keen for it to become the >>>>> >> > default >>>>> >> > OpenStack installer. I don't think being Ansible-based should be a >>>>> >> > problem - we are deploying OpenShift on OpenStack which uses >>>>> >> > Openshift- >>>>> >> > ansible - this recently moved to Ansible 2.1 without too much >>>>> >> > disruption. >>>>> >> > >>>>> >> >> I?d love to see RDO moving into that direction, and having an >>>>> >> >> easy to >>>>> >> >> use, end user ready deployer tool. >>>>> >> > >>>>> >> > If its as good as you say its definitely worth evaluating. 
From >>>>> >> > our >>>>> >> > point of view, we want to be able to add services to the pacemaker >>>>> >> > cluster with some ease - for example Magnum and Sahara - and it >>>>> >> > looks >>>>> >> > like there are steps being taken with regards to composable roles >>>>> >> > and >>>>> >> > simplification of the pacemaker cluster to just core services. >>>>> >> > >>>>> >> > But if someone can explain that better I would appreciate it. >>>>> >> > >>>>> >> > Regards >>>>> >> > >>>>> >> >> IB >>>>> >> >> >>>>> >> >> >>>>> >> >> __ >>>>> >> >> Ignacio Bravo >>>>> >> >> LTG Federal, Inc >>>>> >> >> www.ltgfederal.com >>>>> >> >> >>>>> >> >> >>>>> >> >> > On Aug 1, 2016, at 1:07 PM, David Moreau Simard >>>>> >> >> > >>>>> >> >> > wrote: >>>>> >> >> > >>>>> >> >> > The vast majority of RDO's CI relies on using upstream >>>>> >> >> > installation/deployment projects in order to test installation >>>>> >> >> > of >>>>> >> >> > RDO >>>>> >> >> > packages in different ways and configurations. >>>>> >> >> > >>>>> >> >> > Unless I'm mistaken, TripleO Quickstart was originally created >>>>> >> >> > as a >>>>> >> >> > mean to "easily" install TripleO in different topologies >>>>> >> >> > without >>>>> >> >> > requiring a massive amount of hardware. >>>>> >> >> > This project allows us to test TripleO in virtual deployments >>>>> >> >> > on >>>>> >> >> > just >>>>> >> >> > one server instead of, say, 6. >>>>> >> >> > >>>>> >> >> > There's also WeIRDO [1] which was left out of your list. >>>>> >> >> > WeIRDO is super simple and simply aims to run upstream gate >>>>> >> >> > jobs >>>>> >> >> > (such >>>>> >> >> > as puppet-openstack-integration [2][3] and packstack [4][5]) >>>>> >> >> > outside >>>>> >> >> > of the gate. >>>>> >> >> > It'll install dependencies that are expected to be there (i.e, >>>>> >> >> > usually >>>>> >> >> > set up by the openstack-infra gate preparation jobs), set up >>>>> >> >> > the >>>>> >> >> > trunk >>>>> >> >> > repositories we're interested in testing and the rest is >>>>> >> >> > handled by >>>>> >> >> > the upstream project testing framework. >>>>> >> >> > >>>>> >> >> > The WeIRDO project is /very/ low maintenance and brings an >>>>> >> >> > exceptional >>>>> >> >> > amount of coverage and value. >>>>> >> >> > This coverage is important because RDO provides OpenStack >>>>> >> >> > packages >>>>> >> >> > or >>>>> >> >> > projects that are not necessarily used by TripleO and the >>>>> >> >> > reality >>>>> >> >> > is >>>>> >> >> > that not everyone deploying OpenStack on CentOS with RDO will >>>>> >> >> > be >>>>> >> >> > using >>>>> >> >> > TripleO. >>>>> >> >> > >>>>> >> >> > Anyway, sorry for sidetracking but back to the topic, thanks >>>>> >> >> > for >>>>> >> >> > opening the discussion. >>>>> >> >> > >>>>> >> >> > What honestly perplexes me is the situation of CI in RDO and >>>>> >> >> > OSP, >>>>> >> >> > especially around TripleO/Director, is the amount of work that >>>>> >> >> > is >>>>> >> >> > spent downstream. >>>>> >> >> > And by downstream, here, I mean anything that isn't in TripleO >>>>> >> >> > proper. >>>>> >> >> > >>>>> >> >> > I keep dreaming about how awesome upstream TripleO CI would be >>>>> >> >> > if >>>>> >> >> > all >>>>> >> >> > that effort was spent directly there instead -- and then that >>>>> >> >> > all >>>>> >> >> > work >>>>> >> >> > could bear fruit and trickle down downstream for free. 
>>>>> >> >> > Exactly like how we keep improving the testing coverage in >>>>> >> >> > puppet-openstack-integration, it's automatically pulled in RDO >>>>> >> >> > CI >>>>> >> >> > through WeIRDO for free. >>>>> >> >> > We make the upstream better and we benefit from it >>>>> >> >> > simultaneously: >>>>> >> >> > everyone wins. >>>>> >> >> > >>>>> >> >> > [1]: https://github.com/rdo-infra/weirdo >>>>> >> >> > [2]: >>>>> >> >> > https://github.com/rdo-infra/ansible-role-weirdo-puppet-openst >>>>> >> >> > ack >>>>> >> >> > [3]: >>>>> >> >> > https://github.com/openstack/puppet-openstack-integration#desc >>>>> >> >> > ription >>>>> >> >> > [4]: https://github.com/rdo-infra/ansible-role-weirdo-packstack >>>>> >> >> > [5]: >>>>> >> >> > https://github.com/openstack/packstack#packstack-integration-t >>>>> >> >> > ests >>>>> >> >> > >>>>> >> >> > David Moreau Simard >>>>> >> >> > Senior Software Engineer | Openstack RDO >>>>> >> >> > >>>>> >> >> > dmsimard = [irc, github, twitter] >>>>> >> >> > >>>>> >> >> > David Moreau Simard >>>>> >> >> > Senior Software Engineer | Openstack RDO >>>>> >> >> > >>>>> >> >> > dmsimard = [irc, github, twitter] >>>>> >> >> > >>>>> >> >> > >>>>> >> >> > On Mon, Aug 1, 2016 at 11:21 AM, Arie Bregman >>>>> >> >> > >>>>> >> >> > wrote: >>>>> >> >> > > Hi, >>>>> >> >> > > >>>>> >> >> > > I would like to start a discussion on the overlap between >>>>> >> >> > > tools >>>>> >> >> > > we >>>>> >> >> > > have for deploying and testing TripleO (RDO & RHOSP) in CI. >>>>> >> >> > > >>>>> >> >> > > Several months ago, we worked on one common framework for >>>>> >> >> > > deploying >>>>> >> >> > > and testing OpenStack (RDO & RHOSP) in CI. I think you can >>>>> >> >> > > say it >>>>> >> >> > > didn't work out well, which eventually led each group to >>>>> >> >> > > focus on >>>>> >> >> > > developing other existing/new tools. >>>>> >> >> > > >>>>> >> >> > > What we have right now for deploying and testing >>>>> >> >> > > -------------------------------------------------------- >>>>> >> >> > > === Component CI, Gating === >>>>> >> >> > > I'll start with the projects we created, I think that's only >>>>> >> >> > > fair >>>>> >> >> > > :) >>>>> >> >> > > >>>>> >> >> > > * Ansible-OVB[1] - Provisioning Tripleo heat stack, using the >>>>> >> >> > > OVB >>>>> >> >> > > project. >>>>> >> >> > > >>>>> >> >> > > * Ansible-RHOSP[2] - Product installation (RHOSP). Branch per >>>>> >> >> > > release. >>>>> >> >> > > >>>>> >> >> > > * Octario[3] - Testing using RPMs (pep8, unit, functional, >>>>> >> >> > > tempest, >>>>> >> >> > > csit) + Patching RPMs with submitted code. >>>>> >> >> > > >>>>> >> >> > > === Automation, QE === >>>>> >> >> > > * InfraRed[4] - provision install and test. Pluggable and >>>>> >> >> > > modular, >>>>> >> >> > > allows you to create your own provisioner, installer and >>>>> >> >> > > tester. >>>>> >> >> > > >>>>> >> >> > > As far as I know, the groups is working now on different >>>>> >> >> > > structure of >>>>> >> >> > > one main project and three sub projects (provision, install >>>>> >> >> > > and >>>>> >> >> > > test). >>>>> >> >> > > >>>>> >> >> > > === RDO === >>>>> >> >> > > I didn't use RDO tools, so I apologize if I got something >>>>> >> >> > > wrong: >>>>> >> >> > > >>>>> >> >> > > * About ~25 micro independent Ansible roles[5]. You can >>>>> >> >> > > either >>>>> >> >> > > choose >>>>> >> >> > > to use one of them or several together. They are used for >>>>> >> >> > > provisioning, installing and testing Tripleo. 
>>>>> >> >> > > >>>>> >> >> > > * Tripleo-quickstart[6] - uses the micro roles for deploying >>>>> >> >> > > tripleo >>>>> >> >> > > and test it. >>>>> >> >> > > >>>>> >> >> > > As I said, I didn't use the tools, so feel free to add more >>>>> >> >> > > information you think is relevant. >>>>> >> >> > > >>>>> >> >> > > === More? === >>>>> >> >> > > I hope not. Let us know if are familiar with more tools. >>>>> >> >> > > >>>>> >> >> > > Conclusion >>>>> >> >> > > -------------- >>>>> >> >> > > So as you can see, there are several projects that eventually >>>>> >> >> > > overlap >>>>> >> >> > > in many areas. Each group is basically using the same tasks >>>>> >> >> > > (provision >>>>> >> >> > > resources, build/import overcloud images, run tempest, >>>>> >> >> > > collect >>>>> >> >> > > logs, >>>>> >> >> > > etc.) >>>>> >> >> > > >>>>> >> >> > > Personally, I think it's a waste of resources. For each task >>>>> >> >> > > there is >>>>> >> >> > > at least two people from different groups who work on exactly >>>>> >> >> > > the >>>>> >> >> > > same >>>>> >> >> > > task. The most recent example I can give is OVB. As far as I >>>>> >> >> > > know, >>>>> >> >> > > both groups are working on implementing it in their set of >>>>> >> >> > > tools >>>>> >> >> > > right >>>>> >> >> > > now. >>>>> >> >> > > >>>>> >> >> > > On the other hand, you can always claim: "we already tried to >>>>> >> >> > > work on >>>>> >> >> > > the same framework, we failed to do it successfully" - right, >>>>> >> >> > > but >>>>> >> >> > > maybe with better ground rules we can manage it. We would >>>>> >> >> > > defiantly >>>>> >> >> > > benefit a lot from doing that. >>>>> >> >> > > >>>>> >> >> > > What's next? >>>>> >> >> > > ---------------- >>>>> >> >> > > So first of all, I would like to hear from you if you think >>>>> >> >> > > that >>>>> >> >> > > we >>>>> >> >> > > can collaborate once again or is it actually better to keep >>>>> >> >> > > it as >>>>> >> >> > > it >>>>> >> >> > > is now. >>>>> >> >> > > >>>>> >> >> > > If you agree that collaboration here makes sense, maybe you >>>>> >> >> > > have >>>>> >> >> > > ideas >>>>> >> >> > > on how we can do it better this time. >>>>> >> >> > > >>>>> >> >> > > I think that setting up a meeting to discuss the right >>>>> >> >> > > architecture >>>>> >> >> > > for the project(s) and decide on good review/gating process, >>>>> >> >> > > would be >>>>> >> >> > > a good start. >>>>> >> >> > > >>>>> >> >> > > Please let me know what do you think and keep in mind that >>>>> >> >> > > this >>>>> >> >> > > is not >>>>> >> >> > > about which tool is better!. As you can see I didn't mention >>>>> >> >> > > the >>>>> >> >> > > time >>>>> >> >> > > it takes for each tool to deploy and test, and also not the >>>>> >> >> > > full >>>>> >> >> > > feature list it supports. >>>>> >> >> > > If possible, we should keep it about collaborating and not >>>>> >> >> > > choosing >>>>> >> >> > > the best tool. Our solution could be the combination of two >>>>> >> >> > > or >>>>> >> >> > > more >>>>> >> >> > > tools eventually (tripleo-red, infra-quickstart? :D ) >>>>> >> >> > > >>>>> >> >> > > "You may say I'm a dreamer, but I'm not the only one. 
I hope >>>>> >> >> > > some >>>>> >> >> > > day >>>>> >> >> > > you'll join us and the infra will be as one" :) >>>>> >> >> > > >>>>> >> >> > > [1] https://github.com/redhat-openstack/ansible-ovb >>>>> >> >> > > [2] https://github.com/redhat-openstack/ansible-rhosp >>>>> >> >> > > [3] https://github.com/redhat-openstack/octario >>>>> >> >> > > [4] https://github.com/rhosqeauto/InfraRed >>>>> >> >> > > [5] >>>>> >> >> > > https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansi >>>>> >> >> > > ble-role >>>>> >> >> > > [6] https://github.com/openstack/tripleo-quickstart >>>>> >> >> > > >>>>> >> >> > > _______________________________________________ >>>>> >> >> > > rdo-list mailing list >>>>> >> >> > > rdo-list at redhat.com >>>>> >> >> > > https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >> >> > > >>>>> >> >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >> >> > _______________________________________________ >>>>> >> >> > rdo-list mailing list >>>>> >> >> > rdo-list at redhat.com >>>>> >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >> >> > >>>>> >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >> >> >>>>> >> > -- >>>>> >> > Regards, >>>>> >> > >>>>> >> > Christopher Brown >>>>> >> > OpenStack Engineer >>>>> >> > OCF plc >>>>> >> > >>>>> >> > Tel: +44 (0)114 257 2200 >>>>> >> > Web: www.ocf.co.uk >>>>> >> > Blog: blog.ocf.co.uk >>>>> >> > Twitter: @ocfplc >>>>> >> > >>>>> >> > Please note, any emails relating to an OCF Support request must >>>>> >> > always >>>>> >> > be sent to support at ocf.co.uk for a ticket number to be generated >>>>> >> > or >>>>> >> > existing support ticket to be updated. Should this not be done >>>>> >> > then OCF >>>>> >> > cannot be held responsible for requests not dealt with in a timely >>>>> >> > manner. >>>>> >> > >>>>> >> > OCF plc is a company registered in England and Wales. Registered >>>>> >> > number >>>>> >> > 4132533, VAT number GB 780 6803 14. Registered office address: OCF >>>>> >> > plc, >>>>> >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield >>>>> >> > S35 >>>>> >> > 2PG. >>>>> >> > >>>>> >> > This message is private and confidential. If you have received >>>>> >> > this >>>>> >> > message in error, please notify us immediately and remove it from >>>>> >> > your >>>>> >> > system. >>>>> >> > >>>>> >> > _______________________________________________ >>>>> >> > rdo-list mailing list >>>>> >> > rdo-list at redhat.com >>>>> >> > https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >> > >>>>> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >> >>>>> >> >>>>> >> >>>>> >> -- >>>>> >> Arie Bregman >>>>> >> Red Hat Israel >>>>> >> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview >>>>> > >>>>> > >>>>> >>>>> >>>>> >>>>> -- >>>>> Arie Bregman >>>>> Red Hat Israel >>>>> Component CI: https://mojo.redhat.com/groups/rhos-core-ci/overview >>>> >>>> >>> >>> >>> >>> -- >>> Tal Kammer >>> Associate manager, automation and infrastracture, Openstack platform. 
>>> Red Hat Israel >>> Automation group mojo: https://mojo.redhat.com/docs/DOC-1011659 >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> >> -- >> {Kind regards | Mit besten Gr??en}, >> >> Frank >> >> ________________________________ >> Frank Zdarsky | NFV Partner Engineering | Office of Technology | Red Hat >> e: fzdarsky at redhat.com | irc: fzdarsky @freenode | m: +49 175 82 11 64 4 | >> t: +49 711 96 43 70 02 > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dms at redhat.com Wed Aug 3 16:14:32 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 3 Aug 2016 12:14:32 -0400 Subject: [rdo-list] FYI: New slaves now in use on ci.centos.org for RDO Message-ID: Hi, We've tested successfully three new slaves out of the "beta" OpenStack cloud on ci.centos.org. We're going to be lowering the amount of threads on our existing slave and spread the load evenly across the 4 slaves. The objective is two-fold: - Spread load evenly across four slaves rather than one: redundancy and additional capacity/concurrency - Test real workloads on the ci.centos.org OpenStack cloud before it is opened up to additional tenants I will be monitoring closely (moreso than usual) the jobs but everything /should/ work. You can tell on which slave a particular job was run from at the very beginning of the console output, it looks like this: "Building remotely on rdo-ci-cloudslave01 (rdo) in workspace [...]" If you notice anything odd about jobs running on the new cloudslaves machines, please let me know directly. Thanks ! David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From whayutin at redhat.com Wed Aug 3 16:34:06 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 3 Aug 2016 12:34:06 -0400 Subject: [rdo-list] FYI: New slaves now in use on ci.centos.org for RDO In-Reply-To: References: Message-ID: +1 Thanks David! On Wed, Aug 3, 2016 at 12:14 PM, David Moreau Simard wrote: > Hi, > > We've tested successfully three new slaves out of the "beta" OpenStack > cloud on ci.centos.org. > We're going to be lowering the amount of threads on our existing slave > and spread the load evenly across the 4 slaves. > > The objective is two-fold: > - Spread load evenly across four slaves rather than one: redundancy > and additional capacity/concurrency > - Test real workloads on the ci.centos.org OpenStack cloud before it > is opened up to additional tenants > > I will be monitoring closely (moreso than usual) the jobs but > everything /should/ work. > You can tell on which slave a particular job was run from at the very > beginning of the console output, it looks like this: > "Building remotely on rdo-ci-cloudslave01 (rdo) in workspace [...]" > > If you notice anything odd about jobs running on the new cloudslaves > machines, please let me know directly. > > Thanks ! 
> > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Wed Aug 3 16:40:24 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 3 Aug 2016 12:40:24 -0400 Subject: [rdo-list] FYI: New slaves now in use on ci.centos.org for RDO In-Reply-To: References: Message-ID: Now that's a lot of running jobs [1] :) [1]: http://i.imgur.com/s7Cq53M.png David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 3, 2016 at 12:14 PM, David Moreau Simard wrote: > Hi, > > We've tested successfully three new slaves out of the "beta" OpenStack > cloud on ci.centos.org. > We're going to be lowering the amount of threads on our existing slave > and spread the load evenly across the 4 slaves. > > The objective is two-fold: > - Spread load evenly across four slaves rather than one: redundancy > and additional capacity/concurrency > - Test real workloads on the ci.centos.org OpenStack cloud before it > is opened up to additional tenants > > I will be monitoring closely (moreso than usual) the jobs but > everything /should/ work. > You can tell on which slave a particular job was run from at the very > beginning of the console output, it looks like this: > "Building remotely on rdo-ci-cloudslave01 (rdo) in workspace [...]" > > If you notice anything odd about jobs running on the new cloudslaves > machines, please let me know directly. > > Thanks ! > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] From jslagle at redhat.com Wed Aug 3 16:51:19 2016 From: jslagle at redhat.com (James Slagle) Date: Wed, 3 Aug 2016 12:51:19 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> Message-ID: <20160803165119.GC25838@localhost.localdomain> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: > Please hear me out. > TL;DR, Let's work upstream and make it awesome so that downstream can > be awesome. > > I've said this before but I'm going to re-iterate that I do not > understand why there is so much effort spent around testing TripleO > downstream. > By downstream, I mean anything that isn't in TripleO or TripleO-CI proper. > > All this work should be done upstream to make TripleO and it's CI > super awesome and this would trickle down for free downstream. > > The RDO Trunk testing pipeline is composed of two tools, today. > The TripleO-Quickstart project [1] is a good example of an initiative > that started downstream but always had the intention of being proposed > upstream [2] after being "incubated" and fleshed out. tripleo-quickstart was proposed to upstream TripleO as a replacement for the virtual environment setup done by instack-virt-setup. 3rd party CI would be used to gate tripleo-quickstart so that we'd be sure the virt setup was always working. That was the extent of the CI scope defined in the spec. That work is not yet completed (see work items in the spec). Now it seems it is a much more all encompassing CI/automation/testing project that is competing in scope with tripleo-ci itself. 
I'm all for consolidation of these types of tools *if* there is interest. However, IMO, incubating these things downstream and then trying to get them upstream or get upstream to adopt them is not ideal or a good example. The same topic came up and was pushed several times with khaleesi, and it just never happened, it was continually DOA upstream. I think it would be fairly difficult to get tripleo-ci to wholesale adopt tripleo-quickstart at this stage. The separate irc channel from #tripleo is not conducive to consolidation on tooling and direction imo. The scope of quickstart is actually not fully understood by myself. I've also heard from some in the upstream TripleO community as well who are confused by its direction and are facing similar difficulties using its generated bash scripts that they'd be facing if they were just using TripleO documentation instead. I do think that this sort of problem lends itself easily to one off implementations as is quite evidenced in this thread. Everyone/group wants and needs to automate something in a different way. And imo, none of these tools are building end-user or operator facing interfaces, so they're not fully focused on building something that "just works for everyone". Those interfaces should be developed in TripleO user facing tooling anyway (tripleoclient/openstackclient/etc). So, I actually think it's ok in some degree that things have been automated differently in different tools. Anecdotally, I suspect many users of TripleO in production have their own automation tools as well. And none of the implementations mentioned in this thread would likely meet their needs either. However, if there is a desire to focus resources on consolidated tooling and someone to drive it forward, then I definitely agree that the effort needs to start upstream with a singular plan for tripleo-ci. From what I gather, that would be some sort of alignment and reuse of tripleo-quickstart, and then we could build from there. That could start as a discussion and plan within that community with some agreed on concensus around that plan. There was an initial thread on openstack-dev related to this topic but it is stalled a bit. It could be continually driven to resolution via specs, the tripleo meeting, email or irc discussion until a plan is formed. -- -- James Slagle -- From jruzicka at redhat.com Wed Aug 3 17:06:54 2016 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Wed, 3 Aug 2016 19:06:54 +0200 Subject: [rdo-list] [Minute] RDO meeting (2016-08-03) Minutes Message-ID: ============================== #rdo: RDO meeting - 2016-08-03 ============================== Meeting started by jruzicka at 15:00:41 UTC. The full logs are available at https://meetbot.fedoraproject.org/rdo/2016-08-03/rdo_meeting_-_2016-08-03.2016-08-03-15.00.log.html . Meeting summary --------------- * jar and exceptions for openstack-sahara-tests (https://bugzilla.redhat.com/show_bug.cgi?id=1318765) (jruzicka, 15:02:23) * ACTION: tosky send a review to document how sahara-tests jars are built (number80, 15:16:10) * proposal: grant sahara-tests bundling exceptions and track progress on unbundling jars (apevec, 15:17:38) * AGREED: grant sahara-tests bundling exceptions and track progress on unbundling jars (number80, 15:19:25) * ACTION: number80 create follow-up card in trello (number80, 15:19:55) * temp CI pipeline for http://trunk.rdoproject.org/centos7-new/ (jruzicka, 15:21:00) * LINK: https://github.com/openstack/kolla/blob/56178a58dc1ea3f9117f4c03893ebafcf0e1f57c/kolla/common/config.py#L29? 
(dmsimard, 15:27:51) * LINK: https://github.com/openstack/kolla/blob/56178a58dc1ea3f9117f4c03893ebafcf0e1f57c/kolla/common/config.py#L29 (dmsimard, 15:27:53) * ACTION: weshay to create temp CI pipeline for http://trunk.rdoproject.org/centos7-new/ (apevec, 15:30:00) * ACTION: jpena to copy puppet-openstack-integration and tripleo-ci odl hashed repos to centos7-newton (apevec, 15:30:43) * LINK: http://trunk.rdoproject.org/centos7-master/current-passed-ci/ already redirects to http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested// (apevec, 15:38:47) * Ideas to improve DLRN instance performance (jruzicka, 15:40:32) * ACTION: jpena to try parallel mock builds (jruzicka, 15:51:27) * chair for next meeting (jruzicka, 15:54:20) * number80 to chair next meeting (jruzicka, 15:55:22) * open floor (jruzicka, 15:55:33) Meeting ended at 15:58:01 UTC. Action Items ------------ * tosky send a review to document how sahara-tests jars are built * number80 create follow-up card in trello * weshay to create temp CI pipeline for http://trunk.rdoproject.org/centos7-new/ * jpena to copy puppet-openstack-integration and tripleo-ci odl hashed repos to centos7-newton * jpena to try parallel mock builds Action Items, by person ----------------------- * jpena * jpena to copy puppet-openstack-integration and tripleo-ci odl hashed repos to centos7-newton * jpena to try parallel mock builds * number80 * number80 create follow-up card in trello * openstack * jpena to copy puppet-openstack-integration and tripleo-ci odl hashed repos to centos7-newton * tosky * tosky send a review to document how sahara-tests jars are built * weshay * weshay to create temp CI pipeline for http://trunk.rdoproject.org/centos7-new/ * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * apevec (87) * number80 (47) * jruzicka (30) * jpena (27) * tosky (21) * sdake (19) * dmsimard (11) * zodbot (8) * rbowen (6) * galiral (6) * social (4) * weshay (4) * flepied (3) * chandankumar (3) * openstack (3) * coolsvap (1) * jrist (1) * imcsk8 (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From dms at redhat.com Wed Aug 3 17:13:34 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 3 Aug 2016 13:13:34 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: <20160803165119.GC25838@localhost.localdomain> References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Wed, Aug 3, 2016 at 12:51 PM, James Slagle wrote: > However, IMO, incubating these things downstream and then trying to get them > upstream or get upstream to adopt them is not ideal or a good example. The same > topic came up and was pushed several times with khaleesi, and it just never > happened, it was continually DOA upstream. Agreed, the work for Quickstart should have been upstream from the beginning. Re-reading my email, I didn't express myself very well. What I meant by saying "a good example" is about proposing improvements or tools upstream in general rather than maintaining external tools downstream. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From rbowen at redhat.com Wed Aug 3 17:28:30 2016 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 3 Aug 2016 13:28:30 -0400 Subject: [rdo-list] [Rdo-newsletter] RDO Community Newsletter, August 2016 Message-ID: (Having trouble with the formatting? 
See the message at https://www.rdoproject.org/newsletter/2016-august/ ) Thanks for being part of the RDO community!

Newton 2

After a lot of work by the TripleO team, we have Newton milestone 2 packages available for RDO. To give it a try, you can use either Packstack or TripleO. To get started, on a fresh CentOS or RHEL VM:

    yum -y install yum-plugin-priorities
    cd /etc/yum.repos.d/
    sudo wget http://trunk.rdoproject.org/centos7/delorean-deps.repo
    sudo wget http://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo

Then for a packstack-based install, start with step 2 of the quickstart, or, for TripleO-based installs, see the TripleO quickstart or the TripleO quickstart USB key. And now, we're forging ahead towards milestone 3, which is scheduled for the week of August 29th. We're planning a test day for September 8th and 9th to ensure that it's the best version of RDO yet.

Community Bloggers

We've had some great blog posts from the RDO community over the last month. You will want to particularly check out these: * Introduction to Red Hat OpenStack Platform Director, by Marcos Garcia. * Improving RDO packaging testing coverage, by David Simard. * How to build new OpenStack packages by Chandan Kumar. * Who is Testing Your Cloud? by Maria Bracho. * Improving the RDO Trunk infrastructure by Javier Pena. If you're blogging about RDO or OpenStack, and want to be included in our weekly blog highlights post on rdo-list, please drop me a note with your blog address. Thanks!

Happy Birthday OpenStack!

Six years ago in July, OpenStack was announced at the O'Reilly Open Source Convention. In that time, it's grown from 2 companies to over 300, and from a handful of projects to more than 50. We hope you had an opportunity to attend one of the many 6th birthday meetups around the world. I attended ours right here in Lexington, Kentucky, where we had about 35 OpenStack enthusiasts from the University of Kentucky College of Engineering, where OpenStack, and a 6PB Ceph cluster, support the various research projects around the University, as well as student projects in the school of Computer Science. We also had a good turnout in Manchester, England, Washington, DC, and many other places. If you attended one of these meetups, do share your photos and event reports with us in the RDO G+ community.

Upcoming Events

Even if you can't make it to the major events where RDO has a presence, we're often at smaller events around the world. We'll be at OpenStack SV next week in Mountain View. Drop by the Red Hat booth and ask about RDO. And we'll be at OpenStack East in New York, August 23rd and 24th. There, too, we'll be in the Red Hat booth. Looking a little further out, we'll also be at the upcoming PyCon in India in September, and at LinuxCon in Berlin in October, as well as, of course, at OpenStack Summit in Barcelona in October. Other RDO events, including the many OpenStack meetups around the world, are always listed at http://rdoproject.org/events If you have an RDO-related event, please feel free to add it by submitting a pull request to https://github.com/rbowen/rh-events/blob/master/2016/RDO-Meetups.yml

TripleO

TripleO - which stands for 'OpenStack On OpenStack' - is an OpenStack deployment and management tool, intended to simplify the process of deploying production OpenStack clouds. TripleO is a big focus in this cycle, and if you haven't gotten up to speed yet, there's a great way to do so. Someone has collected an entire YouTube channel of TripleO videos.
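If you would like to try the TripleO path end to end on a single virtualization host, the tripleo-quickstart tooling discussed elsewhere on this list wraps the setup into a couple of commands. A rough sketch, assuming the quickstart.sh entry point the project shipped at the time and a virt host you can reach over ssh as root (the host name below is a placeholder):

    git clone https://github.com/openstack/tripleo-quickstart.git
    cd tripleo-quickstart
    bash quickstart.sh --install-deps        # pull in ansible and the other dependencies
    export VIRTHOST=virthost.example.com     # placeholder: your virtualization host
    bash quickstart.sh $VIRTHOST             # set up a virtual undercloud on that host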
Next month, in Buffalo, NY, Rain Leander will be giving a talk at Code Daze about TripleO, and being a TripleO contributor. If you're in the area, consider attending. Details are at codedaze.io.

Packaging meetings

Every Wednesday at 15:00 UTC, we have the weekly RDO community meeting on the #RDO channel on Freenode IRC. And at 15:00 UTC Thursdays, we have the CentOS Cloud SIG Meeting on #centos-devel.

Keep in touch

There are lots of ways to stay in touch with what's going on in the RDO community. The best ways are:
WWW * RDO * OpenStack Q&A
Mailing Lists: * rdo-list mailing list * This newsletter
IRC * IRC - #rdo on irc.freenode.net * Puppet module development - #rdo-puppet
Social Media * Follow us on Twitter * Google+ * Facebook
Thanks again for being part of the RDO community! -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From whayutin at redhat.com Wed Aug 3 17:33:08 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 3 Aug 2016 13:33:08 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: <20160803165119.GC25838@localhost.localdomain> References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Wed, Aug 3, 2016 at 12:51 PM, James Slagle wrote: > On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: > > Please hear me out. > > TL;DR, Let's work upstream and make it awesome so that downstream can > > be awesome. > > > > I've said this before but I'm going to re-iterate that I do not > > understand why there is so much effort spent around testing TripleO > > downstream. > > By downstream, I mean anything that isn't in TripleO or TripleO-CI > proper. > > > > All this work should be done upstream to make TripleO and it's CI > > super awesome and this would trickle down for free downstream. > > > > The RDO Trunk testing pipeline is composed of two tools, today. > > The TripleO-Quickstart project [1] is a good example of an initiative > > that started downstream but always had the intention of being proposed > > upstream [2] after being "incubated" and fleshed out. > > tripleo-quickstart was proposed to upstream TripleO as a replacement for the > virtual environment setup done by instack-virt-setup. 3rd party CI would be > used to gate tripleo-quickstart so that we'd be sure the virt setup was > always > working. That was the extent of the CI scope defined in the spec. That > work is > not yet completed (see work items in the spec). > > Now it seems it is a much more all encompassing CI/automation/testing > project > that is competing in scope with tripleo-ci itself. > IMHO you are correct here. There has been quite a bit of discussion about removing the parts of oooq that are outside of the original blueprint to replace instack-virt-setup w/ oooq. As usual there are many different opinions here. I think there are a lot of RDO guys that would prefer a lot of the native oooq roles stay where they are; I think that is short-sighted, imho. I agree that anything outside of the blueprint should be removed from oooq.
This would hopefully allow the upstream to be more comfortable with oooq and allow us to really start consolidating tools. Luckily for the users that still want to use oooq as a full end-to-end solution the 3rd party roles can be used even after tearing out these native roles. > > I'm all for consolidation of these types of tools *if* there is interest. > Roll call.. is there interest? +1 from me. > > However, IMO, incubating these things downstream and then trying to get > them > upstream or get upstream to adopt them is not ideal or a good example. The > same > topic came up and was pushed several times with khaleesi, and it just never > happened, it was continually DOA upstream. > True, however that could be a result of the downstream perceiving barriers ( real or not ) in incubating projects in upstream openstack. > > I think it would be fairly difficult to get tripleo-ci to wholesale adopt > tripleo-quickstart at this stage. The separate irc channel from #tripleo > is not > conducive to consolidation on tooling and direction imo. > The irc channel is easily addressed. We do seem to generate an awful amount of chatter though :) > > The scope of quickstart is actually not fully understood by myself. I've > also > heard from some in the upstream TripleO community as well who are confused > by > its direction and are facing similar difficulties using its generated bash > scripts that they'd be facing if they were just using TripleO documentation > instead. > The point of the generated bash scripts is to create rst documentation and reusable scripts for the end user. Since the documentation and the generated scripts are equivalent I would expect the same errors, problems and issues. I see this as a good thing really. We *want* the CI to hit the same issues as those who are following the doc. > > I do think that this sort of problem lends itself easily to one off > implementations as is quite evidenced in this thread. Everyone/group wants > and > needs to automate something in a different way. And imo, none of these > tools > are building end-user or operator facing interfaces, so they're not fully > focused on building something that "just works for everyone". Those > interfaces > should be developed in TripleO user facing tooling anyway > (tripleoclient/openstackclient/etc). > > So, I actually think it's ok in some degree that things have been automated > differently in different tools. Anecdotally, I suspect many users of > TripleO in > production have their own automation tools as well. And none of the > implementations mentioned in this thread would likely meet their needs > either. > This is true.. without a tool in the upstream that addresses ci, dev, test use cases across the development cycle this will continue to be the case. I suspect even with a perfect tool, it won't ever be perfect for everyone. > > However, if there is a desire to focus resources on consolidated tooling > and > someone to drive it forward, then I definitely agree that the effort needs > to > start upstream with a singular plan for tripleo-ci. From what I gather, > that > would be some sort of alignment and reuse of tripleo-quickstart, and then > we > could build from there. > +1 > > That could start as a discussion and plan within that community with some > agreed on concensus around that plan. There was an initial thread on > openstack-dev related to this topic but it is stalled a bit. 
It could be > continually driven to resolution via specs, the tripleo meeting, email or > irc > discussion until a plan is formed. > +1, I think the first step is to complete the original blueprint and move on from there. I think there has also been interest in having an in person meeting at summit. Thanks! > > -- > -- James Slagle > -- > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Wed Aug 3 18:31:01 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 3 Aug 2016 14:31:01 -0400 Subject: [rdo-list] FYI: New slaves now in use on ci.centos.org for RDO In-Reply-To: References: Message-ID: We noticed that the promotion jobs relied on a property file created locally on the slave. Since promotion jobs rely on that property file to share parameters (such as which trunk repository jobs should be testing) and jobs could be running on any of the four slaves, this was problematic. For the time being, promotion jobs have been pinned to a single slave but we are looking at a way to remove this limitation to benefit from the redundancy and increased capacity. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 3, 2016 at 12:40 PM, David Moreau Simard wrote: > Now that's a lot of running jobs [1] :) > > [1]: http://i.imgur.com/s7Cq53M.png > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Aug 3, 2016 at 12:14 PM, David Moreau Simard wrote: >> Hi, >> >> We've tested successfully three new slaves out of the "beta" OpenStack >> cloud on ci.centos.org. >> We're going to be lowering the amount of threads on our existing slave >> and spread the load evenly across the 4 slaves. >> >> The objective is two-fold: >> - Spread load evenly across four slaves rather than one: redundancy >> and additional capacity/concurrency >> - Test real workloads on the ci.centos.org OpenStack cloud before it >> is opened up to additional tenants >> >> I will be monitoring closely (moreso than usual) the jobs but >> everything /should/ work. >> You can tell on which slave a particular job was run from at the very >> beginning of the console output, it looks like this: >> "Building remotely on rdo-ci-cloudslave01 (rdo) in workspace [...]" >> >> If you notice anything odd about jobs running on the new cloudslaves >> machines, please let me know directly. >> >> Thanks ! >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] From Milind.Gunjan at sprint.com Wed Aug 3 18:40:35 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Wed, 3 Aug 2016 18:40:35 +0000 Subject: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing Message-ID: Hi All, I am currently working on Tripleo Mitaka Openstack deployment on baremetal servers: Undercloud - 1 baremetal server with 2 NIC (1 for provisioning and 2nd for external network connectivity) Controller - 1 baremetal server ( 6 NICs with each openstack VLANs on separate NIC) Compute - 1 baremetal server I followed Graeme's instructions here : https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to set up Undercloud . 
Undercloud deployment was successful and all the images required for overcloud deployment were properly built as per the instruction. I would like to mention that I used libvirt tools to modify the root password on overcloud-full.qcow2 and we also modified the grub file to include "net.ifnames=0 biosdevname=0" to restore old interface naming. I was able to successfully introspect 2 servers to be used for controller and compute nodes. Also, we added the serial device discovered during introspection as the root device:

    ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}'
    ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}'

Next, we added the compute and control tags to the respective introspected nodes with the local boot option:

    ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add properties/capabilities='profile:control,boot_option:local'
    ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add properties/capabilities='profile:compute,boot_option:local'

We used separate NIC templates for the control and compute nodes, which have been attached along with the network-environment.yaml file. The default network isolation template file has been used. The deployment script looks like this:

    #!/bin/bash
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    template_base_dir="$DIR"
    ntpserver= #Sprint LAB
    openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e ${template_base_dir}/environments/network-environment.yaml \
    --control-flavor control --compute-flavor compute \
    --control-scale 1 --compute-scale 1 \
    --ntp-server $ntpserver \
    --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug

The Heat stack deployment goes on for a really long time (more than 4 hours) and gets stuck at the post-deployment configuration.
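When the stack wedges at the post-deployment step like this, the usual next move (also suggested later in this thread) is to log into the overcloud nodes from the undercloud and look at the agent that applies the post-deploy configuration. A minimal sketch, assuming the standard heat-admin user and the ctlplane addresses shown in the capture below:

    # On the undercloud, as the stack user
    source ~/stackrc
    nova list                            # note the ctlplane IPs of the nodes
    ssh heat-admin@192.168.149.9         # controller address from the capture below
    sudo journalctl -l -u os-collect-config | tail -n 200
    sudo tail -n 100 /var/log/messages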
Please find below the capture during install : Every 2.0s: ironic node-list && nova list && heat stack-list && heat resource-list -n5 overcloud | grep -vi complete Wed Aug 3 17:33:37 2016 +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | False | | afcfbee3-3108-48da-a6da-aba8f422642c | None | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | False | +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.149.9 | | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.149.8 | +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ +--------------------------------------+------------+---------------+---------------------+--------------+ | id | stack_name | stack_status | creation_time | updated_time | +--------------------------------------+------------+---------------+---------------------+--------------+ | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED | 2016-08-03T08:11:34 | None | +--------------------------------------+------------+---------------+---------------------+--------------+ +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name | +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ | ComputeNodesPostDeployment | 3797aec6-e543-4dda-9cd1-c7261e827a64 | OS::TripleO::ComputePostDeployment | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud | | ControllerNodesPostDeployment | 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | OS::TripleO::ControllerPostDeployment | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud | | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | OS::Heat::StructuredDeployments | CREATE_FAILED | 2016-08-03T08:29:19 | overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy | | ControllerOvercloudServicesDeployment_Step4 | 15509f59-ff28-43af-95dd-6247a6a32c2d | OS::Heat::StructuredDeployments | CREATE_FAILED | 2016-08-03T08:29:20 | 
overcloud-ControllerNodesPostDeployment-35y7uafngfwj | | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeployment-cpahcct3tfw3 | | 0 | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | OS::Heat::StructuredDeployment [stack at mitaka-uc ~]$ openstack software deployment show 5e9308f7-c3a9-4a94-a017-e1acb694c036 +---------------+--------------------------------------+ | Field | Value | +---------------+--------------------------------------+ | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | | creation_time | 2016-08-03T08:32:10 | | updated_time | | | status | IN_PROGRESS | | status_reason | Deploy data available | | input_values | {} | | action | CREATE | +---------------+--------------------------------------+ [stack at mitaka-uc ~]$ openstack software deployment show --long 5e9308f7-c3a9-4a94-a017-e1acb694c036 +---------------+--------------------------------------+ | Field | Value | +---------------+--------------------------------------+ | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | | creation_time | 2016-08-03T08:32:10 | | updated_time | | | status | IN_PROGRESS | | status_reason | Deploy data available | | input_values | {} | | action | CREATE | | output_values | None | +---------------+--------------------------------------+ [stack at mitaka-uc ~]$ openstack stack resource list 3797aec6-e543-4dda-9cd1-c7261e827a64 +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | 2016-08-03T08:29:19 | | | | templates/puppet/deploy-artifacts.yaml | | | | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2016-08-03T08:29:19 | | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | OS::Heat::StructuredDeployments | CREATE_FAILED | 2016-08-03T08:29:19 | | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | OS::Heat::StructuredDeployments | CREATE_COMPLETE | 2016-08-03T08:29:19 | | ExtraConfig | | OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | 2016-08-03T08:29:19 | +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ [stack at mitaka-uc ~]$ openstack stack resource list 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | 
+---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ [stack at mitaka-uc ~]$ openstack software deployment show 7cd0aa3d-742f-4e78-99ca-b2a575913f8e +---------------+--------------------------------------+ | Field | Value | +---------------+--------------------------------------+ | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | | creation_time | 2016-08-03T08:30:05 | | updated_time | | | status | IN_PROGRESS | | status_reason | Deploy data available | | input_values | {} | | action | CREATE | +---------------+--------------------------------------+ Keystonerc file was not generated. Please find below openstack status command result on controller and compute. [heat-admin at overcloud-controller-0 ~]$ openstack-status == Nova services == openstack-nova-api: active openstack-nova-compute: inactive (disabled on boot) openstack-nova-network: inactive (disabled on boot) openstack-nova-scheduler: activating(disabled on boot) openstack-nova-cert: active openstack-nova-conductor: active openstack-nova-console: inactive (disabled on boot) openstack-nova-consoleauth: active openstack-nova-xvpvncproxy: inactive (disabled on boot) == Glance services == openstack-glance-api: active openstack-glance-registry: active == Keystone service == openstack-keystone: inactive (disabled on boot) == Horizon service == openstack-dashboard: uncontactable == neutron services == neutron-server: failed (disabled on boot) neutron-dhcp-agent: inactive (disabled on boot) neutron-l3-agent: inactive (disabled on boot) neutron-metadata-agent: inactive (disabled on boot) neutron-lbaas-agent: inactive (disabled on boot) neutron-openvswitch-agent: inactive (disabled on boot) neutron-metering-agent: inactive (disabled on boot) == Swift services == openstack-swift-proxy: active openstack-swift-account: active openstack-swift-container: active openstack-swift-object: active == Cinder services == openstack-cinder-api: active openstack-cinder-scheduler: active openstack-cinder-volume: active openstack-cinder-backup: inactive (disabled on boot) == Ceilometer services == openstack-ceilometer-api: active openstack-ceilometer-central: active openstack-ceilometer-compute: inactive (disabled on boot) openstack-ceilometer-collector: active openstack-ceilometer-notification: active == Heat services == openstack-heat-api: inactive (disabled on boot) openstack-heat-api-cfn: active openstack-heat-api-cloudwatch: inactive (disabled on boot) openstack-heat-engine: inactive (disabled on boot) == Sahara services == openstack-sahara-api: active openstack-sahara-engine: active == Support services == libvirtd: active openvswitch: active dbus: active target: active rabbitmq-server: active memcached: active [heat-admin at overcloud-novacompute-0 ~]$ openstack-status == Nova services == openstack-nova-api: inactive (disabled on boot) openstack-nova-compute: activating(disabled on boot) openstack-nova-network: inactive (disabled on boot) openstack-nova-scheduler: inactive (disabled on boot) openstack-nova-cert: inactive (disabled on boot) openstack-nova-conductor: inactive (disabled on boot) openstack-nova-console: inactive (disabled on boot) openstack-nova-consoleauth: inactive (disabled on boot) openstack-nova-xvpvncproxy: inactive (disabled on boot) == Glance services == openstack-glance-api: inactive (disabled on boot) openstack-glance-registry: inactive (disabled on 
boot) == Keystone service == openstack-keystone: inactive (disabled on boot) == Horizon service == openstack-dashboard: uncontactable == neutron services == neutron-server: inactive (disabled on boot) neutron-dhcp-agent: inactive (disabled on boot) neutron-l3-agent: inactive (disabled on boot) neutron-metadata-agent: inactive (disabled on boot) neutron-lbaas-agent: inactive (disabled on boot) neutron-openvswitch-agent: active neutron-metering-agent: inactive (disabled on boot) == Swift services == openstack-swift-proxy: inactive (disabled on boot) openstack-swift-account: inactive (disabled on boot) openstack-swift-container: inactive (disabled on boot) openstack-swift-object: inactive (disabled on boot) == Cinder services == openstack-cinder-api: inactive (disabled on boot) openstack-cinder-scheduler: inactive (disabled on boot) openstack-cinder-volume: inactive (disabled on boot) openstack-cinder-backup: inactive (disabled on boot) == Ceilometer services == openstack-ceilometer-api: inactive (disabled on boot) openstack-ceilometer-central: inactive (disabled on boot) openstack-ceilometer-compute: inactive (disabled on boot) openstack-ceilometer-collector: inactive (disabled on boot) openstack-ceilometer-notification: inactive (disabled on boot) == Heat services == openstack-heat-api: inactive (disabled on boot) openstack-heat-api-cfn: inactive (disabled on boot) openstack-heat-api-cloudwatch: inactive (disabled on boot) openstack-heat-engine: inactive (disabled on boot) == Sahara services == openstack-sahara-all: inactive (disabled on boot) == Support services == libvirtd: active openvswitch: active dbus: active rabbitmq-server: inactive (disabled on boot) memcached: inactive (disabled on boot) Please let me know if there are any other logs which I can provide that can help in troubleshooting. Thanks a lot in advance for your help and support. Best Regards, Milind Gunjan ________________________________ This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: compute.yaml Type: application/octet-stream Size: 5598 bytes Desc: compute.yaml URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: controller.yaml Type: application/octet-stream Size: 5194 bytes Desc: controller.yaml URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: network-environment.yaml Type: application/octet-stream Size: 2734 bytes Desc: network-environment.yaml URL: From cbrown2 at ocf.co.uk Wed Aug 3 19:58:18 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Wed, 3 Aug 2016 20:58:18 +0100 Subject: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing In-Reply-To: References: Message-ID: <1470254298.2175.15.camel@ocf.co.uk> Hello, On Wed, 2016-08-03 at 19:40 +0100, Gunjan, Milind [CTO] wrote: > Hi All, > > I am currently working on Tripleo Mitaka Openstack deployment on > baremetal servers: > Undercloud - 1 baremetal server with 2 NIC (1 for provisioning and > 2nd for external network connectivity) > Controller - 1 baremetal server ( 6 NICs with each openstack VLANs on > separate NIC) > Compute -
1 baremetal server > > I followed Graeme's instructions here : https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to set up Undercloud . > Undercloud deployment was successful and all the images required for > overcloud deployment was properly built as per the instruction. I > would like to mention that I used libvirt tools to modify the root > password on overcloud-full.qcow2 and we also modified the grub file > to include "net.ifnames=0 biosdevname=0" to restore old interface > naming. I don't think there is a problem doing this but the new naming convention does allow you to reliably target specific NICs during deployment. > I was able to successfully introspect 2 serves to be used for > controller and compute nodes. Also , we added the serial device > discovered during introspection as root device: > ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add > properties/root_device='{"serial": > "618e728372833010c79bead9066f0f9e"}' > ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add > properties/root_device='{"serial": > "618e7283728347101f2107b511603adc"}' Sure, we use wwn values. I hacked the following which is a bit nasty but hey, it works. There are probably easier ways to do it but... https://github.com/cbrown2/openstack-scripts/blob/master/root_device_config.sh > Next, we added compute and control tag to respective introspected > node with local boot option: > > ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add > properties/capabilities='profile:control,boot_option:local' > ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add > properties/capabilities='profile:compute,boot_option:local' I would just add this parameter to the instackenv.json file - one less thing to have to run. > > We used multiple NIC templates for control and compute node which has > been attached along with network-environment.yaml file. Default > network isolation template file has been used. > > > Deployment script looks like this : > #!/bin/bash > DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" > template_base_dir="$DIR" > ntpserver= #Sprint LAB > openstack overcloud deploy --templates \ > -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ > -e ${template_base_dir}/environments/network-environment.yaml \ > --control-flavor control --compute-flavor compute \ > --control-scale 1 --compute-scale 1 \ > --ntp-server $ntpserver \ > --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug I'm not sure why you set the ntp and template variables, but meh. I'd really be inclined to drop network isolation and see if it still deploys. > Heat stack deployment goes on more really long time (more than 4 > hours) and gets stuck at postdeployment configurations. Please find > below the capture during install : You can change the deploy timeout with --timeout 120 for 2 hours, for example.
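As suggested above, the profile and boot_option capabilities can also be carried in the node definitions themselves, so the extra ironic node-update runs are not needed after registration. A minimal sketch of a single node entry, assuming the Mitaka-era instackenv.json import format; the name, MAC and pm_* values are placeholders:

    {
      "nodes": [
        {
          "name": "controller-0",
          "pm_type": "pxe_ipmitool",
          "pm_addr": "192.0.2.10",
          "pm_user": "admin",
          "pm_password": "secret",
          "mac": ["aa:bb:cc:dd:ee:ff"],
          "capabilities": "profile:control,boot_option:local"
        }
      ]
    }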
> > Every 2.0s: ironic node-list && nova list && heat stack-list && heat > resource-list -n5 overcloud | grep -vi complete > Wed Aug 3 17:33:37 2016 > > +--------------------------------------+------+-------------------- > ------------------+-------------+--------------------+-------------+ > | UUID | Name | Instance > UUID | Power State | Provisioning State | > Maintenance | > +--------------------------------------+------+-------------------- > ------------------+-------------+--------------------+-------------+ > | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | 9e7aae15-cabc-4489- > a1b2-778915a78df2 | power on | active | False | > | afcfbee3-3108-48da-a6da-aba8f422642c | None | c1ab52a9-461a-4a11- > a13e-e57ff0a3ae2a | power on | active | False | > +--------------------------------------+------+-------------------- > ------------------+-------------+--------------------+-------------+ > +--------------------------------------+-------------------------+--- > -----+------------+-------------+------------------------+ > | ID | Name | > Status | Task State | Power State | Networks | > +--------------------------------------+-------------------------+--- > -----+------------+-------------+------------------------+ > | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | > ACTIVE | - | Running | ctlplane=192.168.149.9 | > | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | > ACTIVE | - | Running | ctlplane=192.168.149.8 | > +--------------------------------------+-------------------------+--- > -----+------------+-------------+------------------------+ > +--------------------------------------+------------+--------------- > +---------------------+--------------+ > | id | stack_name | stack_status | > creation_time | updated_time | > +--------------------------------------+------------+--------------- > +---------------------+--------------+ > | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED | > 2016-08-03T08:11:34 | None | > +--------------------------------------+------------+--------------- > +---------------------+--------------+ > +---------------------------------------------+-------------------- > ---------------------------+----------------------------------------- > ------------------------------- > ---------+--------------------+---------------------+-------------- > ------------------------------------------------------------------- > ------------------------------+ > | resource_name | > physical_resource_id | resource_type > | resource_status | updated_time | > stack_name > | > +---------------------------------------------+-------------------- > ---------------------------+----------------------------------------- > ------------------------------- > ---------+--------------------+---------------------+-------------- > ------------------------------------------------------------------- > ------------------------------+ > | ComputeNodesPostDeployment | 3797aec6-e543-4dda- > 9cd1-c7261e827a64 | OS::TripleO::ComputePostDeployment > | CREATE_FAILED | 2016-08-03T08:11:35 | > overcloud > | > | ControllerNodesPostDeployment | 6ad9f88c-5c55-4125- > 97f1-eb0e33329d16 | OS::TripleO::ControllerPostDeployment > | CREATE_FAILED | 2016-08-03T08:11:35 | > overcloud > | > | ComputePuppetDeployment | 8b199f85-e4f9-48ad- > 9aee-b1cdf4900b9f | OS::Heat::StructuredDeployments > | CREATE_FAILED | 2016-08-03T08:29:19 | overcloud- > ComputeNodesPostDeployment- > 6vxfu2g2qucy > | > | ControllerOvercloudServicesDeployment_Step4 | 15509f59-ff28-43af- > 
95dd-6247a6a32c2d | OS::Heat::StructuredDeployments > | CREATE_FAILED | 2016-08-03T08:29:20 | overcloud- > ControllerNodesPostDeployment-35y7uafngfwj > | > | 0 | 7cd0aa3d-742f-4e78- > 99ca-b2a575913f8e | OS::Heat::StructuredDeployment > | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | overcloud- > ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeployment- > cpahcct3tfw3 | > | 0 | 5e9308f7-c3a9-4a94- > a017-e1acb694c036 | OS::Heat::StructuredDeployment > > > [stack at mitaka-uc ~]$ openstack software deployment show 5e9308f7- > c3a9-4a94-a017-e1acb694c036 > +---------------+--------------------------------------+ > | Field | Value | > +---------------+--------------------------------------+ > | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | > | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | > | creation_time | 2016-08-03T08:32:10 | > | updated_time | | > | status | IN_PROGRESS | > | status_reason | Deploy data available | > | input_values | {} | > | action | CREATE | > +---------------+--------------------------------------+ > > [stack at mitaka-uc ~]$ openstack software deployment show --long > 5e9308f7-c3a9-4a94-a017-e1acb694c036 > +---------------+--------------------------------------+ > | Field | Value | > +---------------+--------------------------------------+ > | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | > | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | > | creation_time | 2016-08-03T08:32:10 | > | updated_time | | > | status | IN_PROGRESS | > | status_reason | Deploy data available | > | input_values | {} | > | action | CREATE | > | output_values | None | > +---------------+--------------------------------------+ > > [stack at mitaka-uc ~]$ openstack stack resource list 3797aec6-e543- > 4dda-9cd1-c7261e827a64 > +-------------------------+--------------------------------------+--- > ----------------------------------------------+-----------------+-- > -------------------+ > | resource_name | physical_resource_id | > resource_type | resource_status | > updated_time | > +-------------------------+--------------------------------------+--- > ----------------------------------------------+-----------------+-- > -------------------+ > | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | > file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > | | | > templates/puppet/deploy-artifacts.yaml | > | | > | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | > OS::Heat::SoftwareConfig | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | > OS::Heat::StructuredDeployments | CREATE_FAILED | > 2016-08-03T08:29:19 | > | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | > OS::Heat::StructuredDeployments | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > | ExtraConfig | | > OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | > 2016-08-03T08:29:19 | > +-------------------------+--------------------------------------+--- > ----------------------------------------------+-----------------+-- > -------------------+ > > [stack at mitaka-uc ~]$ openstack stack resource list 8b199f85-e4f9- > 48ad-9aee-b1cdf4900b9f > +---------------+--------------------------------------+------------- > -------------------+--------------------+---------------------+ > | resource_name | physical_resource_id | > resource_type | resource_status | > updated_time | > 
+---------------+--------------------------------------+------------- > -------------------+--------------------+---------------------+ > | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-08- > 03T08:30:04 | > +---------------+--------------------------------------+------------- > -------------------+--------------------+---------------------+ > [stack at mitaka-uc ~]$ openstack software deployment show 7cd0aa3d- > 742f-4e78-99ca-b2a575913f8e > +---------------+--------------------------------------+ > | Field | Value | > +---------------+--------------------------------------+ > | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | > | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | > | creation_time | 2016-08-03T08:30:05 | > | updated_time | | > | status | IN_PROGRESS | > | status_reason | Deploy data available | > | input_values | {} | > | action | CREATE | > +---------------+--------------------------------------+ > I'd be inclined to ssh into the nodes themselves and check the logs. Also watch: http://www.anstack.com/blog/2016/07/22/tripleo-deep-dive-session-3.html for further assistance with debugging from the experts. It just looks like post deploy puppet config has gone wrong so I don't think you are far off. heat --help will give the various options for drilling down to debug the failure on the various levels. > > Please let me know if there is any other logs which I can provide > that can help in troubleshooting. > > > Thanks a lot in Advance for your help and support. > > Best Regards, > Milind Gunjan > > > > This e-mail may contain Sprint proprietary information intended for > the sole use of the recipient(s). Any use by others is prohibited. If > you are not the intended recipient, please contact the sender and > delete all copies of the message. -- Regards, Christopher Brown OpenStack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc From marius at remote-lab.net Wed Aug 3 20:00:26 2016 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 3 Aug 2016 22:00:26 +0200 Subject: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing In-Reply-To: References: Message-ID: Hi, Could you please ssh to the nodes, gather the os-collect-config journals (journalctl -l -u os-collect-config) and attach them here? Thank you, Marius On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] wrote: > Hi All, > > > > I am currently working on Tripleo Mitaka Openstack deployment on baremetal > servers: > > Undercloud ? 1 baremetal server with 2 NIC (1 for provisioning and 2nd for > external network connectivity) > > Controller ? 1 baremetal server ( 6 NICs with each openstack VLANs on > separate NIC) > > Compute ? 1 baremetal server > > > > I followed Graeme's instructions here : > https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to set up > Undercloud . Undercloud deployment was successful and all the images > required for overcloud deployment was properly built as per the instruction. > I would like to mention that I used libvirt tools to modify the root > password on overcloud-full.qcow2 and we also modified the grub file to > include ?net.ifnames=0 biosdevname=0? to restore old interface naming. > > > > I was able to successfully introspect 2 serves to be used for controller and > compute nodes. 
Also , we added the serial device discovered during > introspection as root device: > > ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add > properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}' > > ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add > properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}' > > > > Next, we added compute and control tag to respective introspected node with > local boot option: > > > > ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add > properties/capabilities='profile:control,boot_option:local' > > ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add > properties/capabilities='profile:compute,boot_option:local' > > > > We used multiple NIC templates for control and compute node which has been > attached along with network-environment.yaml file. Default network isolation > template file has been used. > > > > > > Deployment script looks like this : > > #!/bin/bash > > DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" > > template_base_dir="$DIR" > > ntpserver= #Sprint LAB > > openstack overcloud deploy --templates \ > > -e > /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml > \ > > -e ${template_base_dir}/environments/network-environment.yaml \ > > --control-flavor control --compute-flavor compute \ > > --control-scale 1 --compute-scale 1 \ > > --ntp-server $ntpserver \ > > --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug > > > > Heat stack deployment goes on more really long time (more than 4 hours) and > gets stuck at postdeployment configurations. Please find below the capture > during install : > > > > > > Every 2.0s: ironic node-list && nova list && heat stack-list && heat > resource-list -n5 overcloud | grep -vi complete > Wed Aug 3 17:33:37 2016 > > > > +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ > > | UUID | Name | Instance UUID > | Power State | Provisioning State | Maintenance | > > +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ > > | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | > 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | > False | > > | afcfbee3-3108-48da-a6da-aba8f422642c | None | > c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | > False | > > +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ > > +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ > > | ID | Name | Status | > Task State | Power State | Networks | > > +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ > > | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | ACTIVE | > - | Running | ctlplane=192.168.149.9 | > > | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | ACTIVE | > - | Running | ctlplane=192.168.149.8 | > > +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ > > +--------------------------------------+------------+---------------+---------------------+--------------+ > > | id | stack_name | stack_status | > creation_time | updated_time | > > 
+--------------------------------------+------------+---------------+---------------------+--------------+ > > | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED | > 2016-08-03T08:11:34 | None | > > +--------------------------------------+------------+---------------+---------------------+--------------+ > > +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ > > ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ > > | resource_name | physical_resource_id > | resource_type > > | resource_status | updated_time | stack_name > | > > +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ > > ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ > > | ComputeNodesPostDeployment | > 3797aec6-e543-4dda-9cd1-c7261e827a64 | > OS::TripleO::ComputePostDeployment > > | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud > | > > | ControllerNodesPostDeployment | > 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | > OS::TripleO::ControllerPostDeployment > > | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud > | > > | ComputePuppetDeployment | > 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | > OS::Heat::StructuredDeployments > > | CREATE_FAILED | 2016-08-03T08:29:19 | > overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy > | > > | ControllerOvercloudServicesDeployment_Step4 | > 15509f59-ff28-43af-95dd-6247a6a32c2d | > OS::Heat::StructuredDeployments > > | CREATE_FAILED | 2016-08-03T08:29:20 | > overcloud-ControllerNodesPostDeployment-35y7uafngfwj > | > > | 0 | > 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > OS::Heat::StructuredDeployment > > | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | > overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeployment-cpahcct3tfw3 > | > > | 0 | > 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > OS::Heat::StructuredDeployment > > > > > > [stack at mitaka-uc ~]$ openstack software deployment show > 5e9308f7-c3a9-4a94-a017-e1acb694c036 > > +---------------+--------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------+ > > | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > > | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | > > | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | > > | creation_time | 2016-08-03T08:32:10 | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {} | > > | action | CREATE | > > +---------------+--------------------------------------+ > > > > [stack at mitaka-uc ~]$ openstack software deployment show --long > 5e9308f7-c3a9-4a94-a017-e1acb694c036 > > +---------------+--------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------+ > > | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > > | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | > > | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | > > | creation_time | 2016-08-03T08:32:10 | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {} | > > | action | CREATE | > > | output_values | None | > > 
+---------------+--------------------------------------+ > > > > [stack at mitaka-uc ~]$ openstack stack resource list > 3797aec6-e543-4dda-9cd1-c7261e827a64 > > +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ > > | resource_name | physical_resource_id | > resource_type | resource_status | > updated_time | > > +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ > > | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | > file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > > | | | > templates/puppet/deploy-artifacts.yaml | | > | > > | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | > OS::Heat::SoftwareConfig | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > > | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | > OS::Heat::StructuredDeployments | CREATE_FAILED | > 2016-08-03T08:29:19 | > > | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | > OS::Heat::StructuredDeployments | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > > | ExtraConfig | | > OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | > 2016-08-03T08:29:19 | > > +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ > > > > [stack at mitaka-uc ~]$ openstack stack resource list > 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f > > +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ > > | resource_name | physical_resource_id | resource_type > | resource_status | updated_time | > > +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ > > | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | > > +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ > > [stack at mitaka-uc ~]$ openstack software deployment show > 7cd0aa3d-742f-4e78-99ca-b2a575913f8e > > +---------------+--------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------+ > > | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > > | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | > > | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | > > | creation_time | 2016-08-03T08:30:05 | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {} | > > | action | CREATE | > > +---------------+--------------------------------------+ > > > > Keystonerc file was not generated. Please find below openstack status > command result on controller and compute. 
> > > > [heat-admin at overcloud-controller-0 ~]$ openstack-status > > == Nova services == > > openstack-nova-api: active > > openstack-nova-compute: inactive (disabled on boot) > > openstack-nova-network: inactive (disabled on boot) > > openstack-nova-scheduler: activating(disabled on boot) > > openstack-nova-cert: active > > openstack-nova-conductor: active > > openstack-nova-console: inactive (disabled on boot) > > openstack-nova-consoleauth: active > > openstack-nova-xvpvncproxy: inactive (disabled on boot) > > == Glance services == > > openstack-glance-api: active > > openstack-glance-registry: active > > == Keystone service == > > openstack-keystone: inactive (disabled on boot) > > == Horizon service == > > openstack-dashboard: uncontactable > > == neutron services == > > neutron-server: failed (disabled on boot) > > neutron-dhcp-agent: inactive (disabled on boot) > > neutron-l3-agent: inactive (disabled on boot) > > neutron-metadata-agent: inactive (disabled on boot) > > neutron-lbaas-agent: inactive (disabled on boot) > > neutron-openvswitch-agent: inactive (disabled on boot) > > neutron-metering-agent: inactive (disabled on boot) > > == Swift services == > > openstack-swift-proxy: active > > openstack-swift-account: active > > openstack-swift-container: active > > openstack-swift-object: active > > == Cinder services == > > openstack-cinder-api: active > > openstack-cinder-scheduler: active > > openstack-cinder-volume: active > > openstack-cinder-backup: inactive (disabled on boot) > > == Ceilometer services == > > openstack-ceilometer-api: active > > openstack-ceilometer-central: active > > openstack-ceilometer-compute: inactive (disabled on boot) > > openstack-ceilometer-collector: active > > openstack-ceilometer-notification: active > > == Heat services == > > openstack-heat-api: inactive (disabled on boot) > > openstack-heat-api-cfn: active > > openstack-heat-api-cloudwatch: inactive (disabled on boot) > > openstack-heat-engine: inactive (disabled on boot) > > == Sahara services == > > openstack-sahara-api: active > > openstack-sahara-engine: active > > == Support services == > > libvirtd: active > > openvswitch: active > > dbus: active > > target: active > > rabbitmq-server: active > > memcached: active > > > > > > [heat-admin at overcloud-novacompute-0 ~]$ openstack-status > > == Nova services == > > openstack-nova-api: inactive (disabled on boot) > > openstack-nova-compute: activating(disabled on boot) > > openstack-nova-network: inactive (disabled on boot) > > openstack-nova-scheduler: inactive (disabled on boot) > > openstack-nova-cert: inactive (disabled on boot) > > openstack-nova-conductor: inactive (disabled on boot) > > openstack-nova-console: inactive (disabled on boot) > > openstack-nova-consoleauth: inactive (disabled on boot) > > openstack-nova-xvpvncproxy: inactive (disabled on boot) > > == Glance services == > > openstack-glance-api: inactive (disabled on boot) > > openstack-glance-registry: inactive (disabled on boot) > > == Keystone service == > > openstack-keystone: inactive (disabled on boot) > > == Horizon service == > > openstack-dashboard: uncontactable > > == neutron services == > > neutron-server: inactive (disabled on boot) > > neutron-dhcp-agent: inactive (disabled on boot) > > neutron-l3-agent: inactive (disabled on boot) > > neutron-metadata-agent: inactive (disabled on boot) > > neutron-lbaas-agent: inactive (disabled on boot) > > neutron-openvswitch-agent: active > > neutron-metering-agent: inactive (disabled on boot) > > == Swift services == > > 
openstack-swift-proxy: inactive (disabled on boot) > > openstack-swift-account: inactive (disabled on boot) > > openstack-swift-container: inactive (disabled on boot) > > openstack-swift-object: inactive (disabled on boot) > > == Cinder services == > > openstack-cinder-api: inactive (disabled on boot) > > openstack-cinder-scheduler: inactive (disabled on boot) > > openstack-cinder-volume: inactive (disabled on boot) > > openstack-cinder-backup: inactive (disabled on boot) > > == Ceilometer services == > > openstack-ceilometer-api: inactive (disabled on boot) > > openstack-ceilometer-central: inactive (disabled on boot) > > openstack-ceilometer-compute: inactive (disabled on boot) > > openstack-ceilometer-collector: inactive (disabled on boot) > > openstack-ceilometer-notification: inactive (disabled on boot) > > == Heat services == > > openstack-heat-api: inactive (disabled on boot) > > openstack-heat-api-cfn: inactive (disabled on boot) > > openstack-heat-api-cloudwatch: inactive (disabled on boot) > > openstack-heat-engine: inactive (disabled on boot) > > == Sahara services == > > openstack-sahara-all: inactive (disabled on boot) > > == Support services == > > libvirtd: active > > openvswitch: active > > dbus: active > > rabbitmq-server: inactive (disabled on boot) > > memcached: inactive (disabled on boot) > > > > > > > > Please let me know if there is any other logs which I can provide that can > help in troubleshooting. > > > > > > Thanks a lot in Advance for your help and support. > > > > Best Regards, > > Milind Gunjan > > > > > ________________________________ > > This e-mail may contain Sprint proprietary information intended for the sole > use of the recipient(s). Any use by others is prohibited. If you are not the > intended recipient, please contact the sender and delete all copies of the > message. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From emilien at redhat.com Wed Aug 3 22:19:32 2016 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 3 Aug 2016 18:19:32 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Wed, Aug 3, 2016 at 1:33 PM, Wesley Hayutin wrote: > > > On Wed, Aug 3, 2016 at 12:51 PM, James Slagle wrote: >> >> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: >> > Please hear me out. >> > TL;DR, Let's work upstream and make it awesome so that downstream can >> > be awesome. >> > >> > I've said this before but I'm going to re-iterate that I do not >> > understand why there is so much effort spent around testing TripleO >> > downstream. >> > By downstream, I mean anything that isn't in TripleO or TripleO-CI >> > proper. >> > >> > All this work should be done upstream to make TripleO and it's CI >> > super awesome and this would trickle down for free downstream. >> > >> > The RDO Trunk testing pipeline is composed of two tools, today. >> > The TripleO-Quickstart project [1] is a good example of an initiative >> > that started downstream but always had the intention of being proposed >> > upstream [2] after being "incubated" and fleshed out. >> >> tripleo-quickstart was proposed to upstream TripleO as a replacement for >> the >> virtual environment setup done by instack-virt-setup. 
3rd party CI would >> be >> used to gate tripleo-quickstart so that we'd be sure the virt setup was >> always >> working. That was the extent of the CI scope defined in the spec. That >> work is >> not yet completed (see work items in the spec). >> >> Now it seems it is a much more all encompassing CI/automation/testing >> project >> that is competing in scope with tripleo-ci itself. > > > IMHO you are correct here. There has been quite a bit of discussion about > removing the parts > of oooq that are outside of the original blueprint to replace > instack-virt-setup w/ oooq. As usual there are many different opinions > here. I think there are a lot of RDO guys that would prefer a lot of the > native oooq roles stay where they are, I think that is short sighted imho. > I agree that anything outside of the blueprint be removed from oooq. This > would hopefully allow the upstream to be more comfortable with oooq and > allow us to really start consolidating tools. > > Luckily for the users that still want to use oooq as a full end-to-end > solution the 3rd party roles can be used even after tearing out these native > roles. > >> >> >> I'm all for consolidation of these types of tools *if* there is interest. > > > Roll call.. is there interest? +1 from me. > >> >> >> However, IMO, incubating these things downstream and then trying to get >> them >> upstream or get upstream to adopt them is not ideal or a good example. The >> same >> topic came up and was pushed several times with khaleesi, and it just >> never >> happened, it was continually DOA upstream. > > > True, however that could be a result of the downstream perceiving barriers ( > real or not ) in incubating projects in upstream openstack. > >> >> >> I think it would be fairly difficult to get tripleo-ci to wholesale adopt >> tripleo-quickstart at this stage. The separate irc channel from #tripleo >> is not >> conducive to consolidation on tooling and direction imo. > > > The irc channel is easily addressed. We do seem to generate an awful amount > of chatter though :) > >> >> >> The scope of quickstart is actually not fully understood by myself. I've >> also >> heard from some in the upstream TripleO community as well who are confused >> by >> its direction and are facing similar difficulties using its generated bash >> scripts that they'd be facing if they were just using TripleO >> documentation >> instead. > > > The point of the generated bash scripts is to create rst documentation and > reusable scripts for the end user. Since the documentation and the > generated scripts are equivalent I would expect the same errors, problems > and issues. I see this as a good thing really. We *want* the CI to hit the > same issues as those who are following the doc. > >> >> >> I do think that this sort of problem lends itself easily to one off >> implementations as is quite evidenced in this thread. Everyone/group wants >> and >> needs to automate something in a different way. And imo, none of these >> tools >> are building end-user or operator facing interfaces, so they're not fully >> focused on building something that "just works for everyone". Those >> interfaces >> should be developed in TripleO user facing tooling anyway >> (tripleoclient/openstackclient/etc). >> >> So, I actually think it's ok in some degree that things have been >> automated >> differently in different tools. Anecdotally, I suspect many users of >> TripleO in >> production have their own automation tools as well. 
And none of the >> implementations mentioned in this thread would likely meet their needs >> either. > > > This is true.. without a tool in the upstream that addresses ci, dev, test > use cases across the development cycle this will continue to be the case. I > suspect even with a perfect tool, it won't ever be perfect for everyone. > >> >> >> However, if there is a desire to focus resources on consolidated tooling >> and >> someone to drive it forward, then I definitely agree that the effort needs >> to >> start upstream with a singular plan for tripleo-ci. From what I gather, >> that >> would be some sort of alignment and reuse of tripleo-quickstart, and then >> we >> could build from there. > > > +1 > >> >> >> That could start as a discussion and plan within that community with some >> agreed on concensus around that plan. There was an initial thread on >> openstack-dev related to this topic but it is stalled a bit. It could be >> continually driven to resolution via specs, the tripleo meeting, email or >> irc >> discussion until a plan is formed. > > > +1, I think the first step is to complete the original blueprint and move > on from there. > I think there has also been interest in having an in person meeting at > summit. > > Thanks! > >> >> >> -- >> -- James Slagle >> -- >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com I like how the discussion goes though I have some personal (and probably shared) feeling that I would like to share here, more or less related. As a TripleO core developer, I have some frustration to see that a lot of people are involved in making TripleO Quickstart better, while we have a few people actually working on tripleo-ci tool and try to maintain upstream CI stable. As a reminder, tripleo-ci tool is currently the ONLY ONE thing that actually gates TripleO, even if we don't like the tool. It is right now, testing TripleO upstream, everything that is not tested in there will probably break one day downstream CIs. Yes we have this tooling discussion here and that's awesome, but words are words. I would like to see some real engagement to help TripleO CI to converge into something better and not only everyone working on their side. Some examples: - TripleO Quickstart (downstream) CI has coverage for undercloud & overcloud upgrades while TripleO CI freshly has a undercloud upgrade job and used to have a overcloud (minor) upgrade job (disabled now, for some reasons related to our capacity to run jobs and also some blockers into code itself). - TripleO CI has some TripleO Heat templates that could also be re-use by TripleO Quickstart (I'm working on moving them from tripleo-ci to THT, WIP here: https://review.openstack.org/350775). - TripleO CI deploys Ceph Jewel repository, TripleO Quickstart doesn't. - (...) We have been having this discussion for a while now but we're still not making much progress here, I feel like we're in statu quo. James mentioned a blueprint, I like it. We need to engage some upstream discussion about this major CI refactor, like we need with specs and then we'll decide if whether or not we need to change the tool, and how. 
-- Emilien Macchi From whayutin at redhat.com Wed Aug 3 23:38:03 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 3 Aug 2016 19:38:03 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Wed, Aug 3, 2016 at 6:19 PM, Emilien Macchi wrote: > On Wed, Aug 3, 2016 at 1:33 PM, Wesley Hayutin > wrote: > > > > > > On Wed, Aug 3, 2016 at 12:51 PM, James Slagle > wrote: > >> > >> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: > >> > Please hear me out. > >> > TL;DR, Let's work upstream and make it awesome so that downstream can > >> > be awesome. > >> > > >> > I've said this before but I'm going to re-iterate that I do not > >> > understand why there is so much effort spent around testing TripleO > >> > downstream. > >> > By downstream, I mean anything that isn't in TripleO or TripleO-CI > >> > proper. > >> > > >> > All this work should be done upstream to make TripleO and it's CI > >> > super awesome and this would trickle down for free downstream. > >> > > >> > The RDO Trunk testing pipeline is composed of two tools, today. > >> > The TripleO-Quickstart project [1] is a good example of an initiative > >> > that started downstream but always had the intention of being proposed > >> > upstream [2] after being "incubated" and fleshed out. > >> > >> tripleo-quickstart was proposed to upstream TripleO as a replacement for > >> the > >> virtual environment setup done by instack-virt-setup. 3rd party CI would > >> be > >> used to gate tripleo-quickstart so that we'd be sure the virt setup was > >> always > >> working. That was the extent of the CI scope defined in the spec. That > >> work is > >> not yet completed (see work items in the spec). > >> > >> Now it seems it is a much more all encompassing CI/automation/testing > >> project > >> that is competing in scope with tripleo-ci itself. > > > > > > IMHO you are correct here. There has been quite a bit of discussion > about > > removing the parts > > of oooq that are outside of the original blueprint to replace > > instack-virt-setup w/ oooq. As usual there are many different opinions > > here. I think there are a lot of RDO guys that would prefer a lot of the > > native oooq roles stay where they are, I think that is short sighted > imho. > > I agree that anything outside of the blueprint be removed from oooq. > This > > would hopefully allow the upstream to be more comfortable with oooq and > > allow us to really start consolidating tools. > > > > Luckily for the users that still want to use oooq as a full end-to-end > > solution the 3rd party roles can be used even after tearing out these > native > > roles. > > > >> > >> > >> I'm all for consolidation of these types of tools *if* there is > interest. > > > > > > Roll call.. is there interest? +1 from me. > > > >> > >> > >> However, IMO, incubating these things downstream and then trying to get > >> them > >> upstream or get upstream to adopt them is not ideal or a good example. > The > >> same > >> topic came up and was pushed several times with khaleesi, and it just > >> never > >> happened, it was continually DOA upstream. > > > > > > True, however that could be a result of the downstream perceiving > barriers ( > > real or not ) in incubating projects in upstream openstack. 
> > > >> > >> > >> I think it would be fairly difficult to get tripleo-ci to wholesale > adopt > >> tripleo-quickstart at this stage. The separate irc channel from #tripleo > >> is not > >> conducive to consolidation on tooling and direction imo. > > > > > > The irc channel is easily addressed. We do seem to generate an awful > amount > > of chatter though :) > > > >> > >> > >> The scope of quickstart is actually not fully understood by myself. I've > >> also > >> heard from some in the upstream TripleO community as well who are > confused > >> by > >> its direction and are facing similar difficulties using its generated > bash > >> scripts that they'd be facing if they were just using TripleO > >> documentation > >> instead. > > > > > > The point of the generated bash scripts is to create rst documentation > and > > reusable scripts for the end user. Since the documentation and the > > generated scripts are equivalent I would expect the same errors, problems > > and issues. I see this as a good thing really. We *want* the CI to hit > the > > same issues as those who are following the doc. > > > >> > >> > >> I do think that this sort of problem lends itself easily to one off > >> implementations as is quite evidenced in this thread. Everyone/group > wants > >> and > >> needs to automate something in a different way. And imo, none of these > >> tools > >> are building end-user or operator facing interfaces, so they're not > fully > >> focused on building something that "just works for everyone". Those > >> interfaces > >> should be developed in TripleO user facing tooling anyway > >> (tripleoclient/openstackclient/etc). > >> > >> So, I actually think it's ok in some degree that things have been > >> automated > >> differently in different tools. Anecdotally, I suspect many users of > >> TripleO in > >> production have their own automation tools as well. And none of the > >> implementations mentioned in this thread would likely meet their needs > >> either. > > > > > > This is true.. without a tool in the upstream that addresses ci, dev, > test > > use cases across the development cycle this will continue to be the > case. I > > suspect even with a perfect tool, it won't ever be perfect for everyone. > > > >> > >> > >> However, if there is a desire to focus resources on consolidated tooling > >> and > >> someone to drive it forward, then I definitely agree that the effort > needs > >> to > >> start upstream with a singular plan for tripleo-ci. From what I gather, > >> that > >> would be some sort of alignment and reuse of tripleo-quickstart, and > then > >> we > >> could build from there. > > > > > > +1 > > > >> > >> > >> That could start as a discussion and plan within that community with > some > >> agreed on concensus around that plan. There was an initial thread on > >> openstack-dev related to this topic but it is stalled a bit. It could be > >> continually driven to resolution via specs, the tripleo meeting, email > or > >> irc > >> discussion until a plan is formed. > > > > > > +1, I think the first step is to complete the original blueprint and > move > > on from there. > > I think there has also been interest in having an in person meeting at > > summit. > > > > Thanks! 
> > > >> > >> > >> -- > >> -- James Slagle > >> -- > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > I like how the discussion goes though I have some personal (and > probably shared) feeling that I would like to share here, more or less > related. > > As a TripleO core developer, I have some frustration to see that a lot > of people are involved in making TripleO Quickstart better, while we > have a few people actually working on tripleo-ci tool and try to > maintain upstream CI stable. > As a reminder, tripleo-ci tool is currently the ONLY ONE thing that > actually gates TripleO, even if we don't like the tool. It is right > now, testing TripleO upstream, everything that is not tested in there > will probably break one day downstream CIs. > Yes we have this tooling discussion here and that's awesome, but words > are words. I would like to see some real engagement to help TripleO CI > to converge into something better and not only everyone working on > their side. > You have a valid point and reason to be frustrated. Is the point here that everyone downstream should use tripleo.sh or that everyone should be focused on ci and testing at the tripleo level? > > Some examples: > - TripleO Quickstart (downstream) CI has coverage for undercloud & > overcloud upgrades while TripleO CI freshly has a undercloud upgrade > job and used to have a overcloud (minor) upgrade job (disabled now, > for some reasons related to our capacity to run jobs and also some > blockers into code itself). > - TripleO CI has some TripleO Heat templates that could also be re-use > by TripleO Quickstart (I'm working on moving them from tripleo-ci to > THT, WIP here: https://review.openstack.org/350775). > - TripleO CI deploys Ceph Jewel repository, TripleO Quickstart doesn't. > - (...) > As others have mentioned, there are at least 5-10 tools in development that are used to deploy tripleo in some CI fashion. Calling out tripleo-quickstart alone is not quite right imho. There are a number of tripleo devs that burn cycles on their own ci tools and maybe that is fine thing to do. TripleO-Quickstart is meant to replace instack-virt-setup which it does quite well. The only group that was actually running instack-virt-setup was the RDO CI team, upstream had taken it out of the ci system. I think it's not unfair to say gaps have been left for other teams to fill. > > We have been having this discussion for a while now but we're still > not making much progress here, I feel like we're in statu quo. > James mentioned a blueprint, I like it. We need to engage some > upstream discussion about this major CI refactor, like we need with > specs and then we'll decide if whether or not we need to change the > tool, and how. > Well, this would take some leadership imho. We need some people that are familiar with the upstream, midstream and downstream requirements of CI. This was addressed at the production chain meetings initially but then pretty much ignored. The leaders responsible at the various stages of a build (upstream -> downstream ) failed to take this issue on. Here we are today. Would it be acceptable by anyone.. 
IF tripleo-quickstart replaced instack-virt-setup [1] and walked through the undercloud install, then handed off to tripleo.sh to deploy, upgrade, update, scale, validate etc??? That these two tools *would* in fact be the the official CI tools of tripleo at the upstream, RDO, and at least parts of the downstream? Would that help to ease the current frustration around CI? Emilien what do you think? [1] https://review.openstack.org/#/c/276810/ > > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Aug 4 00:45:16 2016 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 3 Aug 2016 20:45:16 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Wed, Aug 3, 2016 at 7:38 PM, Wesley Hayutin wrote: > > > On Wed, Aug 3, 2016 at 6:19 PM, Emilien Macchi wrote: >> >> On Wed, Aug 3, 2016 at 1:33 PM, Wesley Hayutin >> wrote: >> > >> > >> > On Wed, Aug 3, 2016 at 12:51 PM, James Slagle >> > wrote: >> >> >> >> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: >> >> > Please hear me out. >> >> > TL;DR, Let's work upstream and make it awesome so that downstream can >> >> > be awesome. >> >> > >> >> > I've said this before but I'm going to re-iterate that I do not >> >> > understand why there is so much effort spent around testing TripleO >> >> > downstream. >> >> > By downstream, I mean anything that isn't in TripleO or TripleO-CI >> >> > proper. >> >> > >> >> > All this work should be done upstream to make TripleO and it's CI >> >> > super awesome and this would trickle down for free downstream. >> >> > >> >> > The RDO Trunk testing pipeline is composed of two tools, today. >> >> > The TripleO-Quickstart project [1] is a good example of an initiative >> >> > that started downstream but always had the intention of being >> >> > proposed >> >> > upstream [2] after being "incubated" and fleshed out. >> >> >> >> tripleo-quickstart was proposed to upstream TripleO as a replacement >> >> for >> >> the >> >> virtual environment setup done by instack-virt-setup. 3rd party CI >> >> would >> >> be >> >> used to gate tripleo-quickstart so that we'd be sure the virt setup was >> >> always >> >> working. That was the extent of the CI scope defined in the spec. That >> >> work is >> >> not yet completed (see work items in the spec). >> >> >> >> Now it seems it is a much more all encompassing CI/automation/testing >> >> project >> >> that is competing in scope with tripleo-ci itself. >> > >> > >> > IMHO you are correct here. There has been quite a bit of discussion >> > about >> > removing the parts >> > of oooq that are outside of the original blueprint to replace >> > instack-virt-setup w/ oooq. As usual there are many different opinions >> > here. I think there are a lot of RDO guys that would prefer a lot of >> > the >> > native oooq roles stay where they are, I think that is short sighted >> > imho. >> > I agree that anything outside of the blueprint be removed from oooq. >> > This >> > would hopefully allow the upstream to be more comfortable with oooq and >> > allow us to really start consolidating tools. >> > >> > Luckily for the users that still want to use oooq as a full end-to-end >> > solution the 3rd party roles can be used even after tearing out these >> > native >> > roles. 
>> > >> >> >> >> >> >> I'm all for consolidation of these types of tools *if* there is >> >> interest. >> > >> > >> > Roll call.. is there interest? +1 from me. >> > >> >> >> >> >> >> However, IMO, incubating these things downstream and then trying to get >> >> them >> >> upstream or get upstream to adopt them is not ideal or a good example. >> >> The >> >> same >> >> topic came up and was pushed several times with khaleesi, and it just >> >> never >> >> happened, it was continually DOA upstream. >> > >> > >> > True, however that could be a result of the downstream perceiving >> > barriers ( >> > real or not ) in incubating projects in upstream openstack. >> > >> >> >> >> >> >> I think it would be fairly difficult to get tripleo-ci to wholesale >> >> adopt >> >> tripleo-quickstart at this stage. The separate irc channel from >> >> #tripleo >> >> is not >> >> conducive to consolidation on tooling and direction imo. >> > >> > >> > The irc channel is easily addressed. We do seem to generate an awful >> > amount >> > of chatter though :) >> > >> >> >> >> >> >> The scope of quickstart is actually not fully understood by myself. >> >> I've >> >> also >> >> heard from some in the upstream TripleO community as well who are >> >> confused >> >> by >> >> its direction and are facing similar difficulties using its generated >> >> bash >> >> scripts that they'd be facing if they were just using TripleO >> >> documentation >> >> instead. >> > >> > >> > The point of the generated bash scripts is to create rst documentation >> > and >> > reusable scripts for the end user. Since the documentation and the >> > generated scripts are equivalent I would expect the same errors, >> > problems >> > and issues. I see this as a good thing really. We *want* the CI to hit >> > the >> > same issues as those who are following the doc. >> > >> >> >> >> >> >> I do think that this sort of problem lends itself easily to one off >> >> implementations as is quite evidenced in this thread. Everyone/group >> >> wants >> >> and >> >> needs to automate something in a different way. And imo, none of these >> >> tools >> >> are building end-user or operator facing interfaces, so they're not >> >> fully >> >> focused on building something that "just works for everyone". Those >> >> interfaces >> >> should be developed in TripleO user facing tooling anyway >> >> (tripleoclient/openstackclient/etc). >> >> >> >> So, I actually think it's ok in some degree that things have been >> >> automated >> >> differently in different tools. Anecdotally, I suspect many users of >> >> TripleO in >> >> production have their own automation tools as well. And none of the >> >> implementations mentioned in this thread would likely meet their needs >> >> either. >> > >> > >> > This is true.. without a tool in the upstream that addresses ci, dev, >> > test >> > use cases across the development cycle this will continue to be the >> > case. I >> > suspect even with a perfect tool, it won't ever be perfect for everyone. >> > >> >> >> >> >> >> However, if there is a desire to focus resources on consolidated >> >> tooling >> >> and >> >> someone to drive it forward, then I definitely agree that the effort >> >> needs >> >> to >> >> start upstream with a singular plan for tripleo-ci. From what I gather, >> >> that >> >> would be some sort of alignment and reuse of tripleo-quickstart, and >> >> then >> >> we >> >> could build from there. 
>> > >> > >> > +1 >> > >> >> >> >> >> >> That could start as a discussion and plan within that community with >> >> some >> >> agreed on concensus around that plan. There was an initial thread on >> >> openstack-dev related to this topic but it is stalled a bit. It could >> >> be >> >> continually driven to resolution via specs, the tripleo meeting, email >> >> or >> >> irc >> >> discussion until a plan is formed. >> > >> > >> > +1, I think the first step is to complete the original blueprint and >> > move >> > on from there. >> > I think there has also been interest in having an in person meeting at >> > summit. >> > >> > Thanks! >> > >> >> >> >> >> >> -- >> >> -- James Slagle >> >> -- >> >> >> >> _______________________________________________ >> >> rdo-list mailing list >> >> rdo-list at redhat.com >> >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> > >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> I like how the discussion goes though I have some personal (and >> probably shared) feeling that I would like to share here, more or less >> related. >> >> As a TripleO core developer, I have some frustration to see that a lot >> of people are involved in making TripleO Quickstart better, while we >> have a few people actually working on tripleo-ci tool and try to >> maintain upstream CI stable. >> As a reminder, tripleo-ci tool is currently the ONLY ONE thing that >> actually gates TripleO, even if we don't like the tool. It is right >> now, testing TripleO upstream, everything that is not tested in there >> will probably break one day downstream CIs. >> Yes we have this tooling discussion here and that's awesome, but words >> are words. I would like to see some real engagement to help TripleO CI >> to converge into something better and not only everyone working on >> their side. > > > You have a valid point and reason to be frustrated. > Is the point here that everyone downstream should use tripleo.sh or that > everyone should be focused on ci and testing at the tripleo level? Not everyone should use tripleo.sh. My point is that we should move forward with a common tool, and stop enlarging the gap between tools. We have created (and are still doing) a technical dept where we have multiple tools with a ton of overlap, the more we wait, more difficult it will be to clean this up. >> >> >> Some examples: >> - TripleO Quickstart (downstream) CI has coverage for undercloud & >> overcloud upgrades while TripleO CI freshly has a undercloud upgrade >> job and used to have a overcloud (minor) upgrade job (disabled now, >> for some reasons related to our capacity to run jobs and also some >> blockers into code itself). >> - TripleO CI has some TripleO Heat templates that could also be re-use >> by TripleO Quickstart (I'm working on moving them from tripleo-ci to >> THT, WIP here: https://review.openstack.org/350775). >> - TripleO CI deploys Ceph Jewel repository, TripleO Quickstart doesn't. >> - (...) > > > As others have mentioned, there are at least 5-10 tools in development that > are used to deploy tripleo in some CI fashion. Calling out > tripleo-quickstart alone is not quite right imho. There are a number of > tripleo devs that burn cycles on their own ci tools and maybe that is fine > thing to do. 
I called quickstart because that's the one I see every day but my frustration is about all our tools in general. I'm actually an OOOQ user and I like this tool, really. But as you can see, I'm also working on tripleo-ci right now because I want TripleO CI to be better and I haven't seen much interest to converge until now. James started something cool by trying to deploy an undercloud using OOOQ from tripleo-ci. That's a start! We need things like this, prototyping convergence, and seeing what we can do. > TripleO-Quickstart is meant to replace instack-virt-setup which it does > quite well. The only group that was actually running instack-virt-setup > was the RDO CI team, upstream had taken it out of the ci system. I think > it's not unfair to say gaps have been left for other teams to fill. Gotcha. It was just some examples. >> >> >> We have been having this discussion for a while now but we're still >> not making much progress here, I feel like we're in statu quo. >> James mentioned a blueprint, I like it. We need to engage some >> upstream discussion about this major CI refactor, like we need with >> specs and then we'll decide if whether or not we need to change the >> tool, and how. > > > Well, this would take some leadership imho. We need some people that are > familiar with the upstream, midstream and downstream requirements of CI. > This was addressed at the production chain meetings initially but then > pretty much ignored. The leaders responsible at the various stages of a > build (upstream -> downstream ) failed to take this issue on. Here we are > today. > > Would it be acceptable by anyone.. IF > > tripleo-quickstart replaced instack-virt-setup [1] and walked through the > undercloud install, then handed off to tripleo.sh to deploy, upgrade, > update, scale, validate etc??? That's something we can try. > That these two tools *would* in fact be the the official CI tools of tripleo > at the upstream, RDO, and at least parts of the downstream? My opinion on this is that upstream and downstream CI should only differ on: * the packages (OSP vs RDO) * the scenarios (downstream could have customer-specific things) And that's it. Tools should remain the same IMHO. > Would that help to ease the current frustration around CI? Emilien what do > you think? I spent the last months working on composable roles and I have now more time to work on CI; $topic is definitely something where I would like to help. -- Emilien Macchi From Milind.Gunjan at sprint.com Thu Aug 4 01:22:37 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Thu, 4 Aug 2016 01:22:37 +0000 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing Message-ID: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> Thanks a lot Christopher for the suggestions. Marius: Thanks a lot for helping me out. I am attaching the requested logs. I tried to redeploy overcloud with 3 controller but the issue remains the same. Overcloud stack deployment is failing at Post-deployment configuration steps as before. When I was going to /var/log/messages for different services, it seems there is issue with haproxy service. Neutron service is failing too and the service endpoints being configured through puppet are not reachable for all failed service. I have attached os-collect-config journals from all four nodes. Please let me know if there is any other logs or any other troubleshooting steps which I can implement.
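In the meantime, the checks I plan to run next on overcloud-controller-0 and from the undercloud look roughly like this - only a rough sketch, with the unit names guessed from the openstack-status output below, pcs applying only to the pacemaker-managed 3-controller attempt, and <deployment-id> standing in for whatever shows up as stuck in the resource list:

# on the controller, check the services that look unhealthy in openstack-status
sudo systemctl status haproxy neutron-server openstack-nova-scheduler
sudo journalctl -u neutron-server --no-pager -n 100
# if pacemaker is managing the services (3-controller HA case)
sudo pcs status
# confirm haproxy is actually listening on the service endpoint ports
sudo ss -tlnp | grep haproxy

# from the undercloud, list only the failed or in-progress pieces of the stack
heat resource-list -n5 overcloud | grep -vi COMPLETE
# and dump the deployment that appears stuck
openstack software deployment show <deployment-id>

Happy to paste the output of any of these if that helps.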
Best Regards, Milind -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Wednesday, August 03, 2016 4:00 PM To: Gunjan, Milind [CTO] Cc: rdo-list at redhat.com Subject: Re: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing Hi, Could you please ssh to the nodes, gather the os-collect-config journals (journalctl -l -u os-collect-config) and attach them here? Thank you, Marius On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] wrote: > Hi All, > > > > I am currently working on Tripleo Mitaka Openstack deployment on > baremetal > servers: > > Undercloud ? 1 baremetal server with 2 NIC (1 for provisioning and 2nd > for external network connectivity) > > Controller ? 1 baremetal server ( 6 NICs with each openstack VLANs on > separate NIC) > > Compute ? 1 baremetal server > > > > I followed Graeme's instructions here : > https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to > set up Undercloud . Undercloud deployment was successful and all the > images required for overcloud deployment was properly built as per the instruction. > I would like to mention that I used libvirt tools to modify the root > password on overcloud-full.qcow2 and we also modified the grub file to > include ?net.ifnames=0 biosdevname=0? to restore old interface naming. > > > > I was able to successfully introspect 2 serves to be used for > controller and compute nodes. Also , we added the serial device > discovered during introspection as root device: > > ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add > properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}' > > ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add > properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}' > > > > Next, we added compute and control tag to respective introspected node > with local boot option: > > > > ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add > properties/capabilities='profile:control,boot_option:local' > > ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add > properties/capabilities='profile:compute,boot_option:local' > > > > We used multiple NIC templates for control and compute node which has > been attached along with network-environment.yaml file. Default > network isolation template file has been used. > > > > > > Deployment script looks like this : > > #!/bin/bash > > DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" > > template_base_dir="$DIR" > > ntpserver= #Sprint LAB > > openstack overcloud deploy --templates \ > > -e > /usr/share/openstack-tripleo-heat-templates/environments/network-isola > tion.yaml > \ > > -e ${template_base_dir}/environments/network-environment.yaml \ > > --control-flavor control --compute-flavor compute \ > > --control-scale 1 --compute-scale 1 \ > > --ntp-server $ntpserver \ > > --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug > > > > Heat stack deployment goes on more really long time (more than 4 > hours) and gets stuck at postdeployment configurations. 
Please find > below the capture during install : > > > > > > Every 2.0s: ironic node-list && nova list && heat stack-list && heat > resource-list -n5 overcloud | grep -vi complete Wed Aug 3 17:33:37 > 2016 > > > > +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ > > | UUID | Name | Instance UUID > | Power State | Provisioning State | Maintenance | > > +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ > > | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | > 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | > False | > > | afcfbee3-3108-48da-a6da-aba8f422642c | None | > c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | > False | > > +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ > > +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ > > | ID | Name | Status | > Task State | Power State | Networks | > > +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ > > | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | > | ACTIVE | > - | Running | ctlplane=192.168.149.9 | > > | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | > | ACTIVE | > - | Running | ctlplane=192.168.149.8 | > > +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ > > +--------------------------------------+------------+---------------+---------------------+--------------+ > > | id | stack_name | stack_status | > creation_time | updated_time | > > +--------------------------------------+------------+---------------+---------------------+--------------+ > > | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED | > 2016-08-03T08:11:34 | None | > > +--------------------------------------+------------+---------------+---------------------+--------------+ > > +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ > > ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ > > | resource_name | physical_resource_id > | resource_type > > | resource_status | updated_time | stack_name > | > > +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ > > ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ > > | ComputeNodesPostDeployment | > 3797aec6-e543-4dda-9cd1-c7261e827a64 | > OS::TripleO::ComputePostDeployment > > | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud > | > > | ControllerNodesPostDeployment | > 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | > OS::TripleO::ControllerPostDeployment > > | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud > | > > | ComputePuppetDeployment | > 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | > OS::Heat::StructuredDeployments > > | CREATE_FAILED | 2016-08-03T08:29:19 | > overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy > | > > | 
ControllerOvercloudServicesDeployment_Step4 | > 15509f59-ff28-43af-95dd-6247a6a32c2d | > OS::Heat::StructuredDeployments > > | CREATE_FAILED | 2016-08-03T08:29:20 | > overcloud-ControllerNodesPostDeployment-35y7uafngfwj > | > > | 0 | > 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > OS::Heat::StructuredDeployment > > | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | > overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeploym > ent-cpahcct3tfw3 > | > > | 0 | > 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > OS::Heat::StructuredDeployment > > > > > > [stack at mitaka-uc ~]$ openstack software deployment show > 5e9308f7-c3a9-4a94-a017-e1acb694c036 > > +---------------+--------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------+ > > | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > > | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | > > | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | > > | creation_time | 2016-08-03T08:32:10 | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {} | > > | action | CREATE | > > +---------------+--------------------------------------+ > > > > [stack at mitaka-uc ~]$ openstack software deployment show --long > 5e9308f7-c3a9-4a94-a017-e1acb694c036 > > +---------------+--------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------+ > > | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | > > | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | > > | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | > > | creation_time | 2016-08-03T08:32:10 | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {} | > > | action | CREATE | > > | output_values | None | > > +---------------+--------------------------------------+ > > > > [stack at mitaka-uc ~]$ openstack stack resource list > 3797aec6-e543-4dda-9cd1-c7261e827a64 > > +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ > > | resource_name | physical_resource_id | > resource_type | resource_status | > updated_time | > > +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ > > | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | > file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > > | | | > templates/puppet/deploy-artifacts.yaml | | > | > > | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | > OS::Heat::SoftwareConfig | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > > | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | > OS::Heat::StructuredDeployments | CREATE_FAILED | > 2016-08-03T08:29:19 | > > | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | > OS::Heat::StructuredDeployments | CREATE_COMPLETE | > 2016-08-03T08:29:19 | > > | ExtraConfig | | > OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | > 2016-08-03T08:29:19 | > > +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ > > > > [stack at mitaka-uc ~]$ openstack stack resource list > 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f > > 
+---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ > > | resource_name | physical_resource_id | resource_type > | resource_status | updated_time | > > +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ > > | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | > 2016-08-03T08:30:04 | > > +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ > > [stack at mitaka-uc ~]$ openstack software deployment show > 7cd0aa3d-742f-4e78-99ca-b2a575913f8e > > +---------------+--------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------+ > > | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | > > | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | > > | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | > > | creation_time | 2016-08-03T08:30:05 | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {} | > > | action | CREATE | > > +---------------+--------------------------------------+ > > > > Keystonerc file was not generated. Please find below openstack status > command result on controller and compute. > > > > [heat-admin at overcloud-controller-0 ~]$ openstack-status > > == Nova services == > > openstack-nova-api: active > > openstack-nova-compute: inactive (disabled on boot) > > openstack-nova-network: inactive (disabled on boot) > > openstack-nova-scheduler: activating(disabled on boot) > > openstack-nova-cert: active > > openstack-nova-conductor: active > > openstack-nova-console: inactive (disabled on boot) > > openstack-nova-consoleauth: active > > openstack-nova-xvpvncproxy: inactive (disabled on boot) > > == Glance services == > > openstack-glance-api: active > > openstack-glance-registry: active > > == Keystone service == > > openstack-keystone: inactive (disabled on boot) > > == Horizon service == > > openstack-dashboard: uncontactable > > == neutron services == > > neutron-server: failed (disabled on boot) > > neutron-dhcp-agent: inactive (disabled on boot) > > neutron-l3-agent: inactive (disabled on boot) > > neutron-metadata-agent: inactive (disabled on boot) > > neutron-lbaas-agent: inactive (disabled on boot) > > neutron-openvswitch-agent: inactive (disabled on boot) > > neutron-metering-agent: inactive (disabled on boot) > > == Swift services == > > openstack-swift-proxy: active > > openstack-swift-account: active > > openstack-swift-container: active > > openstack-swift-object: active > > == Cinder services == > > openstack-cinder-api: active > > openstack-cinder-scheduler: active > > openstack-cinder-volume: active > > openstack-cinder-backup: inactive (disabled on boot) > > == Ceilometer services == > > openstack-ceilometer-api: active > > openstack-ceilometer-central: active > > openstack-ceilometer-compute: inactive (disabled on boot) > > openstack-ceilometer-collector: active > > openstack-ceilometer-notification: active > > == Heat services == > > openstack-heat-api: inactive (disabled on boot) > > openstack-heat-api-cfn: active > > openstack-heat-api-cloudwatch: inactive (disabled on boot) > > openstack-heat-engine: inactive (disabled on boot) > > == Sahara services == > > openstack-sahara-api: active > > openstack-sahara-engine: active > > == Support services == > > 
libvirtd: active > > openvswitch: active > > dbus: active > > target: active > > rabbitmq-server: active > > memcached: active > > > > > > [heat-admin at overcloud-novacompute-0 ~]$ openstack-status > > == Nova services == > > openstack-nova-api: inactive (disabled on boot) > > openstack-nova-compute: activating(disabled on boot) > > openstack-nova-network: inactive (disabled on boot) > > openstack-nova-scheduler: inactive (disabled on boot) > > openstack-nova-cert: inactive (disabled on boot) > > openstack-nova-conductor: inactive (disabled on boot) > > openstack-nova-console: inactive (disabled on boot) > > openstack-nova-consoleauth: inactive (disabled on boot) > > openstack-nova-xvpvncproxy: inactive (disabled on boot) > > == Glance services == > > openstack-glance-api: inactive (disabled on boot) > > openstack-glance-registry: inactive (disabled on boot) > > == Keystone service == > > openstack-keystone: inactive (disabled on boot) > > == Horizon service == > > openstack-dashboard: uncontactable > > == neutron services == > > neutron-server: inactive (disabled on boot) > > neutron-dhcp-agent: inactive (disabled on boot) > > neutron-l3-agent: inactive (disabled on boot) > > neutron-metadata-agent: inactive (disabled on boot) > > neutron-lbaas-agent: inactive (disabled on boot) > > neutron-openvswitch-agent: active > > neutron-metering-agent: inactive (disabled on boot) > > == Swift services == > > openstack-swift-proxy: inactive (disabled on boot) > > openstack-swift-account: inactive (disabled on boot) > > openstack-swift-container: inactive (disabled on boot) > > openstack-swift-object: inactive (disabled on boot) > > == Cinder services == > > openstack-cinder-api: inactive (disabled on boot) > > openstack-cinder-scheduler: inactive (disabled on boot) > > openstack-cinder-volume: inactive (disabled on boot) > > openstack-cinder-backup: inactive (disabled on boot) > > == Ceilometer services == > > openstack-ceilometer-api: inactive (disabled on boot) > > openstack-ceilometer-central: inactive (disabled on boot) > > openstack-ceilometer-compute: inactive (disabled on boot) > > openstack-ceilometer-collector: inactive (disabled on boot) > > openstack-ceilometer-notification: inactive (disabled on boot) > > == Heat services == > > openstack-heat-api: inactive (disabled on boot) > > openstack-heat-api-cfn: inactive (disabled on boot) > > openstack-heat-api-cloudwatch: inactive (disabled on boot) > > openstack-heat-engine: inactive (disabled on boot) > > == Sahara services == > > openstack-sahara-all: inactive (disabled on boot) > > == Support services == > > libvirtd: active > > openvswitch: active > > dbus: active > > rabbitmq-server: inactive (disabled on boot) > > memcached: inactive (disabled on boot) > > > > > > > > Please let me know if there is any other logs which I can provide that > can help in troubleshooting. > > > > > > Thanks a lot in Advance for your help and support. > > > > Best Regards, > > Milind Gunjan > > > > > ________________________________ > > This e-mail may contain Sprint proprietary information intended for > the sole use of the recipient(s). Any use by others is prohibited. If > you are not the intended recipient, please contact the sender and > delete all copies of the message. 
> > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com ________________________________ This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. -------------- next part -------------- A non-text attachment was scrubbed... Name: os-collect-config-journal-compute0.log Type: application/octet-stream Size: 337554 bytes Desc: os-collect-config-journal-compute0.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: os-collect-config-journal-controller0.log Type: application/octet-stream Size: 805836 bytes Desc: os-collect-config-journal-controller0.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: os-collect-config-journal-controller1.log Type: application/octet-stream Size: 814216 bytes Desc: os-collect-config-journal-controller1.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: os-collect-config-journal-controller2.log Type: application/octet-stream Size: 857425 bytes Desc: os-collect-config-journal-controller2.log URL: From me at gbraad.nl Thu Aug 4 01:46:36 2016 From: me at gbraad.nl (Gerard Braad) Date: Thu, 4 Aug 2016 09:46:36 +0800 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> Message-ID: Hi, On Thu, Aug 4, 2016 at 9:22 AM, Gunjan, Milind [CTO] wrote: > Please let me know if there is any other logs > From: Marius Cornea [mailto:marius at remote-lab.net] > attach them here? Would be appreciated if in future attachments can be placed on a website instead. Such as creating a bug on the TripleO launchpad and do the attachments from there: https://launchpad.net/tripleo. Alternative would be to create a question on ask.openstack.org, referring to this email and vice versa, and upload the logs there. ;-) Regards, Gerard From Milind.Gunjan at sprint.com Thu Aug 4 01:52:19 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Thu, 4 Aug 2016 01:52:19 +0000 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> Message-ID: Thanks a lot Gerard for suggestion. I will definitely go ahead and file a bug and submit all the attachments on launchpad. I already have created question on ask.openstack.org but due to insufficient karma :( , I am unable to attach files there. I am working towards improving my OpenStack karma :) Best regards, Milind -----Original Message----- From: Gerard Braad [mailto:me at gbraad.nl] Sent: Wednesday, August 03, 2016 9:47 PM To: Gunjan, Milind [CTO] ; rdo-list Cc: Marius Cornea Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing Hi, On Thu, Aug 4, 2016 at 9:22 AM, Gunjan, Milind [CTO] wrote: > Please let me know if there is any other logs > From: Marius Cornea [mailto:marius at remote-lab.net] attach them here? Would be appreciated if in future attachments can be placed on a website instead. Such as creating a bug on the TripleO launchpad and do the attachments from there: https://launchpad.net/tripleo. 
Alternative would be to create a question on ask.openstack.org, referring to this email and vice versa, and upload the logs there. ;-) Regards, Gerard ________________________________ This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. From me at gbraad.nl Thu Aug 4 01:54:06 2016 From: me at gbraad.nl (Gerard Braad) Date: Thu, 4 Aug 2016 09:54:06 +0800 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> Message-ID: Hi On Thu, Aug 4, 2016 at 9:52 AM, Gunjan, Milind [CTO] wrote: > I already have created question on ask.openstack.org but due to insufficient karma :( , I am unable to attach files there. I am working towards improving my OpenStack karma :) Karma is something you can easily earn... Can you refer to the URL for this question? regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From Milind.Gunjan at sprint.com Thu Aug 4 01:57:33 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Thu, 4 Aug 2016 01:57:33 +0000 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> Message-ID: <9f68e015aa044ae8b2045aab65b8fba1@PREWE13M11.ad.sprint.com> Please find attached the requested link : https://ask.openstack.org/en/question/95249/rdo-tripleo-mitaka-non-ha-overcloud-failing/ Best Regards, Milind -----Original Message----- From: Gerard Braad [mailto:me at gbraad.nl] Sent: Wednesday, August 03, 2016 9:54 PM To: Gunjan, Milind [CTO] Cc: rdo-list ; Marius Cornea Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing Hi On Thu, Aug 4, 2016 at 9:52 AM, Gunjan, Milind [CTO] wrote: > I already have created question on ask.openstack.org but due to > insufficient karma :( , I am unable to attach files there. I am > working towards improving my OpenStack karma :) Karma is something you can easily earn... Can you refer to the URL for this question? regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] ________________________________ This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. From dms at redhat.com Thu Aug 4 03:04:25 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 3 Aug 2016 23:04:25 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Wed, Aug 3, 2016 at 8:45 PM, Emilien Macchi wrote: > My opinion on this is that upstream and downstream CI should only differ on: > * the packages (OSP vs RDO) > * the scenarios (downstream could have customer-specific things) > And that's it. Tools should remain the same IMHO. Nothing to add here but all of my +1's. 
David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From marius at remote-lab.net Thu Aug 4 08:25:51 2016 From: marius at remote-lab.net (Marius Cornea) Date: Thu, 4 Aug 2016 10:25:51 +0200 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> Message-ID: OK, I don't actually see an error in the logs, the last thing that shows up is: on controller-0: [DEBUG] Running /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/c989f58d-cd38-4813-a174-7e42c82bcb6f.json on compute-0: [DEBUG] Running /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/c5265c58-96ae-49d5-9c1e-a38041e2b130.json I suspect these steps are timing out so let's try running them manually to figure out what's going on: Running the commands manually will output a puppet apply command, showing one from my environment as an example: # /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.json [2016-08-04 08:12:21,609] (heat-config) [DEBUG] Running FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" FACTER_fqdn="overcloud-controller-0.localdomain" FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" puppet apply --detailed-exitcodes /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp Next step is to stop it(ctrl+c), copy the puppet apply command, add --debug and run it: # FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" FACTER_fqdn="overcloud-controller-0.localdomain" FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" puppet apply --detailed-exitcodes /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp --debug This should output puppet debug info that might lead us to where it gets stuck. Please paste the output so we can investigate further. Thanks On Thu, Aug 4, 2016 at 3:22 AM, Gunjan, Milind [CTO] wrote: > Thanks a lot Christopher for the suggestions. > > Marius: Thanks a lot for helping me out. I am attaching the requested logs. > > I tried to redeploy overcloud with 3 controller but the issue remains the same. Overcloud stack deployment is failing at Post-deployment configuration steps as before. When I was going to /var/log/messages for different services, it seems there is issue with haproxy service. Neutron service is failing too and the service endpoints being configured through puppet are not reachable for all failed service. I have attached os-collect-config journals from all four nodes. > > > Please let me know if there is any other logs or any other troubleshooting steps which I can implement. > > Best Regards, > Milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Wednesday, August 03, 2016 4:00 PM > To: Gunjan, Milind [CTO] > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing > > Hi, > > Could you please ssh to the nodes, gather the os-collect-config journals (journalctl -l -u os-collect-config) and attach them here? > > Thank you, > Marius > > On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] wrote: >> Hi All, >> >> >> >> I am currently working on Tripleo Mitaka Openstack deployment on >> baremetal >> servers: >> >> Undercloud ? 
1 baremetal server with 2 NIC (1 for provisioning and 2nd >> for external network connectivity) >> >> Controller ? 1 baremetal server ( 6 NICs with each openstack VLANs on >> separate NIC) >> >> Compute ? 1 baremetal server >> >> >> >> I followed Graeme's instructions here : >> https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to >> set up Undercloud . Undercloud deployment was successful and all the >> images required for overcloud deployment was properly built as per the instruction. >> I would like to mention that I used libvirt tools to modify the root >> password on overcloud-full.qcow2 and we also modified the grub file to >> include ?net.ifnames=0 biosdevname=0? to restore old interface naming. >> >> >> >> I was able to successfully introspect 2 serves to be used for >> controller and compute nodes. Also , we added the serial device >> discovered during introspection as root device: >> >> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >> properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}' >> >> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >> properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}' >> >> >> >> Next, we added compute and control tag to respective introspected node >> with local boot option: >> >> >> >> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >> properties/capabilities='profile:control,boot_option:local' >> >> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >> properties/capabilities='profile:compute,boot_option:local' >> >> >> >> We used multiple NIC templates for control and compute node which has >> been attached along with network-environment.yaml file. Default >> network isolation template file has been used. >> >> >> >> >> >> Deployment script looks like this : >> >> #!/bin/bash >> >> DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" >> >> template_base_dir="$DIR" >> >> ntpserver= #Sprint LAB >> >> openstack overcloud deploy --templates \ >> >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/network-isola >> tion.yaml >> \ >> >> -e ${template_base_dir}/environments/network-environment.yaml \ >> >> --control-flavor control --compute-flavor compute \ >> >> --control-scale 1 --compute-scale 1 \ >> >> --ntp-server $ntpserver \ >> >> --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug >> >> >> >> Heat stack deployment goes on more really long time (more than 4 >> hours) and gets stuck at postdeployment configurations. 
Please find >> below the capture during install : >> >> >> >> >> >> Every 2.0s: ironic node-list && nova list && heat stack-list && heat >> resource-list -n5 overcloud | grep -vi complete Wed Aug 3 17:33:37 >> 2016 >> >> >> >> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >> >> | UUID | Name | Instance UUID >> | Power State | Provisioning State | Maintenance | >> >> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >> >> | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | >> 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | >> False | >> >> | afcfbee3-3108-48da-a6da-aba8f422642c | None | >> c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | >> False | >> >> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >> >> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >> >> | ID | Name | Status | >> Task State | Power State | Networks | >> >> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >> >> | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | >> | ACTIVE | >> - | Running | ctlplane=192.168.149.9 | >> >> | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | >> | ACTIVE | >> - | Running | ctlplane=192.168.149.8 | >> >> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >> >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> >> | id | stack_name | stack_status | >> creation_time | updated_time | >> >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> >> | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED | >> 2016-08-03T08:11:34 | None | >> >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> >> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >> >> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >> >> | resource_name | physical_resource_id >> | resource_type >> >> | resource_status | updated_time | stack_name >> | >> >> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >> >> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >> >> | ComputeNodesPostDeployment | >> 3797aec6-e543-4dda-9cd1-c7261e827a64 | >> OS::TripleO::ComputePostDeployment >> >> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >> | >> >> | ControllerNodesPostDeployment | >> 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | >> OS::TripleO::ControllerPostDeployment >> >> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >> | >> >> | ComputePuppetDeployment | >> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >> OS::Heat::StructuredDeployments >> >> | CREATE_FAILED | 
2016-08-03T08:29:19 | >> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy >> | >> >> | ControllerOvercloudServicesDeployment_Step4 | >> 15509f59-ff28-43af-95dd-6247a6a32c2d | >> OS::Heat::StructuredDeployments >> >> | CREATE_FAILED | 2016-08-03T08:29:20 | >> overcloud-ControllerNodesPostDeployment-35y7uafngfwj >> | >> >> | 0 | >> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >> OS::Heat::StructuredDeployment >> >> | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | >> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeploym >> ent-cpahcct3tfw3 >> | >> >> | 0 | >> 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >> OS::Heat::StructuredDeployment >> >> >> >> >> >> [stack at mitaka-uc ~]$ openstack software deployment show >> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >> >> +---------------+--------------------------------------+ >> >> | Field | Value | >> >> +---------------+--------------------------------------+ >> >> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >> >> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >> >> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >> >> | creation_time | 2016-08-03T08:32:10 | >> >> | updated_time | | >> >> | status | IN_PROGRESS | >> >> | status_reason | Deploy data available | >> >> | input_values | {} | >> >> | action | CREATE | >> >> +---------------+--------------------------------------+ >> >> >> >> [stack at mitaka-uc ~]$ openstack software deployment show --long >> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >> >> +---------------+--------------------------------------+ >> >> | Field | Value | >> >> +---------------+--------------------------------------+ >> >> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >> >> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >> >> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >> >> | creation_time | 2016-08-03T08:32:10 | >> >> | updated_time | | >> >> | status | IN_PROGRESS | >> >> | status_reason | Deploy data available | >> >> | input_values | {} | >> >> | action | CREATE | >> >> | output_values | None | >> >> +---------------+--------------------------------------+ >> >> >> >> [stack at mitaka-uc ~]$ openstack stack resource list >> 3797aec6-e543-4dda-9cd1-c7261e827a64 >> >> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >> >> | resource_name | physical_resource_id | >> resource_type | resource_status | >> updated_time | >> >> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >> >> | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | >> file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | >> 2016-08-03T08:29:19 | >> >> | | | >> templates/puppet/deploy-artifacts.yaml | | >> | >> >> | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | >> OS::Heat::SoftwareConfig | CREATE_COMPLETE | >> 2016-08-03T08:29:19 | >> >> | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >> OS::Heat::StructuredDeployments | CREATE_FAILED | >> 2016-08-03T08:29:19 | >> >> | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | >> OS::Heat::StructuredDeployments | CREATE_COMPLETE | >> 2016-08-03T08:29:19 | >> >> | ExtraConfig | | >> OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | >> 2016-08-03T08:29:19 | >> >> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ 
>> >> >> >> [stack at mitaka-uc ~]$ openstack stack resource list >> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f >> >> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >> >> | resource_name | physical_resource_id | resource_type >> | resource_status | updated_time | >> >> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >> >> | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >> OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | >> 2016-08-03T08:30:04 | >> >> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >> >> [stack at mitaka-uc ~]$ openstack software deployment show >> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e >> >> +---------------+--------------------------------------+ >> >> | Field | Value | >> >> +---------------+--------------------------------------+ >> >> | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >> >> | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | >> >> | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | >> >> | creation_time | 2016-08-03T08:30:05 | >> >> | updated_time | | >> >> | status | IN_PROGRESS | >> >> | status_reason | Deploy data available | >> >> | input_values | {} | >> >> | action | CREATE | >> >> +---------------+--------------------------------------+ >> >> >> >> Keystonerc file was not generated. Please find below openstack status >> command result on controller and compute. >> >> >> >> [heat-admin at overcloud-controller-0 ~]$ openstack-status >> >> == Nova services == >> >> openstack-nova-api: active >> >> openstack-nova-compute: inactive (disabled on boot) >> >> openstack-nova-network: inactive (disabled on boot) >> >> openstack-nova-scheduler: activating(disabled on boot) >> >> openstack-nova-cert: active >> >> openstack-nova-conductor: active >> >> openstack-nova-console: inactive (disabled on boot) >> >> openstack-nova-consoleauth: active >> >> openstack-nova-xvpvncproxy: inactive (disabled on boot) >> >> == Glance services == >> >> openstack-glance-api: active >> >> openstack-glance-registry: active >> >> == Keystone service == >> >> openstack-keystone: inactive (disabled on boot) >> >> == Horizon service == >> >> openstack-dashboard: uncontactable >> >> == neutron services == >> >> neutron-server: failed (disabled on boot) >> >> neutron-dhcp-agent: inactive (disabled on boot) >> >> neutron-l3-agent: inactive (disabled on boot) >> >> neutron-metadata-agent: inactive (disabled on boot) >> >> neutron-lbaas-agent: inactive (disabled on boot) >> >> neutron-openvswitch-agent: inactive (disabled on boot) >> >> neutron-metering-agent: inactive (disabled on boot) >> >> == Swift services == >> >> openstack-swift-proxy: active >> >> openstack-swift-account: active >> >> openstack-swift-container: active >> >> openstack-swift-object: active >> >> == Cinder services == >> >> openstack-cinder-api: active >> >> openstack-cinder-scheduler: active >> >> openstack-cinder-volume: active >> >> openstack-cinder-backup: inactive (disabled on boot) >> >> == Ceilometer services == >> >> openstack-ceilometer-api: active >> >> openstack-ceilometer-central: active >> >> openstack-ceilometer-compute: inactive (disabled on boot) >> >> openstack-ceilometer-collector: active >> >> openstack-ceilometer-notification: active >> >> == Heat services == >> >> openstack-heat-api: inactive (disabled on boot) >> >> openstack-heat-api-cfn: active >> 
>> openstack-heat-api-cloudwatch: inactive (disabled on boot) >> >> openstack-heat-engine: inactive (disabled on boot) >> >> == Sahara services == >> >> openstack-sahara-api: active >> >> openstack-sahara-engine: active >> >> == Support services == >> >> libvirtd: active >> >> openvswitch: active >> >> dbus: active >> >> target: active >> >> rabbitmq-server: active >> >> memcached: active >> >> >> >> >> >> [heat-admin at overcloud-novacompute-0 ~]$ openstack-status >> >> == Nova services == >> >> openstack-nova-api: inactive (disabled on boot) >> >> openstack-nova-compute: activating(disabled on boot) >> >> openstack-nova-network: inactive (disabled on boot) >> >> openstack-nova-scheduler: inactive (disabled on boot) >> >> openstack-nova-cert: inactive (disabled on boot) >> >> openstack-nova-conductor: inactive (disabled on boot) >> >> openstack-nova-console: inactive (disabled on boot) >> >> openstack-nova-consoleauth: inactive (disabled on boot) >> >> openstack-nova-xvpvncproxy: inactive (disabled on boot) >> >> == Glance services == >> >> openstack-glance-api: inactive (disabled on boot) >> >> openstack-glance-registry: inactive (disabled on boot) >> >> == Keystone service == >> >> openstack-keystone: inactive (disabled on boot) >> >> == Horizon service == >> >> openstack-dashboard: uncontactable >> >> == neutron services == >> >> neutron-server: inactive (disabled on boot) >> >> neutron-dhcp-agent: inactive (disabled on boot) >> >> neutron-l3-agent: inactive (disabled on boot) >> >> neutron-metadata-agent: inactive (disabled on boot) >> >> neutron-lbaas-agent: inactive (disabled on boot) >> >> neutron-openvswitch-agent: active >> >> neutron-metering-agent: inactive (disabled on boot) >> >> == Swift services == >> >> openstack-swift-proxy: inactive (disabled on boot) >> >> openstack-swift-account: inactive (disabled on boot) >> >> openstack-swift-container: inactive (disabled on boot) >> >> openstack-swift-object: inactive (disabled on boot) >> >> == Cinder services == >> >> openstack-cinder-api: inactive (disabled on boot) >> >> openstack-cinder-scheduler: inactive (disabled on boot) >> >> openstack-cinder-volume: inactive (disabled on boot) >> >> openstack-cinder-backup: inactive (disabled on boot) >> >> == Ceilometer services == >> >> openstack-ceilometer-api: inactive (disabled on boot) >> >> openstack-ceilometer-central: inactive (disabled on boot) >> >> openstack-ceilometer-compute: inactive (disabled on boot) >> >> openstack-ceilometer-collector: inactive (disabled on boot) >> >> openstack-ceilometer-notification: inactive (disabled on boot) >> >> == Heat services == >> >> openstack-heat-api: inactive (disabled on boot) >> >> openstack-heat-api-cfn: inactive (disabled on boot) >> >> openstack-heat-api-cloudwatch: inactive (disabled on boot) >> >> openstack-heat-engine: inactive (disabled on boot) >> >> == Sahara services == >> >> openstack-sahara-all: inactive (disabled on boot) >> >> == Support services == >> >> libvirtd: active >> >> openvswitch: active >> >> dbus: active >> >> rabbitmq-server: inactive (disabled on boot) >> >> memcached: inactive (disabled on boot) >> >> >> >> >> >> >> >> Please let me know if there is any other logs which I can provide that >> can help in troubleshooting. >> >> >> >> >> >> Thanks a lot in Advance for your help and support. >> >> >> >> Best Regards, >> >> Milind Gunjan >> >> >> >> >> ________________________________ >> >> This e-mail may contain Sprint proprietary information intended for >> the sole use of the recipient(s). 
Any use by others is prohibited. If >> you are not the intended recipient, please contact the sender and >> delete all copies of the message. >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > ________________________________ > > This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. From tkammer at redhat.com Thu Aug 4 09:10:40 2016 From: tkammer at redhat.com (Tal Kammer) Date: Thu, 4 Aug 2016 12:10:40 +0300 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: Reading this through I can't help but feel that once again we are straying from the original intent of the discussion. I also share the same feeling as Emilien and believe we should focus our efforts on where it matters rather than having each group focus on one part trying to "win" over the other. My concern since day 1 was that no one has asked what are the requirements of each group and how does the tool serve that requirement. Simply put, the requirement was "we need a CI tool" while I feel that the question of "what is a CI tool?" or "what/who does it need to serve?" was never raised and was left to self interpretation. The result became that each group started working according to its own understanding of a "CI tool" and I feel that even now, going over the e-mail thread, this question is left unanswered. I believe that in order to achieve a consensus around a tool, we should at least agree first on the needs. Some questions that come to mind: (feel free to pitch in more questions) 1. should the tool provide provisioning? if yes, what type? (virsh? openstack? foreman? etc..) --> this can be further discussed as it's quite a subject to cover 2. should the tool be Ansible based / other language? 3. where do we push / maintain this tool? (I believe 'upstream' is the answer but before it can officially accepted, we need a "midstream" place to hold it) 4. what other requirements does this tool needs to support? (does it need to include testing as well? what is the proper "plugin" mechanism to maintain, etc) Once we understand the above, we can ask: 1. "is oooq the right tool to use/push upstream?" (maybe one of the other projects like Octario, Weirdo, InfraRed, etc, has a better base / better suited for the job?) 2. "what are we looking for in such a tool?" (are we developing a tool for CI? for Automation? can it be shared across products/projects or dedicated Openstack tool? etc..) I propose that all CI groups meet up and present their tool to each other. After that, we should have an open discussion on the pros and cons of each tool allowing everyone to comment on the design openly. Then after everyone shared their thoughts, we can start improving / create a proper tool from the experience of everyone. Whether it will be oooq based, InfraRed based or Octario based, it really doesn't matter, as long as there is an open discussion (ego aside) and an honest decision between all parties on how we can improve, we can get to where everyone is aiming, a truly collaborative project by all groups. 
Thoughts? On Thu, Aug 4, 2016 at 3:45 AM, Emilien Macchi wrote: > On Wed, Aug 3, 2016 at 7:38 PM, Wesley Hayutin > wrote: > > > > > > On Wed, Aug 3, 2016 at 6:19 PM, Emilien Macchi > wrote: > >> > >> On Wed, Aug 3, 2016 at 1:33 PM, Wesley Hayutin > >> wrote: > >> > > >> > > >> > On Wed, Aug 3, 2016 at 12:51 PM, James Slagle > >> > wrote: > >> >> > >> >> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: > >> >> > Please hear me out. > >> >> > TL;DR, Let's work upstream and make it awesome so that downstream > can > >> >> > be awesome. > >> >> > > >> >> > I've said this before but I'm going to re-iterate that I do not > >> >> > understand why there is so much effort spent around testing TripleO > >> >> > downstream. > >> >> > By downstream, I mean anything that isn't in TripleO or TripleO-CI > >> >> > proper. > >> >> > > >> >> > All this work should be done upstream to make TripleO and it's CI > >> >> > super awesome and this would trickle down for free downstream. > >> >> > > >> >> > The RDO Trunk testing pipeline is composed of two tools, today. > >> >> > The TripleO-Quickstart project [1] is a good example of an > initiative > >> >> > that started downstream but always had the intention of being > >> >> > proposed > >> >> > upstream [2] after being "incubated" and fleshed out. > >> >> > >> >> tripleo-quickstart was proposed to upstream TripleO as a replacement > >> >> for > >> >> the > >> >> virtual environment setup done by instack-virt-setup. 3rd party CI > >> >> would > >> >> be > >> >> used to gate tripleo-quickstart so that we'd be sure the virt setup > was > >> >> always > >> >> working. That was the extent of the CI scope defined in the spec. > That > >> >> work is > >> >> not yet completed (see work items in the spec). > >> >> > >> >> Now it seems it is a much more all encompassing CI/automation/testing > >> >> project > >> >> that is competing in scope with tripleo-ci itself. > >> > > >> > > >> > IMHO you are correct here. There has been quite a bit of discussion > >> > about > >> > removing the parts > >> > of oooq that are outside of the original blueprint to replace > >> > instack-virt-setup w/ oooq. As usual there are many different > opinions > >> > here. I think there are a lot of RDO guys that would prefer a lot of > >> > the > >> > native oooq roles stay where they are, I think that is short sighted > >> > imho. > >> > I agree that anything outside of the blueprint be removed from oooq. > >> > This > >> > would hopefully allow the upstream to be more comfortable with oooq > and > >> > allow us to really start consolidating tools. > >> > > >> > Luckily for the users that still want to use oooq as a full end-to-end > >> > solution the 3rd party roles can be used even after tearing out these > >> > native > >> > roles. > >> > > >> >> > >> >> > >> >> I'm all for consolidation of these types of tools *if* there is > >> >> interest. > >> > > >> > > >> > Roll call.. is there interest? +1 from me. > >> > > >> >> > >> >> > >> >> However, IMO, incubating these things downstream and then trying to > get > >> >> them > >> >> upstream or get upstream to adopt them is not ideal or a good > example. > >> >> The > >> >> same > >> >> topic came up and was pushed several times with khaleesi, and it just > >> >> never > >> >> happened, it was continually DOA upstream. > >> > > >> > > >> > True, however that could be a result of the downstream perceiving > >> > barriers ( > >> > real or not ) in incubating projects in upstream openstack. 
> >> > > >> >> > >> >> > >> >> I think it would be fairly difficult to get tripleo-ci to wholesale > >> >> adopt > >> >> tripleo-quickstart at this stage. The separate irc channel from > >> >> #tripleo > >> >> is not > >> >> conducive to consolidation on tooling and direction imo. > >> > > >> > > >> > The irc channel is easily addressed. We do seem to generate an awful > >> > amount > >> > of chatter though :) > >> > > >> >> > >> >> > >> >> The scope of quickstart is actually not fully understood by myself. > >> >> I've > >> >> also > >> >> heard from some in the upstream TripleO community as well who are > >> >> confused > >> >> by > >> >> its direction and are facing similar difficulties using its generated > >> >> bash > >> >> scripts that they'd be facing if they were just using TripleO > >> >> documentation > >> >> instead. > >> > > >> > > >> > The point of the generated bash scripts is to create rst documentation > >> > and > >> > reusable scripts for the end user. Since the documentation and the > >> > generated scripts are equivalent I would expect the same errors, > >> > problems > >> > and issues. I see this as a good thing really. We *want* the CI to > hit > >> > the > >> > same issues as those who are following the doc. > >> > > >> >> > >> >> > >> >> I do think that this sort of problem lends itself easily to one off > >> >> implementations as is quite evidenced in this thread. Everyone/group > >> >> wants > >> >> and > >> >> needs to automate something in a different way. And imo, none of > these > >> >> tools > >> >> are building end-user or operator facing interfaces, so they're not > >> >> fully > >> >> focused on building something that "just works for everyone". Those > >> >> interfaces > >> >> should be developed in TripleO user facing tooling anyway > >> >> (tripleoclient/openstackclient/etc). > >> >> > >> >> So, I actually think it's ok in some degree that things have been > >> >> automated > >> >> differently in different tools. Anecdotally, I suspect many users of > >> >> TripleO in > >> >> production have their own automation tools as well. And none of the > >> >> implementations mentioned in this thread would likely meet their > needs > >> >> either. > >> > > >> > > >> > This is true.. without a tool in the upstream that addresses ci, dev, > >> > test > >> > use cases across the development cycle this will continue to be the > >> > case. I > >> > suspect even with a perfect tool, it won't ever be perfect for > everyone. > >> > > >> >> > >> >> > >> >> However, if there is a desire to focus resources on consolidated > >> >> tooling > >> >> and > >> >> someone to drive it forward, then I definitely agree that the effort > >> >> needs > >> >> to > >> >> start upstream with a singular plan for tripleo-ci. From what I > gather, > >> >> that > >> >> would be some sort of alignment and reuse of tripleo-quickstart, and > >> >> then > >> >> we > >> >> could build from there. > >> > > >> > > >> > +1 > >> > > >> >> > >> >> > >> >> That could start as a discussion and plan within that community with > >> >> some > >> >> agreed on concensus around that plan. There was an initial thread on > >> >> openstack-dev related to this topic but it is stalled a bit. It could > >> >> be > >> >> continually driven to resolution via specs, the tripleo meeting, > email > >> >> or > >> >> irc > >> >> discussion until a plan is formed. > >> > > >> > > >> > +1, I think the first step is to complete the original blueprint and > >> > move > >> > on from there. 
> >> > I think there has also been interest in having an in person meeting at > >> > summit. > >> > > >> > Thanks! > >> > > >> >> > >> >> > >> >> -- > >> >> -- James Slagle > >> >> -- > >> >> > >> >> _______________________________________________ > >> >> rdo-list mailing list > >> >> rdo-list at redhat.com > >> >> https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > >> > > >> > > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >> I like how the discussion goes though I have some personal (and > >> probably shared) feeling that I would like to share here, more or less > >> related. > >> > >> As a TripleO core developer, I have some frustration to see that a lot > >> of people are involved in making TripleO Quickstart better, while we > >> have a few people actually working on tripleo-ci tool and try to > >> maintain upstream CI stable. > >> As a reminder, tripleo-ci tool is currently the ONLY ONE thing that > >> actually gates TripleO, even if we don't like the tool. It is right > >> now, testing TripleO upstream, everything that is not tested in there > >> will probably break one day downstream CIs. > >> Yes we have this tooling discussion here and that's awesome, but words > >> are words. I would like to see some real engagement to help TripleO CI > >> to converge into something better and not only everyone working on > >> their side. > > > > > > You have a valid point and reason to be frustrated. > > Is the point here that everyone downstream should use tripleo.sh or that > > everyone should be focused on ci and testing at the tripleo level? > > Not everyone should use tripleo.sh. My point is that we should move > forward with a common tool, and stop enlarging the gap between tools. > We have created (and are still doing) a technical dept where we have > multiple tools with a ton of overlap, the more we wait, more difficult > it will be to clean this up. > > >> > >> > >> Some examples: > >> - TripleO Quickstart (downstream) CI has coverage for undercloud & > >> overcloud upgrades while TripleO CI freshly has a undercloud upgrade > >> job and used to have a overcloud (minor) upgrade job (disabled now, > >> for some reasons related to our capacity to run jobs and also some > >> blockers into code itself). > >> - TripleO CI has some TripleO Heat templates that could also be re-use > >> by TripleO Quickstart (I'm working on moving them from tripleo-ci to > >> THT, WIP here: https://review.openstack.org/350775). > >> - TripleO CI deploys Ceph Jewel repository, TripleO Quickstart doesn't. > >> - (...) > > > > > > As others have mentioned, there are at least 5-10 tools in development > that > > are used to deploy tripleo in some CI fashion. Calling out > > tripleo-quickstart alone is not quite right imho. There are a number of > > tripleo devs that burn cycles on their own ci tools and maybe that is > fine > > thing to do. > > I called quickstart because that's the one I see everyday but my > frustration is about all our tools in general. > I'm actually a OOOQ user and I like this tool, really. > But as you can see, I'm also working on tripleo-ci right now because I > want TripleO CI better and I haven't seen until now some interest to > converge. 
> James started something cool by trying to deploy an undercloud using > OOOQ from tripleo-ci. That's a start ! We need things like this, > prototyping convergence, and see what we can do. > > > TripleO-Quickstart is meant to replace instack-virt-setup which it does > > quite well. The only group that was actually running instack-virt-setup > > was the RDO CI team, upstream had taken it out of the ci system. I think > > it's not unfair to say gaps have been left for other teams to fill. > > Gotcha. It was just some examples. > > >> > >> > >> We have been having this discussion for a while now but we're still > >> not making much progress here, I feel like we're in statu quo. > >> James mentioned a blueprint, I like it. We need to engage some > >> upstream discussion about this major CI refactor, like we need with > >> specs and then we'll decide if whether or not we need to change the > >> tool, and how. > > > > > > Well, this would take some leadership imho. We need some people that are > > familiar with the upstream, midstream and downstream requirements of CI. > > This was addressed at the production chain meetings initially but then > > pretty much ignored. The leaders responsible at the various stages of a > > build (upstream -> downstream ) failed to take this issue on. Here we > are > > today. > > > > Would it be acceptable by anyone.. IF > > > > tripleo-quickstart replaced instack-virt-setup [1] and walked through > the > > undercloud install, then handed off to tripleo.sh to deploy, upgrade, > > update, scale, validate etc??? > > That's something we can try. > > > That these two tools *would* in fact be the the official CI tools of > tripleo > > at the upstream, RDO, and at least parts of the downstream? > > My opinion on this is that upstream and downstream CI should only differ > on: > * the packages (OSP vs RDO) > * the scenarios (downstream could have customer-specific things) > And that's it. Tools should remain the same IMHO. > > > Would that help to ease the current frustration around CI? Emilien what > do > > you think? > > I spent the last months working on composable roles and I have now > more time to work on CI; $topic is definitely something where I would > like to help. > -- > Emilien Macchi > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Tal Kammer Associate manager, automation and infrastracture, Openstack platform. Red Hat Israel Automation group mojo: https://mojo.redhat.com/docs/DOC-1011659 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgsousa at gmail.com Thu Aug 4 09:34:18 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 4 Aug 2016 10:34:18 +0100 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime Message-ID: Hi all, I have an overcloud with 3 controller nodes, everything is working fine, the problem is when I reboot one of the controllers. When the node comes online, all the services (nova-api, neutron-server) on the other nodes are also restarted, causing a couple of minutes of downtime until everything is recovered. In the example below I restarted controller2 and I see these messages on controller0. My question is if this is the expected behavior, because in my opinion it shouldn't happen. 
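(A rough diagnostic sketch, assuming a default TripleO Mitaka HA overcloud: run on one of the surviving controllers while the rebooted node rejoins, to see whether pacemaker itself is scheduling the extra restarts and why.)

# one-shot cluster view with per-resource detail and failed operations
pcs status --full

# follow resources being stopped/started live while the node rejoins
crm_mon -R

# the policy engine's notices about scheduled stop/start actions end up in syslog
grep -i pengine /var/log/messages | tail -n 50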
*Authorization Failed: Service Unavailable (HTTP 503)* *== Glance images ==* *Service Unavailable (HTTP 503)* *== Nova managed services ==* *No handlers could be found for logger "keystoneauth.identity.generic.base"* *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* *== Nova networks ==* *No handlers could be found for logger "keystoneauth.identity.generic.base"* *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* *== Nova instance flavors ==* *No handlers could be found for logger "keystoneauth.identity.generic.base"* *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* *== Nova instances ==* *No handlers could be found for logger "keystoneauth.identity.generic.base"* *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* *[root at overcloud-controller-0 ~]# openstack-status * *Broadcast message from systemd-journald at overcloud-controller-0.localdomain (Thu 2016-08-04 09:22:31 UTC):* *haproxy[2816]: proxy neutron has no server available!* Thanks, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Thu Aug 4 10:21:21 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Thu, 4 Aug 2016 12:21:21 +0200 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: Message-ID: Hi, can you please give us more information about the environment you are using? Release, package versions and so on. -- Raoul Scarazzini rasca at redhat.com On 04/08/2016 11:34, Pedro Sousa wrote: > Hi all, > > I have an overcloud with 3 controller nodes, everything is working fine, > the problem is when I reboot one of the controllers. When the node comes > online, all the services (nova-api, neutron-server) on the other nodes > are also restarted, causing a couple of minutes of downtime until > everything is recovered. > > In the example below I restarted controller2 and I see these messages on > controller0. My question is if this is the expected behavior, because in > my opinion it shouldn't happen. 
> > *Authorization Failed: Service Unavailable (HTTP 503)* > *== Glance images ==* > *Service Unavailable (HTTP 503)* > *== Nova managed services ==* > *No handlers could be found for logger "keystoneauth.identity.generic.base"* > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > *== Nova networks ==* > *No handlers could be found for logger "keystoneauth.identity.generic.base"* > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > *== Nova instance flavors ==* > *No handlers could be found for logger "keystoneauth.identity.generic.base"* > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > *== Nova instances ==* > *No handlers could be found for logger "keystoneauth.identity.generic.base"* > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > *[root at overcloud-controller-0 ~]# openstack-status * > *Broadcast message from > systemd-journald at overcloud-controller-0.localdomain (Thu 2016-08-04 > 09:22:31 UTC):* > * > * > *haproxy[2816]: proxy neutron has no server available!* > > Thanks, > Pedro Sousa > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From pgsousa at gmail.com Thu Aug 4 10:34:15 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 4 Aug 2016 11:34:15 +0100 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: Message-ID: Hi, I use mitaka from centos sig repos: Centos 7.2 centos-release-openstack-mitaka-1-3.el7.noarch pacemaker-cli-1.1.13-10.el7_2.2.x86_64 pacemaker-1.1.13-10.el7_2.2.x86_64 pacemaker-remote-1.1.13-10.el7_2.2.x86_64 pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 pacemaker-libs-1.1.13-10.el7_2.2.x86_64 corosynclib-2.3.4-7.el7_2.3.x86_64 corosync-2.3.4-7.el7_2.3.x86_64 resource-agents-3.9.5-54.el7_2.10.x86_64 Let me know if you need more info. Thanks On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini wrote: > Hi, > can you please give us more information about the environment you are > using? Release, package versions and so on. > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 11:34, Pedro Sousa wrote: > > Hi all, > > > > I have an overcloud with 3 controller nodes, everything is working fine, > > the problem is when I reboot one of the controllers. When the node comes > > online, all the services (nova-api, neutron-server) on the other nodes > > are also restarted, causing a couple of minutes of downtime until > > everything is recovered. > > > > In the example below I restarted controller2 and I see these messages on > > controller0. My question is if this is the expected behavior, because in > > my opinion it shouldn't happen. 
> > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > *== Glance images ==* > > *Service Unavailable (HTTP 503)* > > *== Nova managed services ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *== Nova networks ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *== Nova instance flavors ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *== Nova instances ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *[root at overcloud-controller-0 ~]# openstack-status * > > *Broadcast message from > > systemd-journald at overcloud-controller-0.localdomain (Thu 2016-08-04 > > 09:22:31 UTC):* > > * > > * > > *haproxy[2816]: proxy neutron has no server available!* > > > > Thanks, > > Pedro Sousa > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Thu Aug 4 12:51:59 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Thu, 4 Aug 2016 14:51:59 +0200 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: Message-ID: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> Ok, so we are on mitaka. Here we have VIPs that are a (Optional) dependency for haproxy, which is a (Mandatory) dependency for openstack-core from which all the others (nova, neutron, cinder and so on) depends. This means that if you are rebooting a controller in which a VIP is active you will NOT have a restart of openstack-core since haproxy will not be restarted, because of the OPTIONAL constraint. So the behavior you're describing is quite strange. Maybe other components are in the game here. Can you open a bugzilla with the exact steps you're using to reproduce the problem and share the sosreports of your systems? Thanks, -- Raoul Scarazzini rasca at redhat.com On 04/08/2016 12:34, Pedro Sousa wrote: > Hi, > > I use mitaka from centos sig repos: > > Centos 7.2 > centos-release-openstack-mitaka-1-3.el7.noarch > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > pacemaker-1.1.13-10.el7_2.2.x86_64 > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > corosynclib-2.3.4-7.el7_2.3.x86_64 > corosync-2.3.4-7.el7_2.3.x86_64 > resource-agents-3.9.5-54.el7_2.10.x86_64 > > Let me know if you need more info. > > Thanks > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini > wrote: > > Hi, > can you please give us more information about the environment you are > using? Release, package versions and so on. > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 11:34, Pedro Sousa wrote: > > Hi all, > > > > I have an overcloud with 3 controller nodes, everything is working fine, > > the problem is when I reboot one of the controllers. 
When the node comes > > online, all the services (nova-api, neutron-server) on the other nodes > > are also restarted, causing a couple of minutes of downtime until > > everything is recovered. > > > > In the example below I restarted controller2 and I see these messages on > > controller0. My question is if this is the expected behavior, because in > > my opinion it shouldn't happen. > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > *== Glance images ==* > > *Service Unavailable (HTTP 503)* > > *== Nova managed services ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *== Nova networks ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *== Nova instance flavors ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *== Nova instances ==* > > *No handlers could be found for logger > "keystoneauth.identity.generic.base"* > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > *[root at overcloud-controller-0 ~]# openstack-status * > > *Broadcast message from > > systemd-journald at overcloud-controller-0.localdomain (Thu 2016-08-04 > > 09:22:31 UTC):* > > * > > * > > *haproxy[2816]: proxy neutron has no server available!* > > > > Thanks, > > Pedro Sousa > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > From pgsousa at gmail.com Thu Aug 4 13:29:50 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 4 Aug 2016 14:29:50 +0100 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> Message-ID: Hi Raoul, this only happens when the node comes back online after booting. When I stop the node with "pcs cluster stop", everything works fine, even if VIP is active on that node. Anyway I will file a bugzilla. Thanks On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini wrote: > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > dependency for haproxy, which is a (Mandatory) dependency for > openstack-core from which all the others (nova, neutron, cinder and so > on) depends. > This means that if you are rebooting a controller in which a VIP is > active you will NOT have a restart of openstack-core since haproxy will > not be restarted, because of the OPTIONAL constraint. > So the behavior you're describing is quite strange. > Maybe other components are in the game here. Can you open a bugzilla > with the exact steps you're using to reproduce the problem and share the > sosreports of your systems? 
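(A minimal sketch, assuming the default constraint set created by TripleO Mitaka: the Optional/Mandatory kinds described above can be listed directly on any controller, and the output is worth attaching to the bugzilla together with the sosreports.)

# ordering constraints, including their kind (Optional vs Mandatory)
pcs constraint order show --full

# colocation constraints, e.g. which VIPs are tied to haproxy-clone
pcs constraint colocation show --full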
> > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 12:34, Pedro Sousa wrote: > > Hi, > > > > I use mitaka from centos sig repos: > > > > Centos 7.2 > > centos-release-openstack-mitaka-1-3.el7.noarch > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > corosync-2.3.4-7.el7_2.3.x86_64 > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > Let me know if you need more info. > > > > Thanks > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini > > wrote: > > > > Hi, > > can you please give us more information about the environment you are > > using? Release, package versions and so on. > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > Hi all, > > > > > > I have an overcloud with 3 controller nodes, everything is working > fine, > > > the problem is when I reboot one of the controllers. When the node > comes > > > online, all the services (nova-api, neutron-server) on the other > nodes > > > are also restarted, causing a couple of minutes of downtime until > > > everything is recovered. > > > > > > In the example below I restarted controller2 and I see these > messages on > > > controller0. My question is if this is the expected behavior, > because in > > > my opinion it shouldn't happen. > > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > *== Glance images ==* > > > *Service Unavailable (HTTP 503)* > > > *== Nova managed services ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova networks ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instance flavors ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instances ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > *Broadcast message from > > > systemd-journald at overcloud-controller-0.localdomain (Thu > 2016-08-04 > > > 09:22:31 UTC):* > > > * > > > * > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Thu Aug 4 13:31:03 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Thu, 4 Aug 2016 15:31:03 +0200 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> Message-ID: That will be great, thank you, put me in CC so I can follow this. 
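(A small sketch for collecting the sosreports mentioned above; sos is assumed to be available from the base CentOS repos as the "sos" package, and the resulting tarball lands under /var/tmp or /tmp depending on the sos version.)

yum install -y sos
# non-interactive run on each controller; attach the generated archive to the bugzilla
sosreport --batch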
Thanks, -- Raoul Scarazzini rasca at redhat.com On 04/08/2016 15:29, Pedro Sousa wrote: > Hi Raoul, > > this only happens when the node comes back online after booting. When I > stop the node with "pcs cluster stop", everything works fine, even if > VIP is active on that node. > > Anyway I will file a bugzilla. > > Thanks > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > wrote: > > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > dependency for haproxy, which is a (Mandatory) dependency for > openstack-core from which all the others (nova, neutron, cinder and so > on) depends. > This means that if you are rebooting a controller in which a VIP is > active you will NOT have a restart of openstack-core since haproxy will > not be restarted, because of the OPTIONAL constraint. > So the behavior you're describing is quite strange. > Maybe other components are in the game here. Can you open a bugzilla > with the exact steps you're using to reproduce the problem and share the > sosreports of your systems? > > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 12:34, Pedro Sousa wrote: > > Hi, > > > > I use mitaka from centos sig repos: > > > > Centos 7.2 > > centos-release-openstack-mitaka-1-3.el7.noarch > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > corosync-2.3.4-7.el7_2.3.x86_64 > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > Let me know if you need more info. > > > > Thanks > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini > > >> wrote: > > > > Hi, > > can you please give us more information about the environment you are > > using? Release, package versions and so on. > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > Hi all, > > > > > > I have an overcloud with 3 controller nodes, everything is > working fine, > > > the problem is when I reboot one of the controllers. When > the node comes > > > online, all the services (nova-api, neutron-server) on the > other nodes > > > are also restarted, causing a couple of minutes of downtime > until > > > everything is recovered. > > > > > > In the example below I restarted controller2 and I see these > messages on > > > controller0. My question is if this is the expected > behavior, because in > > > my opinion it shouldn't happen. 
> > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > *== Glance images ==* > > > *Service Unavailable (HTTP 503)* > > > *== Nova managed services ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova networks ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instance flavors ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instances ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > *Broadcast message from > > > systemd-journald at overcloud-controller-0.localdomain (Thu > 2016-08-04 > > > 09:22:31 UTC):* > > > * > > > * > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > From whayutin at redhat.com Thu Aug 4 13:43:50 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 4 Aug 2016 09:43:50 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Wed, Aug 3, 2016 at 8:45 PM, Emilien Macchi wrote: > On Wed, Aug 3, 2016 at 7:38 PM, Wesley Hayutin > wrote: > > > > > > On Wed, Aug 3, 2016 at 6:19 PM, Emilien Macchi > wrote: > >> > >> On Wed, Aug 3, 2016 at 1:33 PM, Wesley Hayutin > >> wrote: > >> > > >> > > >> > On Wed, Aug 3, 2016 at 12:51 PM, James Slagle > >> > wrote: > >> >> > >> >> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: > >> >> > Please hear me out. > >> >> > TL;DR, Let's work upstream and make it awesome so that downstream > can > >> >> > be awesome. > >> >> > > >> >> > I've said this before but I'm going to re-iterate that I do not > >> >> > understand why there is so much effort spent around testing TripleO > >> >> > downstream. > >> >> > By downstream, I mean anything that isn't in TripleO or TripleO-CI > >> >> > proper. > >> >> > > >> >> > All this work should be done upstream to make TripleO and it's CI > >> >> > super awesome and this would trickle down for free downstream. > >> >> > > >> >> > The RDO Trunk testing pipeline is composed of two tools, today. > >> >> > The TripleO-Quickstart project [1] is a good example of an > initiative > >> >> > that started downstream but always had the intention of being > >> >> > proposed > >> >> > upstream [2] after being "incubated" and fleshed out. > >> >> > >> >> tripleo-quickstart was proposed to upstream TripleO as a replacement > >> >> for > >> >> the > >> >> virtual environment setup done by instack-virt-setup. 3rd party CI > >> >> would > >> >> be > >> >> used to gate tripleo-quickstart so that we'd be sure the virt setup > was > >> >> always > >> >> working. That was the extent of the CI scope defined in the spec. 
> That > >> >> work is > >> >> not yet completed (see work items in the spec). > >> >> > >> >> Now it seems it is a much more all encompassing CI/automation/testing > >> >> project > >> >> that is competing in scope with tripleo-ci itself. > >> > > >> > > >> > IMHO you are correct here. There has been quite a bit of discussion > >> > about > >> > removing the parts > >> > of oooq that are outside of the original blueprint to replace > >> > instack-virt-setup w/ oooq. As usual there are many different > opinions > >> > here. I think there are a lot of RDO guys that would prefer a lot of > >> > the > >> > native oooq roles stay where they are, I think that is short sighted > >> > imho. > >> > I agree that anything outside of the blueprint be removed from oooq. > >> > This > >> > would hopefully allow the upstream to be more comfortable with oooq > and > >> > allow us to really start consolidating tools. > >> > > >> > Luckily for the users that still want to use oooq as a full end-to-end > >> > solution the 3rd party roles can be used even after tearing out these > >> > native > >> > roles. > >> > > >> >> > >> >> > >> >> I'm all for consolidation of these types of tools *if* there is > >> >> interest. > >> > > >> > > >> > Roll call.. is there interest? +1 from me. > >> > > >> >> > >> >> > >> >> However, IMO, incubating these things downstream and then trying to > get > >> >> them > >> >> upstream or get upstream to adopt them is not ideal or a good > example. > >> >> The > >> >> same > >> >> topic came up and was pushed several times with khaleesi, and it just > >> >> never > >> >> happened, it was continually DOA upstream. > >> > > >> > > >> > True, however that could be a result of the downstream perceiving > >> > barriers ( > >> > real or not ) in incubating projects in upstream openstack. > >> > > >> >> > >> >> > >> >> I think it would be fairly difficult to get tripleo-ci to wholesale > >> >> adopt > >> >> tripleo-quickstart at this stage. The separate irc channel from > >> >> #tripleo > >> >> is not > >> >> conducive to consolidation on tooling and direction imo. > >> > > >> > > >> > The irc channel is easily addressed. We do seem to generate an awful > >> > amount > >> > of chatter though :) > >> > > >> >> > >> >> > >> >> The scope of quickstart is actually not fully understood by myself. > >> >> I've > >> >> also > >> >> heard from some in the upstream TripleO community as well who are > >> >> confused > >> >> by > >> >> its direction and are facing similar difficulties using its generated > >> >> bash > >> >> scripts that they'd be facing if they were just using TripleO > >> >> documentation > >> >> instead. > >> > > >> > > >> > The point of the generated bash scripts is to create rst documentation > >> > and > >> > reusable scripts for the end user. Since the documentation and the > >> > generated scripts are equivalent I would expect the same errors, > >> > problems > >> > and issues. I see this as a good thing really. We *want* the CI to > hit > >> > the > >> > same issues as those who are following the doc. > >> > > >> >> > >> >> > >> >> I do think that this sort of problem lends itself easily to one off > >> >> implementations as is quite evidenced in this thread. Everyone/group > >> >> wants > >> >> and > >> >> needs to automate something in a different way. And imo, none of > these > >> >> tools > >> >> are building end-user or operator facing interfaces, so they're not > >> >> fully > >> >> focused on building something that "just works for everyone". 
Those > >> >> interfaces > >> >> should be developed in TripleO user facing tooling anyway > >> >> (tripleoclient/openstackclient/etc). > >> >> > >> >> So, I actually think it's ok in some degree that things have been > >> >> automated > >> >> differently in different tools. Anecdotally, I suspect many users of > >> >> TripleO in > >> >> production have their own automation tools as well. And none of the > >> >> implementations mentioned in this thread would likely meet their > needs > >> >> either. > >> > > >> > > >> > This is true.. without a tool in the upstream that addresses ci, dev, > >> > test > >> > use cases across the development cycle this will continue to be the > >> > case. I > >> > suspect even with a perfect tool, it won't ever be perfect for > everyone. > >> > > >> >> > >> >> > >> >> However, if there is a desire to focus resources on consolidated > >> >> tooling > >> >> and > >> >> someone to drive it forward, then I definitely agree that the effort > >> >> needs > >> >> to > >> >> start upstream with a singular plan for tripleo-ci. From what I > gather, > >> >> that > >> >> would be some sort of alignment and reuse of tripleo-quickstart, and > >> >> then > >> >> we > >> >> could build from there. > >> > > >> > > >> > +1 > >> > > >> >> > >> >> > >> >> That could start as a discussion and plan within that community with > >> >> some > >> >> agreed on concensus around that plan. There was an initial thread on > >> >> openstack-dev related to this topic but it is stalled a bit. It could > >> >> be > >> >> continually driven to resolution via specs, the tripleo meeting, > email > >> >> or > >> >> irc > >> >> discussion until a plan is formed. > >> > > >> > > >> > +1, I think the first step is to complete the original blueprint and > >> > move > >> > on from there. > >> > I think there has also been interest in having an in person meeting at > >> > summit. > >> > > >> > Thanks! > >> > > >> >> > >> >> > >> >> -- > >> >> -- James Slagle > >> >> -- > >> >> > >> >> _______________________________________________ > >> >> rdo-list mailing list > >> >> rdo-list at redhat.com > >> >> https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > >> > > >> > > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >> I like how the discussion goes though I have some personal (and > >> probably shared) feeling that I would like to share here, more or less > >> related. > >> > >> As a TripleO core developer, I have some frustration to see that a lot > >> of people are involved in making TripleO Quickstart better, while we > >> have a few people actually working on tripleo-ci tool and try to > >> maintain upstream CI stable. > >> As a reminder, tripleo-ci tool is currently the ONLY ONE thing that > >> actually gates TripleO, even if we don't like the tool. It is right > >> now, testing TripleO upstream, everything that is not tested in there > >> will probably break one day downstream CIs. > >> Yes we have this tooling discussion here and that's awesome, but words > >> are words. I would like to see some real engagement to help TripleO CI > >> to converge into something better and not only everyone working on > >> their side. > > > > > > You have a valid point and reason to be frustrated. 
> > Is the point here that everyone downstream should use tripleo.sh or that > > everyone should be focused on ci and testing at the tripleo level? > > Not everyone should use tripleo.sh. My point is that we should move > forward with a common tool, and stop enlarging the gap between tools. > We have created (and are still doing) a technical dept where we have > multiple tools with a ton of overlap, the more we wait, more difficult > it will be to clean this up. > +1 > > >> > >> > >> Some examples: > >> - TripleO Quickstart (downstream) CI has coverage for undercloud & > >> overcloud upgrades while TripleO CI freshly has a undercloud upgrade > >> job and used to have a overcloud (minor) upgrade job (disabled now, > >> for some reasons related to our capacity to run jobs and also some > >> blockers into code itself). > >> - TripleO CI has some TripleO Heat templates that could also be re-use > >> by TripleO Quickstart (I'm working on moving them from tripleo-ci to > >> THT, WIP here: https://review.openstack.org/350775). > >> - TripleO CI deploys Ceph Jewel repository, TripleO Quickstart doesn't. > >> - (...) > > > > > > As others have mentioned, there are at least 5-10 tools in development > that > > are used to deploy tripleo in some CI fashion. Calling out > > tripleo-quickstart alone is not quite right imho. There are a number of > > tripleo devs that burn cycles on their own ci tools and maybe that is > fine > > thing to do. > > I called quickstart because that's the one I see everyday but my > frustration is about all our tools in general. > I'm actually a OOOQ user and I like this tool, really. > But as you can see, I'm also working on tripleo-ci right now because I > want TripleO CI better and I haven't seen until now some interest to > converge. > James started something cool by trying to deploy an undercloud using > OOOQ from tripleo-ci. That's a start ! We need things like this, > prototyping convergence, and see what we can do. > +1 working in multiple ci systems is painful, I know your pain! I've been playing around w/ Jame's patch [1] to see if I can help. I like Jame's approach, I think I may be missing some setup steps that nodepool provides. [1] https://review.openstack.org/#/c/348530/ > > > TripleO-Quickstart is meant to replace instack-virt-setup which it does > > quite well. The only group that was actually running instack-virt-setup > > was the RDO CI team, upstream had taken it out of the ci system. I think > > it's not unfair to say gaps have been left for other teams to fill. > > Gotcha. It was just some examples. > > >> > >> > >> We have been having this discussion for a while now but we're still > >> not making much progress here, I feel like we're in statu quo. > >> James mentioned a blueprint, I like it. We need to engage some > >> upstream discussion about this major CI refactor, like we need with > >> specs and then we'll decide if whether or not we need to change the > >> tool, and how. > > > > > > Well, this would take some leadership imho. We need some people that are > > familiar with the upstream, midstream and downstream requirements of CI. > > This was addressed at the production chain meetings initially but then > > pretty much ignored. The leaders responsible at the various stages of a > > build (upstream -> downstream ) failed to take this issue on. Here we > are > > today. > > > > Would it be acceptable by anyone.. 
IF > > > > tripleo-quickstart replaced instack-virt-setup [1] and walked through > the > > undercloud install, then handed off to tripleo.sh to deploy, upgrade, > > update, scale, validate etc??? > > That's something we can try. > > > That these two tools *would* in fact be the the official CI tools of > tripleo > > at the upstream, RDO, and at least parts of the downstream? > > My opinion on this is that upstream and downstream CI should only differ > on: > * the packages (OSP vs RDO) > * the scenarios (downstream could have customer-specific things) > And that's it. Tools should remain the same IMHO. > > > Would that help to ease the current frustration around CI? Emilien what > do > > you think? > > I spent the last months working on composable roles and I have now > more time to work on CI; $topic is definitely something where I would > like to help. > woot, I'm excited that you are freeing up and can be more involved! Thanks as usual Emilien! > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From imaslov at dispersivegroup.com Thu Aug 4 15:12:32 2016 From: imaslov at dispersivegroup.com (Ilja Maslov) Date: Thu, 4 Aug 2016 15:12:32 +0000 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> Message-ID: <4195e85cb5f94d76abacf9289cdcda11@svr2-disp-exch.dispersive.local> Hi, I've noticed similar behavior on Mitaka installed from trunk/mitaka/passed-ci. Appreciate if you could put me in CC. Additional detail is that during initial deployment, nova services, neutron agents and heat engines are registered with the short hostnames and upon controller node restart, these will all show with state=down. Probably because hosts files are re-written after the services had been started with FQDN as a first entry. I do not know to what extent pacemaker resources are monitored, but it could be related to the problem you are reporting. Cheers, Ilja -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Raoul Scarazzini Sent: Thursday, August 04, 2016 9:31 AM To: Pedro Sousa Cc: rdo-list Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior causes downtime That will be great, thank you, put me in CC so I can follow this. Thanks, -- Raoul Scarazzini rasca at redhat.com On 04/08/2016 15:29, Pedro Sousa wrote: > Hi Raoul, > > this only happens when the node comes back online after booting. When I > stop the node with "pcs cluster stop", everything works fine, even if > VIP is active on that node. > > Anyway I will file a bugzilla. > > Thanks > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > wrote: > > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > dependency for haproxy, which is a (Mandatory) dependency for > openstack-core from which all the others (nova, neutron, cinder and so > on) depends. > This means that if you are rebooting a controller in which a VIP is > active you will NOT have a restart of openstack-core since haproxy will > not be restarted, because of the OPTIONAL constraint. > So the behavior you're describing is quite strange. > Maybe other components are in the game here. Can you open a bugzilla > with the exact steps you're using to reproduce the problem and share the > sosreports of your systems? 
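
For reference, the Optional/Mandatory chain described above can be inspected directly on a controller with the pcs CLI. A minimal sketch; resource names such as haproxy-clone and openstack-core-clone are the usual TripleO Mitaka ones but may differ on a given overcloud:

    # Show ordering and colocation constraints, including their kind
    pcs constraint order show --full
    pcs constraint colocation show --full
    # Expected pattern (illustrative only):
    #   start ip-<VIP> then start haproxy-clone (kind:Optional)
    #   start haproxy-clone then start openstack-core-clone (kind:Mandatory)
    # Overall cluster and resource state
    pcs status
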
> > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 12:34, Pedro Sousa wrote: > > Hi, > > > > I use mitaka from centos sig repos: > > > > Centos 7.2 > > centos-release-openstack-mitaka-1-3.el7.noarch > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > corosync-2.3.4-7.el7_2.3.x86_64 > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > Let me know if you need more info. > > > > Thanks > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini > > >> wrote: > > > > Hi, > > can you please give us more information about the environment you are > > using? Release, package versions and so on. > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > Hi all, > > > > > > I have an overcloud with 3 controller nodes, everything is > working fine, > > > the problem is when I reboot one of the controllers. When > the node comes > > > online, all the services (nova-api, neutron-server) on the > other nodes > > > are also restarted, causing a couple of minutes of downtime > until > > > everything is recovered. > > > > > > In the example below I restarted controller2 and I see these > messages on > > > controller0. My question is if this is the expected > behavior, because in > > > my opinion it shouldn't happen. > > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > *== Glance images ==* > > > *Service Unavailable (HTTP 503)* > > > *== Nova managed services ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova networks ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instance flavors ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instances ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > *Broadcast message from > > > systemd-journald at overcloud-controller-0.localdomain (Thu > 2016-08-04 > > > 09:22:31 UTC):* > > > * > > > * > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From pgsousa at gmail.com Thu Aug 4 15:23:20 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 4 Aug 2016 16:23:20 +0100 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: <4195e85cb5f94d76abacf9289cdcda11@svr2-disp-exch.dispersive.local> References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> 
<4195e85cb5f94d76abacf9289cdcda11@svr2-disp-exch.dispersive.local> Message-ID: Hi Ilja, I noticed that too. Did you try to delete the services that are marked down and retest? Thanks On Thu, Aug 4, 2016 at 4:12 PM, Ilja Maslov wrote: > Hi, > > I've noticed similar behavior on Mitaka installed from > trunk/mitaka/passed-ci. Appreciate if you could put me in CC. > > Additional detail is that during initial deployment, nova services, > neutron agents and heat engines are registered with the short hostnames and > upon controller node restart, these will all show with state=down. > Probably because hosts files are re-written after the services had been > started with FQDN as a first entry. I do not know to what extent pacemaker > resources are monitored, but it could be related to the problem you are > reporting. > > Cheers, > Ilja > > > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Raoul Scarazzini > Sent: Thursday, August 04, 2016 9:31 AM > To: Pedro Sousa > Cc: rdo-list > Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior > causes downtime > > That will be great, thank you, put me in CC so I can follow this. > > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 15:29, Pedro Sousa wrote: > > Hi Raoul, > > > > this only happens when the node comes back online after booting. When I > > stop the node with "pcs cluster stop", everything works fine, even if > > VIP is active on that node. > > > > Anyway I will file a bugzilla. > > > > Thanks > > > > > > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > > wrote: > > > > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > > dependency for haproxy, which is a (Mandatory) dependency for > > openstack-core from which all the others (nova, neutron, cinder and > so > > on) depends. > > This means that if you are rebooting a controller in which a VIP is > > active you will NOT have a restart of openstack-core since haproxy > will > > not be restarted, because of the OPTIONAL constraint. > > So the behavior you're describing is quite strange. > > Maybe other components are in the game here. Can you open a bugzilla > > with the exact steps you're using to reproduce the problem and share > the > > sosreports of your systems? > > > > Thanks, > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > > > On 04/08/2016 12:34, Pedro Sousa wrote: > > > Hi, > > > > > > I use mitaka from centos sig repos: > > > > > > Centos 7.2 > > > centos-release-openstack-mitaka-1-3.el7.noarch > > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > > corosync-2.3.4-7.el7_2.3.x86_64 > > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > > > Let me know if you need more info. > > > > > > Thanks > > > > > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini < > rasca at redhat.com > > > >> wrote: > > > > > > Hi, > > > can you please give us more information about the environment > you are > > > using? Release, package versions and so on. 
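
For anyone hitting the same behavior, the information being asked for here can be gathered in one pass. A minimal sketch (the package pattern is only an example, and sosreport options vary by version):

    # Release and cluster-related package versions
    cat /etc/redhat-release
    rpm -qa | egrep 'pacemaker|corosync|resource-agents|haproxy|openstack-nova|openstack-neutron'
    # Cluster state on the controller being rebooted
    pcs status
    # Full diagnostic bundle to attach to the bugzilla
    sosreport
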
> > > > > > -- > > > Raoul Scarazzini > > > rasca at redhat.com > > > > > > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > > Hi all, > > > > > > > > I have an overcloud with 3 controller nodes, everything is > > working fine, > > > > the problem is when I reboot one of the controllers. When > > the node comes > > > > online, all the services (nova-api, neutron-server) on the > > other nodes > > > > are also restarted, causing a couple of minutes of downtime > > until > > > > everything is recovered. > > > > > > > > In the example below I restarted controller2 and I see these > > messages on > > > > controller0. My question is if this is the expected > > behavior, because in > > > > my opinion it shouldn't happen. > > > > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > > *== Glance images ==* > > > > *Service Unavailable (HTTP 503)* > > > > *== Nova managed services ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova networks ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova instance flavors ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova instances ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > > *Broadcast message from > > > > systemd-journald at overcloud-controller-0.localdomain (Thu > > 2016-08-04 > > > > 09:22:31 UTC):* > > > > * > > > > * > > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > > > Thanks, > > > > Pedro Sousa > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > rdo-list mailing list > > > > rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com rdo-list-unsubscribe at redhat.com> > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Thu Aug 4 15:24:01 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Thu, 4 Aug 2016 17:24:01 +0200 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> Message-ID: I saw the bugzilla, but I don't have right now a Mitaka env to test. Can you please upload sosreports? Thanks, -- Raoul Scarazzini rasca at redhat.com On 04/08/2016 15:29, Pedro Sousa wrote: > Hi Raoul, > > this only happens when the node comes back online after booting. When I > stop the node with "pcs cluster stop", everything works fine, even if > VIP is active on that node. > > Anyway I will file a bugzilla. > > Thanks > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > wrote: > > Ok, so we are on mitaka. 
Here we have VIPs that are a (Optional) > dependency for haproxy, which is a (Mandatory) dependency for > openstack-core from which all the others (nova, neutron, cinder and so > on) depends. > This means that if you are rebooting a controller in which a VIP is > active you will NOT have a restart of openstack-core since haproxy will > not be restarted, because of the OPTIONAL constraint. > So the behavior you're describing is quite strange. > Maybe other components are in the game here. Can you open a bugzilla > with the exact steps you're using to reproduce the problem and share the > sosreports of your systems? > > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 12:34, Pedro Sousa wrote: > > Hi, > > > > I use mitaka from centos sig repos: > > > > Centos 7.2 > > centos-release-openstack-mitaka-1-3.el7.noarch > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > corosync-2.3.4-7.el7_2.3.x86_64 > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > Let me know if you need more info. > > > > Thanks > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini > > >> wrote: > > > > Hi, > > can you please give us more information about the environment you are > > using? Release, package versions and so on. > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > Hi all, > > > > > > I have an overcloud with 3 controller nodes, everything is > working fine, > > > the problem is when I reboot one of the controllers. When > the node comes > > > online, all the services (nova-api, neutron-server) on the > other nodes > > > are also restarted, causing a couple of minutes of downtime > until > > > everything is recovered. > > > > > > In the example below I restarted controller2 and I see these > messages on > > > controller0. My question is if this is the expected > behavior, because in > > > my opinion it shouldn't happen. 
> > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > *== Glance images ==* > > > *Service Unavailable (HTTP 503)* > > > *== Nova managed services ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova networks ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instance flavors ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instances ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > *Broadcast message from > > > systemd-journald at overcloud-controller-0.localdomain (Thu > 2016-08-04 > > > 09:22:31 UTC):* > > > * > > > * > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > From imaslov at dispersivegroup.com Thu Aug 4 15:32:04 2016 From: imaslov at dispersivegroup.com (Ilja Maslov) Date: Thu, 4 Aug 2016 15:32:04 +0000 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> <4195e85cb5f94d76abacf9289cdcda11@svr2-disp-exch.dispersive.local> Message-ID: <1aed738bd93843c8b68ba17adc0b3685@svr2-disp-exch.dispersive.local> Not on this fresh install, but what I saw few weeks back was that when controller nodes restart, I see services created with FQDN names that were up and I was able to safely clean the original services with short host names. But I haven?t re-tested controller restarts afterwards. With my fresh install, rabbitmq is not coming up upon reboot (?unknown error? (1)), so I need to fix this first before I?m able to proceed with testing. I?ll let you know how it goes. Ilja From: Pedro Sousa [mailto:pgsousa at gmail.com] Sent: Thursday, August 04, 2016 11:23 AM To: Ilja Maslov Cc: Raoul Scarazzini ; rdo-list Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior causes downtime Hi Ilja, I noticed that too. Did you try to delete the services that are marked down and retest? Thanks On Thu, Aug 4, 2016 at 4:12 PM, Ilja Maslov > wrote: Hi, I've noticed similar behavior on Mitaka installed from trunk/mitaka/passed-ci. Appreciate if you could put me in CC. Additional detail is that during initial deployment, nova services, neutron agents and heat engines are registered with the short hostnames and upon controller node restart, these will all show with state=down. Probably because hosts files are re-written after the services had been started with FQDN as a first entry. I do not know to what extent pacemaker resources are monitored, but it could be related to the problem you are reporting. 
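
To see the short-hostname versus FQDN duplication Ilja describes, the registrations can be compared against the node's own naming. A minimal sketch with the Mitaka-era clients; overcloudrc is the usual credentials file on the undercloud and is an assumption here:

    # What the controller calls itself, and what /etc/hosts resolves first
    hostname -s; hostname -f
    grep -w "$(hostname -s)" /etc/hosts
    # Registered services/agents; stale short-name entries usually show "down"
    source ~/overcloudrc
    nova service-list
    neutron agent-list
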
Cheers, Ilja -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Raoul Scarazzini Sent: Thursday, August 04, 2016 9:31 AM To: Pedro Sousa > Cc: rdo-list > Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior causes downtime That will be great, thank you, put me in CC so I can follow this. Thanks, -- Raoul Scarazzini rasca at redhat.com On 04/08/2016 15:29, Pedro Sousa wrote: > Hi Raoul, > > this only happens when the node comes back online after booting. When I > stop the node with "pcs cluster stop", everything works fine, even if > VIP is active on that node. > > Anyway I will file a bugzilla. > > Thanks > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > >> wrote: > > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > dependency for haproxy, which is a (Mandatory) dependency for > openstack-core from which all the others (nova, neutron, cinder and so > on) depends. > This means that if you are rebooting a controller in which a VIP is > active you will NOT have a restart of openstack-core since haproxy will > not be restarted, because of the OPTIONAL constraint. > So the behavior you're describing is quite strange. > Maybe other components are in the game here. Can you open a bugzilla > with the exact steps you're using to reproduce the problem and share the > sosreports of your systems? > > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > > On 04/08/2016 12:34, Pedro Sousa wrote: > > Hi, > > > > I use mitaka from centos sig repos: > > > > Centos 7.2 > > centos-release-openstack-mitaka-1-3.el7.noarch > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > corosync-2.3.4-7.el7_2.3.x86_64 > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > Let me know if you need more info. > > > > Thanks > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini > > > >>> wrote: > > > > Hi, > > can you please give us more information about the environment you are > > using? Release, package versions and so on. > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > >> > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > Hi all, > > > > > > I have an overcloud with 3 controller nodes, everything is > working fine, > > > the problem is when I reboot one of the controllers. When > the node comes > > > online, all the services (nova-api, neutron-server) on the > other nodes > > > are also restarted, causing a couple of minutes of downtime > until > > > everything is recovered. > > > > > > In the example below I restarted controller2 and I see these > messages on > > > controller0. My question is if this is the expected > behavior, because in > > > my opinion it shouldn't happen. 
> > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > *== Glance images ==* > > > *Service Unavailable (HTTP 503)* > > > *== Nova managed services ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova networks ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instance flavors ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instances ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > *Broadcast message from > > > systemd-journald at overcloud-controller-0.localdomain (Thu > 2016-08-04 > > > 09:22:31 UTC):* > > > * > > > * > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > >> > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > >> > > > > > > > > > _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgsousa at gmail.com Thu Aug 4 15:37:38 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 4 Aug 2016 16:37:38 +0100 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: <1aed738bd93843c8b68ba17adc0b3685@svr2-disp-exch.dispersive.local> References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> <4195e85cb5f94d76abacf9289cdcda11@svr2-disp-exch.dispersive.local> <1aed738bd93843c8b68ba17adc0b3685@svr2-disp-exch.dispersive.local> Message-ID: I'll test. For the rabbitmq issue you need this patch: https://github.com/ClusterLabs/resource-agents/commit/1d15f54923a77969fb035313eb979ee6efb3470c Had the same problem too :) On Thu, Aug 4, 2016 at 4:32 PM, Ilja Maslov wrote: > Not on this fresh install, but what I saw few weeks back was that when > controller nodes restart, I see services created with FQDN names that were > up and I was able to safely clean the original services with short host > names. But I haven?t re-tested controller restarts afterwards. > > > > With my fresh install, rabbitmq is not coming up upon reboot (?unknown > error? (1)), so I need to fix this first before I?m able to proceed with > testing. I?ll let you know how it goes. > > > > Ilja > > > > *From:* Pedro Sousa [mailto:pgsousa at gmail.com] > *Sent:* Thursday, August 04, 2016 11:23 AM > *To:* Ilja Maslov > *Cc:* Raoul Scarazzini ; rdo-list > > *Subject:* Re: [rdo-list] Overcloud pacemaker services restart behavior > causes downtime > > > > Hi Ilja, > > > > I noticed that too. Did you try to delete the services that are marked > down and retest? > > > > Thanks > > > > On Thu, Aug 4, 2016 at 4:12 PM, Ilja Maslov > wrote: > > Hi, > > I've noticed similar behavior on Mitaka installed from > trunk/mitaka/passed-ci. Appreciate if you could put me in CC. 
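
Until a packaged resource-agents build carries that commit, one way to try it on an already-deployed controller is to patch the OCF agent in place. A sketch only: verify the agent path and the patch strip level on your install, and repeat on every controller:

    # Back up the shipped agent, apply the upstream commit, let pacemaker retry
    cd /usr/lib/ocf/resource.d/heartbeat
    cp rabbitmq-cluster rabbitmq-cluster.orig
    curl -L -o /tmp/rabbitmq-cluster.patch \
        https://github.com/ClusterLabs/resource-agents/commit/1d15f54923a77969fb035313eb979ee6efb3470c.patch
    patch -p2 rabbitmq-cluster < /tmp/rabbitmq-cluster.patch   # adjust -p if the hunk paths differ
    pcs resource cleanup rabbitmq   # resource name as created by TripleO; confirm with "pcs status"
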
> > Additional detail is that during initial deployment, nova services, > neutron agents and heat engines are registered with the short hostnames and > upon controller node restart, these will all show with state=down. > Probably because hosts files are re-written after the services had been > started with FQDN as a first entry. I do not know to what extent pacemaker > resources are monitored, but it could be related to the problem you are > reporting. > > Cheers, > Ilja > > > > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Raoul Scarazzini > Sent: Thursday, August 04, 2016 9:31 AM > To: Pedro Sousa > Cc: rdo-list > Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior > causes downtime > > That will be great, thank you, put me in CC so I can follow this. > > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 15:29, Pedro Sousa wrote: > > Hi Raoul, > > > > this only happens when the node comes back online after booting. When I > > stop the node with "pcs cluster stop", everything works fine, even if > > VIP is active on that node. > > > > Anyway I will file a bugzilla. > > > > Thanks > > > > > > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > > wrote: > > > > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > > dependency for haproxy, which is a (Mandatory) dependency for > > openstack-core from which all the others (nova, neutron, cinder and > so > > on) depends. > > This means that if you are rebooting a controller in which a VIP is > > active you will NOT have a restart of openstack-core since haproxy > will > > not be restarted, because of the OPTIONAL constraint. > > So the behavior you're describing is quite strange. > > Maybe other components are in the game here. Can you open a bugzilla > > with the exact steps you're using to reproduce the problem and share > the > > sosreports of your systems? > > > > Thanks, > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > > > On 04/08/2016 12:34, Pedro Sousa wrote: > > > Hi, > > > > > > I use mitaka from centos sig repos: > > > > > > Centos 7.2 > > > centos-release-openstack-mitaka-1-3.el7.noarch > > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > > corosync-2.3.4-7.el7_2.3.x86_64 > > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > > > Let me know if you need more info. > > > > > > Thanks > > > > > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini < > rasca at redhat.com > > > >> wrote: > > > > > > Hi, > > > can you please give us more information about the environment > you are > > > using? Release, package versions and so on. > > > > > > -- > > > Raoul Scarazzini > > > rasca at redhat.com > > > > > > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > > Hi all, > > > > > > > > I have an overcloud with 3 controller nodes, everything is > > working fine, > > > > the problem is when I reboot one of the controllers. When > > the node comes > > > > online, all the services (nova-api, neutron-server) on the > > other nodes > > > > are also restarted, causing a couple of minutes of downtime > > until > > > > everything is recovered. > > > > > > > > In the example below I restarted controller2 and I see these > > messages on > > > > controller0. 
My question is if this is the expected > > behavior, because in > > > > my opinion it shouldn't happen. > > > > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > > *== Glance images ==* > > > > *Service Unavailable (HTTP 503)* > > > > *== Nova managed services ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova networks ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova instance flavors ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova instances ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > > *Broadcast message from > > > > systemd-journald at overcloud-controller-0.localdomain (Thu > > 2016-08-04 > > > > 09:22:31 UTC):* > > > > * > > > > * > > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > > > Thanks, > > > > Pedro Sousa > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > rdo-list mailing list > > > > rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com rdo-list-unsubscribe at redhat.com> > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgsousa at gmail.com Thu Aug 4 16:02:26 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 4 Aug 2016 17:02:26 +0100 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: <1aed738bd93843c8b68ba17adc0b3685@svr2-disp-exch.dispersive.local> References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> <4195e85cb5f94d76abacf9289cdcda11@svr2-disp-exch.dispersive.local> <1aed738bd93843c8b68ba17adc0b3685@svr2-disp-exch.dispersive.local> Message-ID: Hi, I've deleted the nova and neutron services but the issue persists, so I guess it's not related. Filing the sosreport. Thanks On Thu, Aug 4, 2016 at 4:32 PM, Ilja Maslov wrote: > Not on this fresh install, but what I saw few weeks back was that when > controller nodes restart, I see services created with FQDN names that were > up and I was able to safely clean the original services with short host > names. But I haven?t re-tested controller restarts afterwards. > > > > With my fresh install, rabbitmq is not coming up upon reboot (?unknown > error? (1)), so I need to fix this first before I?m able to proceed with > testing. I?ll let you know how it goes. > > > > Ilja > > > > *From:* Pedro Sousa [mailto:pgsousa at gmail.com] > *Sent:* Thursday, August 04, 2016 11:23 AM > *To:* Ilja Maslov > *Cc:* Raoul Scarazzini ; rdo-list > > *Subject:* Re: [rdo-list] Overcloud pacemaker services restart behavior > causes downtime > > > > Hi Ilja, > > > > I noticed that too. Did you try to delete the services that are marked > down and retest? 
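
For anyone following along, deleting the services that are marked down amounts to removing the stale registrations by ID. A minimal sketch with the Mitaka-era clients; the IDs are placeholders taken from the list output:

    source ~/overcloudrc
    # Identify entries in state "down" (typically the short-hostname duplicates)
    nova service-list
    neutron agent-list
    # Remove them by ID
    nova service-delete <service-id>
    neutron agent-delete <agent-id>
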
> > > > Thanks > > > > On Thu, Aug 4, 2016 at 4:12 PM, Ilja Maslov > wrote: > > Hi, > > I've noticed similar behavior on Mitaka installed from > trunk/mitaka/passed-ci. Appreciate if you could put me in CC. > > Additional detail is that during initial deployment, nova services, > neutron agents and heat engines are registered with the short hostnames and > upon controller node restart, these will all show with state=down. > Probably because hosts files are re-written after the services had been > started with FQDN as a first entry. I do not know to what extent pacemaker > resources are monitored, but it could be related to the problem you are > reporting. > > Cheers, > Ilja > > > > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Raoul Scarazzini > Sent: Thursday, August 04, 2016 9:31 AM > To: Pedro Sousa > Cc: rdo-list > Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior > causes downtime > > That will be great, thank you, put me in CC so I can follow this. > > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > On 04/08/2016 15:29, Pedro Sousa wrote: > > Hi Raoul, > > > > this only happens when the node comes back online after booting. When I > > stop the node with "pcs cluster stop", everything works fine, even if > > VIP is active on that node. > > > > Anyway I will file a bugzilla. > > > > Thanks > > > > > > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > > wrote: > > > > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > > dependency for haproxy, which is a (Mandatory) dependency for > > openstack-core from which all the others (nova, neutron, cinder and > so > > on) depends. > > This means that if you are rebooting a controller in which a VIP is > > active you will NOT have a restart of openstack-core since haproxy > will > > not be restarted, because of the OPTIONAL constraint. > > So the behavior you're describing is quite strange. > > Maybe other components are in the game here. Can you open a bugzilla > > with the exact steps you're using to reproduce the problem and share > the > > sosreports of your systems? > > > > Thanks, > > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > > > On 04/08/2016 12:34, Pedro Sousa wrote: > > > Hi, > > > > > > I use mitaka from centos sig repos: > > > > > > Centos 7.2 > > > centos-release-openstack-mitaka-1-3.el7.noarch > > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > > corosync-2.3.4-7.el7_2.3.x86_64 > > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > > > Let me know if you need more info. > > > > > > Thanks > > > > > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini < > rasca at redhat.com > > > >> wrote: > > > > > > Hi, > > > can you please give us more information about the environment > you are > > > using? Release, package versions and so on. > > > > > > -- > > > Raoul Scarazzini > > > rasca at redhat.com > > > > > > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > > Hi all, > > > > > > > > I have an overcloud with 3 controller nodes, everything is > > working fine, > > > > the problem is when I reboot one of the controllers. 
When > > the node comes > > > > online, all the services (nova-api, neutron-server) on the > > other nodes > > > > are also restarted, causing a couple of minutes of downtime > > until > > > > everything is recovered. > > > > > > > > In the example below I restarted controller2 and I see these > > messages on > > > > controller0. My question is if this is the expected > > behavior, because in > > > > my opinion it shouldn't happen. > > > > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > > *== Glance images ==* > > > > *Service Unavailable (HTTP 503)* > > > > *== Nova managed services ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova networks ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova instance flavors ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *== Nova instances ==* > > > > *No handlers could be found for logger > > > "keystoneauth.identity.generic.base"* > > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > > *Broadcast message from > > > > systemd-journald at overcloud-controller-0.localdomain (Thu > > 2016-08-04 > > > > 09:22:31 UTC):* > > > > * > > > > * > > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > > > Thanks, > > > > Pedro Sousa > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > rdo-list mailing list > > > > rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com rdo-list-unsubscribe at redhat.com> > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Aug 4 18:55:11 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 4 Aug 2016 14:55:11 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Thu, Aug 4, 2016 at 9:43 AM, Wesley Hayutin wrote: > > > On Wed, Aug 3, 2016 at 8:45 PM, Emilien Macchi wrote: > >> On Wed, Aug 3, 2016 at 7:38 PM, Wesley Hayutin >> wrote: >> > >> > >> > On Wed, Aug 3, 2016 at 6:19 PM, Emilien Macchi >> wrote: >> >> >> >> On Wed, Aug 3, 2016 at 1:33 PM, Wesley Hayutin >> >> wrote: >> >> > >> >> > >> >> > On Wed, Aug 3, 2016 at 12:51 PM, James Slagle >> >> > wrote: >> >> >> >> >> >> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard wrote: >> >> >> > Please hear me out. >> >> >> > TL;DR, Let's work upstream and make it awesome so that downstream >> can >> >> >> > be awesome. >> >> >> > >> >> >> > I've said this before but I'm going to re-iterate that I do not >> >> >> > understand why there is so much effort spent around testing >> TripleO >> >> >> > downstream. 
>> >> >> > By downstream, I mean anything that isn't in TripleO or TripleO-CI >> >> >> > proper. >> >> >> > >> >> >> > All this work should be done upstream to make TripleO and it's CI >> >> >> > super awesome and this would trickle down for free downstream. >> >> >> > >> >> >> > The RDO Trunk testing pipeline is composed of two tools, today. >> >> >> > The TripleO-Quickstart project [1] is a good example of an >> initiative >> >> >> > that started downstream but always had the intention of being >> >> >> > proposed >> >> >> > upstream [2] after being "incubated" and fleshed out. >> >> >> >> >> >> tripleo-quickstart was proposed to upstream TripleO as a replacement >> >> >> for >> >> >> the >> >> >> virtual environment setup done by instack-virt-setup. 3rd party CI >> >> >> would >> >> >> be >> >> >> used to gate tripleo-quickstart so that we'd be sure the virt setup >> was >> >> >> always >> >> >> working. That was the extent of the CI scope defined in the spec. >> That >> >> >> work is >> >> >> not yet completed (see work items in the spec). >> >> >> >> >> >> Now it seems it is a much more all encompassing >> CI/automation/testing >> >> >> project >> >> >> that is competing in scope with tripleo-ci itself. >> >> > >> >> > >> >> > IMHO you are correct here. There has been quite a bit of discussion >> >> > about >> >> > removing the parts >> >> > of oooq that are outside of the original blueprint to replace >> >> > instack-virt-setup w/ oooq. As usual there are many different >> opinions >> >> > here. I think there are a lot of RDO guys that would prefer a lot of >> >> > the >> >> > native oooq roles stay where they are, I think that is short sighted >> >> > imho. >> >> > I agree that anything outside of the blueprint be removed from oooq. >> >> > This >> >> > would hopefully allow the upstream to be more comfortable with oooq >> and >> >> > allow us to really start consolidating tools. >> >> > >> >> > Luckily for the users that still want to use oooq as a full >> end-to-end >> >> > solution the 3rd party roles can be used even after tearing out these >> >> > native >> >> > roles. >> >> > >> >> >> >> >> >> >> >> >> I'm all for consolidation of these types of tools *if* there is >> >> >> interest. >> >> > >> >> > >> >> > Roll call.. is there interest? +1 from me. >> >> > >> >> >> >> >> >> >> >> >> However, IMO, incubating these things downstream and then trying to >> get >> >> >> them >> >> >> upstream or get upstream to adopt them is not ideal or a good >> example. >> >> >> The >> >> >> same >> >> >> topic came up and was pushed several times with khaleesi, and it >> just >> >> >> never >> >> >> happened, it was continually DOA upstream. >> >> > >> >> > >> >> > True, however that could be a result of the downstream perceiving >> >> > barriers ( >> >> > real or not ) in incubating projects in upstream openstack. >> >> > >> >> >> >> >> >> >> >> >> I think it would be fairly difficult to get tripleo-ci to wholesale >> >> >> adopt >> >> >> tripleo-quickstart at this stage. The separate irc channel from >> >> >> #tripleo >> >> >> is not >> >> >> conducive to consolidation on tooling and direction imo. >> >> > >> >> > >> >> > The irc channel is easily addressed. We do seem to generate an awful >> >> > amount >> >> > of chatter though :) >> >> > >> >> >> >> >> >> >> >> >> The scope of quickstart is actually not fully understood by myself. 
>> >> >> I've >> >> >> also >> >> >> heard from some in the upstream TripleO community as well who are >> >> >> confused >> >> >> by >> >> >> its direction and are facing similar difficulties using its >> generated >> >> >> bash >> >> >> scripts that they'd be facing if they were just using TripleO >> >> >> documentation >> >> >> instead. >> >> > >> >> > >> >> > The point of the generated bash scripts is to create rst >> documentation >> >> > and >> >> > reusable scripts for the end user. Since the documentation and the >> >> > generated scripts are equivalent I would expect the same errors, >> >> > problems >> >> > and issues. I see this as a good thing really. We *want* the CI to >> hit >> >> > the >> >> > same issues as those who are following the doc. >> >> > >> >> >> >> >> >> >> >> >> I do think that this sort of problem lends itself easily to one off >> >> >> implementations as is quite evidenced in this thread. Everyone/group >> >> >> wants >> >> >> and >> >> >> needs to automate something in a different way. And imo, none of >> these >> >> >> tools >> >> >> are building end-user or operator facing interfaces, so they're not >> >> >> fully >> >> >> focused on building something that "just works for everyone". Those >> >> >> interfaces >> >> >> should be developed in TripleO user facing tooling anyway >> >> >> (tripleoclient/openstackclient/etc). >> >> >> >> >> >> So, I actually think it's ok in some degree that things have been >> >> >> automated >> >> >> differently in different tools. Anecdotally, I suspect many users of >> >> >> TripleO in >> >> >> production have their own automation tools as well. And none of the >> >> >> implementations mentioned in this thread would likely meet their >> needs >> >> >> either. >> >> > >> >> > >> >> > This is true.. without a tool in the upstream that addresses ci, >> dev, >> >> > test >> >> > use cases across the development cycle this will continue to be the >> >> > case. I >> >> > suspect even with a perfect tool, it won't ever be perfect for >> everyone. >> >> > >> >> >> >> >> >> >> >> >> However, if there is a desire to focus resources on consolidated >> >> >> tooling >> >> >> and >> >> >> someone to drive it forward, then I definitely agree that the effort >> >> >> needs >> >> >> to >> >> >> start upstream with a singular plan for tripleo-ci. From what I >> gather, >> >> >> that >> >> >> would be some sort of alignment and reuse of tripleo-quickstart, and >> >> >> then >> >> >> we >> >> >> could build from there. >> >> > >> >> > >> >> > +1 >> >> > >> >> >> >> >> >> >> >> >> That could start as a discussion and plan within that community with >> >> >> some >> >> >> agreed on concensus around that plan. There was an initial thread on >> >> >> openstack-dev related to this topic but it is stalled a bit. It >> could >> >> >> be >> >> >> continually driven to resolution via specs, the tripleo meeting, >> email >> >> >> or >> >> >> irc >> >> >> discussion until a plan is formed. >> >> > >> >> > >> >> > +1, I think the first step is to complete the original blueprint and >> >> > move >> >> > on from there. >> >> > I think there has also been interest in having an in person meeting >> at >> >> > summit. >> >> > >> >> > Thanks! 
>> >> > >> >> >> >> >> >> >> >> >> -- >> >> >> -- James Slagle >> >> >> -- >> >> >> >> >> >> _______________________________________________ >> >> >> rdo-list mailing list >> >> >> rdo-list at redhat.com >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > >> >> > >> >> > >> >> > _______________________________________________ >> >> > rdo-list mailing list >> >> > rdo-list at redhat.com >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> I like how the discussion goes though I have some personal (and >> >> probably shared) feeling that I would like to share here, more or less >> >> related. >> >> >> >> As a TripleO core developer, I have some frustration to see that a lot >> >> of people are involved in making TripleO Quickstart better, while we >> >> have a few people actually working on tripleo-ci tool and try to >> >> maintain upstream CI stable. >> >> As a reminder, tripleo-ci tool is currently the ONLY ONE thing that >> >> actually gates TripleO, even if we don't like the tool. It is right >> >> now, testing TripleO upstream, everything that is not tested in there >> >> will probably break one day downstream CIs. >> >> Yes we have this tooling discussion here and that's awesome, but words >> >> are words. I would like to see some real engagement to help TripleO CI >> >> to converge into something better and not only everyone working on >> >> their side. >> > >> > >> > You have a valid point and reason to be frustrated. >> > Is the point here that everyone downstream should use tripleo.sh or that >> > everyone should be focused on ci and testing at the tripleo level? >> >> Not everyone should use tripleo.sh. My point is that we should move >> forward with a common tool, and stop enlarging the gap between tools. >> We have created (and are still doing) a technical dept where we have >> multiple tools with a ton of overlap, the more we wait, more difficult >> it will be to clean this up. >> > > +1 > >> >> >> >> >> >> >> Some examples: >> >> - TripleO Quickstart (downstream) CI has coverage for undercloud & >> >> overcloud upgrades while TripleO CI freshly has a undercloud upgrade >> >> job and used to have a overcloud (minor) upgrade job (disabled now, >> >> for some reasons related to our capacity to run jobs and also some >> >> blockers into code itself). >> >> - TripleO CI has some TripleO Heat templates that could also be re-use >> >> by TripleO Quickstart (I'm working on moving them from tripleo-ci to >> >> THT, WIP here: https://review.openstack.org/350775). >> >> - TripleO CI deploys Ceph Jewel repository, TripleO Quickstart doesn't. >> >> - (...) >> > >> > >> > As others have mentioned, there are at least 5-10 tools in development >> that >> > are used to deploy tripleo in some CI fashion. Calling out >> > tripleo-quickstart alone is not quite right imho. There are a number >> of >> > tripleo devs that burn cycles on their own ci tools and maybe that is >> fine >> > thing to do. >> >> I called quickstart because that's the one I see everyday but my >> frustration is about all our tools in general. >> I'm actually a OOOQ user and I like this tool, really. >> But as you can see, I'm also working on tripleo-ci right now because I >> want TripleO CI better and I haven't seen until now some interest to >> converge. >> James started something cool by trying to deploy an undercloud using >> OOOQ from tripleo-ci. 
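
For readers who have not used it, the hand-off being proposed is roughly this sequence of tripleo.sh stages on the undercloud host. The flag names below are recalled from the Mitaka-era script in tripleo-ci and may have changed, so treat them as assumptions and check the script's usage output:

    # After quickstart (or instack-virt-setup) has prepared the undercloud host
    ./tripleo.sh --repo-setup
    ./tripleo.sh --undercloud
    ./tripleo.sh --overcloud-images
    ./tripleo.sh --register-nodes
    ./tripleo.sh --introspect-nodes
    ./tripleo.sh --overcloud-deploy
    ./tripleo.sh --overcloud-pingtest   # basic post-deploy validation
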
That's a start ! We need things like this, >> prototyping convergence, and see what we can do. >> > > +1 working in multiple ci systems is painful, I know your pain! > I've been playing around w/ Jame's patch [1] to see if I can help. > I like Jame's approach, I think I may be missing some setup steps that > nodepool provides. > > [1] https://review.openstack.org/#/c/348530/ > > >> >> > TripleO-Quickstart is meant to replace instack-virt-setup which it does >> > quite well. The only group that was actually running >> instack-virt-setup >> > was the RDO CI team, upstream had taken it out of the ci system. I >> think >> > it's not unfair to say gaps have been left for other teams to fill. >> >> Gotcha. It was just some examples. >> >> >> >> >> >> >> We have been having this discussion for a while now but we're still >> >> not making much progress here, I feel like we're in statu quo. >> >> James mentioned a blueprint, I like it. We need to engage some >> >> upstream discussion about this major CI refactor, like we need with >> >> specs and then we'll decide if whether or not we need to change the >> >> tool, and how. >> > >> > >> > Well, this would take some leadership imho. We need some people that >> are >> > familiar with the upstream, midstream and downstream requirements of CI. >> > This was addressed at the production chain meetings initially but then >> > pretty much ignored. The leaders responsible at the various stages of >> a >> > build (upstream -> downstream ) failed to take this issue on. Here we >> are >> > today. >> > >> > Would it be acceptable by anyone.. IF >> > >> > tripleo-quickstart replaced instack-virt-setup [1] and walked through >> the >> > undercloud install, then handed off to tripleo.sh to deploy, upgrade, >> > update, scale, validate etc??? >> >> That's something we can try. >> >> > That these two tools *would* in fact be the the official CI tools of >> tripleo >> > at the upstream, RDO, and at least parts of the downstream? >> >> My opinion on this is that upstream and downstream CI should only differ >> on: >> * the packages (OSP vs RDO) >> * the scenarios (downstream could have customer-specific things) >> And that's it. Tools should remain the same IMHO. >> >> > Would that help to ease the current frustration around CI? Emilien what >> do >> > you think? >> >> I spent the last months working on composable roles and I have now >> more time to work on CI; $topic is definitely something where I would >> like to help. >> > > woot, I'm excited that you are freeing up and can be more involved! > Thanks as usual Emilien! > > >> -- >> Emilien Macchi >> > > I would add one thing.. If there are folks out there that rely on CI or CI tools and need to be part of this process than please speak up! If you have tools, ideas, requirements now's a pretty good time to be verbose about it. Some of you already have, some have not. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dyasny at redhat.com Thu Aug 4 20:24:50 2016 From: dyasny at redhat.com (Dan Yasny) Date: Thu, 4 Aug 2016 16:24:50 -0400 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <1470125532.18770.12.camel@ocf.co.uk> <20160803165119.GC25838@localhost.localdomain> Message-ID: On Thu, Aug 4, 2016 at 2:55 PM, Wesley Hayutin wrote: > > > > On Thu, Aug 4, 2016 at 9:43 AM, Wesley Hayutin > wrote: > >> >> >> On Wed, Aug 3, 2016 at 8:45 PM, Emilien Macchi >> wrote: >> >>> On Wed, Aug 3, 2016 at 7:38 PM, Wesley Hayutin >>> wrote: >>> > >>> > >>> > On Wed, Aug 3, 2016 at 6:19 PM, Emilien Macchi >>> wrote: >>> >> >>> >> On Wed, Aug 3, 2016 at 1:33 PM, Wesley Hayutin >>> >> wrote: >>> >> > >>> >> > >>> >> > On Wed, Aug 3, 2016 at 12:51 PM, James Slagle >>> >> > wrote: >>> >> >> >>> >> >> On Wed, Aug 03, 2016 at 11:36:57AM -0400, David Moreau Simard >>> wrote: >>> >> >> > Please hear me out. >>> >> >> > TL;DR, Let's work upstream and make it awesome so that >>> downstream can >>> >> >> > be awesome. >>> >> >> > >>> >> >> > I've said this before but I'm going to re-iterate that I do not >>> >> >> > understand why there is so much effort spent around testing >>> TripleO >>> >> >> > downstream. >>> >> >> > By downstream, I mean anything that isn't in TripleO or >>> TripleO-CI >>> >> >> > proper. >>> >> >> > >>> >> >> > All this work should be done upstream to make TripleO and it's CI >>> >> >> > super awesome and this would trickle down for free downstream. >>> >> >> > >>> >> >> > The RDO Trunk testing pipeline is composed of two tools, today. >>> >> >> > The TripleO-Quickstart project [1] is a good example of an >>> initiative >>> >> >> > that started downstream but always had the intention of being >>> >> >> > proposed >>> >> >> > upstream [2] after being "incubated" and fleshed out. >>> >> >> >>> >> >> tripleo-quickstart was proposed to upstream TripleO as a >>> replacement >>> >> >> for >>> >> >> the >>> >> >> virtual environment setup done by instack-virt-setup. 3rd party CI >>> >> >> would >>> >> >> be >>> >> >> used to gate tripleo-quickstart so that we'd be sure the virt >>> setup was >>> >> >> always >>> >> >> working. That was the extent of the CI scope defined in the spec. >>> That >>> >> >> work is >>> >> >> not yet completed (see work items in the spec). >>> >> >> >>> >> >> Now it seems it is a much more all encompassing >>> CI/automation/testing >>> >> >> project >>> >> >> that is competing in scope with tripleo-ci itself. >>> >> > >>> >> > >>> >> > IMHO you are correct here. There has been quite a bit of discussion >>> >> > about >>> >> > removing the parts >>> >> > of oooq that are outside of the original blueprint to replace >>> >> > instack-virt-setup w/ oooq. As usual there are many different >>> opinions >>> >> > here. I think there are a lot of RDO guys that would prefer a lot >>> of >>> >> > the >>> >> > native oooq roles stay where they are, I think that is short >>> sighted >>> >> > imho. >>> >> > I agree that anything outside of the blueprint be removed from oooq. >>> >> > This >>> >> > would hopefully allow the upstream to be more comfortable with oooq >>> and >>> >> > allow us to really start consolidating tools. >>> >> > >>> >> > Luckily for the users that still want to use oooq as a full >>> end-to-end >>> >> > solution the 3rd party roles can be used even after tearing out >>> these >>> >> > native >>> >> > roles. 
>>> >> > >>> >> >> >>> >> >> >>> >> >> I'm all for consolidation of these types of tools *if* there is >>> >> >> interest. >>> >> > >>> >> > >>> >> > Roll call.. is there interest? +1 from me. >>> >> > >>> >> >> >>> >> >> >>> >> >> However, IMO, incubating these things downstream and then trying >>> to get >>> >> >> them >>> >> >> upstream or get upstream to adopt them is not ideal or a good >>> example. >>> >> >> The >>> >> >> same >>> >> >> topic came up and was pushed several times with khaleesi, and it >>> just >>> >> >> never >>> >> >> happened, it was continually DOA upstream. >>> >> > >>> >> > >>> >> > True, however that could be a result of the downstream perceiving >>> >> > barriers ( >>> >> > real or not ) in incubating projects in upstream openstack. >>> >> > >>> >> >> >>> >> >> >>> >> >> I think it would be fairly difficult to get tripleo-ci to wholesale >>> >> >> adopt >>> >> >> tripleo-quickstart at this stage. The separate irc channel from >>> >> >> #tripleo >>> >> >> is not >>> >> >> conducive to consolidation on tooling and direction imo. >>> >> > >>> >> > >>> >> > The irc channel is easily addressed. We do seem to generate an >>> awful >>> >> > amount >>> >> > of chatter though :) >>> >> > >>> >> >> >>> >> >> >>> >> >> The scope of quickstart is actually not fully understood by myself. >>> >> >> I've >>> >> >> also >>> >> >> heard from some in the upstream TripleO community as well who are >>> >> >> confused >>> >> >> by >>> >> >> its direction and are facing similar difficulties using its >>> generated >>> >> >> bash >>> >> >> scripts that they'd be facing if they were just using TripleO >>> >> >> documentation >>> >> >> instead. >>> >> > >>> >> > >>> >> > The point of the generated bash scripts is to create rst >>> documentation >>> >> > and >>> >> > reusable scripts for the end user. Since the documentation and the >>> >> > generated scripts are equivalent I would expect the same errors, >>> >> > problems >>> >> > and issues. I see this as a good thing really. We *want* the CI >>> to hit >>> >> > the >>> >> > same issues as those who are following the doc. >>> >> > >>> >> >> >>> >> >> >>> >> >> I do think that this sort of problem lends itself easily to one off >>> >> >> implementations as is quite evidenced in this thread. >>> Everyone/group >>> >> >> wants >>> >> >> and >>> >> >> needs to automate something in a different way. And imo, none of >>> these >>> >> >> tools >>> >> >> are building end-user or operator facing interfaces, so they're not >>> >> >> fully >>> >> >> focused on building something that "just works for everyone". Those >>> >> >> interfaces >>> >> >> should be developed in TripleO user facing tooling anyway >>> >> >> (tripleoclient/openstackclient/etc). >>> >> >> >>> >> >> So, I actually think it's ok in some degree that things have been >>> >> >> automated >>> >> >> differently in different tools. Anecdotally, I suspect many users >>> of >>> >> >> TripleO in >>> >> >> production have their own automation tools as well. And none of the >>> >> >> implementations mentioned in this thread would likely meet their >>> needs >>> >> >> either. >>> >> > >>> >> > >>> >> > This is true.. without a tool in the upstream that addresses ci, >>> dev, >>> >> > test >>> >> > use cases across the development cycle this will continue to be the >>> >> > case. I >>> >> > suspect even with a perfect tool, it won't ever be perfect for >>> everyone. 
>>> >> > >>> >> >> >>> >> >> >>> >> >> However, if there is a desire to focus resources on consolidated >>> >> >> tooling >>> >> >> and >>> >> >> someone to drive it forward, then I definitely agree that the >>> effort >>> >> >> needs >>> >> >> to >>> >> >> start upstream with a singular plan for tripleo-ci. From what I >>> gather, >>> >> >> that >>> >> >> would be some sort of alignment and reuse of tripleo-quickstart, >>> and >>> >> >> then >>> >> >> we >>> >> >> could build from there. >>> >> > >>> >> > >>> >> > +1 >>> >> > >>> >> >> >>> >> >> >>> >> >> That could start as a discussion and plan within that community >>> with >>> >> >> some >>> >> >> agreed on concensus around that plan. There was an initial thread >>> on >>> >> >> openstack-dev related to this topic but it is stalled a bit. It >>> could >>> >> >> be >>> >> >> continually driven to resolution via specs, the tripleo meeting, >>> email >>> >> >> or >>> >> >> irc >>> >> >> discussion until a plan is formed. >>> >> > >>> >> > >>> >> > +1, I think the first step is to complete the original blueprint >>> and >>> >> > move >>> >> > on from there. >>> >> > I think there has also been interest in having an in person meeting >>> at >>> >> > summit. >>> >> > >>> >> > Thanks! >>> >> > >>> >> >> >>> >> >> >>> >> >> -- >>> >> >> -- James Slagle >>> >> >> -- >>> >> >> >>> >> >> _______________________________________________ >>> >> >> rdo-list mailing list >>> >> >> rdo-list at redhat.com >>> >> >> https://www.redhat.com/mailman/listinfo/rdo-list >>> >> >> >>> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> > >>> >> > >>> >> > >>> >> > _______________________________________________ >>> >> > rdo-list mailing list >>> >> > rdo-list at redhat.com >>> >> > https://www.redhat.com/mailman/listinfo/rdo-list >>> >> > >>> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >>> >> I like how the discussion goes though I have some personal (and >>> >> probably shared) feeling that I would like to share here, more or less >>> >> related. >>> >> >>> >> As a TripleO core developer, I have some frustration to see that a lot >>> >> of people are involved in making TripleO Quickstart better, while we >>> >> have a few people actually working on tripleo-ci tool and try to >>> >> maintain upstream CI stable. >>> >> As a reminder, tripleo-ci tool is currently the ONLY ONE thing that >>> >> actually gates TripleO, even if we don't like the tool. It is right >>> >> now, testing TripleO upstream, everything that is not tested in there >>> >> will probably break one day downstream CIs. >>> >> Yes we have this tooling discussion here and that's awesome, but words >>> >> are words. I would like to see some real engagement to help TripleO CI >>> >> to converge into something better and not only everyone working on >>> >> their side. >>> > >>> > >>> > You have a valid point and reason to be frustrated. >>> > Is the point here that everyone downstream should use tripleo.sh or >>> that >>> > everyone should be focused on ci and testing at the tripleo level? >>> >>> Not everyone should use tripleo.sh. My point is that we should move >>> forward with a common tool, and stop enlarging the gap between tools. >>> We have created (and are still doing) a technical dept where we have >>> multiple tools with a ton of overlap, the more we wait, more difficult >>> it will be to clean this up. 
>>> >> >> +1 >> >>> >>> >> >>> >> >>> >> Some examples: >>> >> - TripleO Quickstart (downstream) CI has coverage for undercloud & >>> >> overcloud upgrades while TripleO CI freshly has a undercloud upgrade >>> >> job and used to have a overcloud (minor) upgrade job (disabled now, >>> >> for some reasons related to our capacity to run jobs and also some >>> >> blockers into code itself). >>> >> - TripleO CI has some TripleO Heat templates that could also be re-use >>> >> by TripleO Quickstart (I'm working on moving them from tripleo-ci to >>> >> THT, WIP here: https://review.openstack.org/350775). >>> >> - TripleO CI deploys Ceph Jewel repository, TripleO Quickstart >>> doesn't. >>> >> - (...) >>> > >>> > >>> > As others have mentioned, there are at least 5-10 tools in development >>> that >>> > are used to deploy tripleo in some CI fashion. Calling out >>> > tripleo-quickstart alone is not quite right imho. There are a number >>> of >>> > tripleo devs that burn cycles on their own ci tools and maybe that is >>> fine >>> > thing to do. >>> >>> I called quickstart because that's the one I see everyday but my >>> frustration is about all our tools in general. >>> I'm actually a OOOQ user and I like this tool, really. >>> But as you can see, I'm also working on tripleo-ci right now because I >>> want TripleO CI better and I haven't seen until now some interest to >>> converge. >>> James started something cool by trying to deploy an undercloud using >>> OOOQ from tripleo-ci. That's a start ! We need things like this, >>> prototyping convergence, and see what we can do. >>> >> >> +1 working in multiple ci systems is painful, I know your pain! >> I've been playing around w/ Jame's patch [1] to see if I can help. >> I like Jame's approach, I think I may be missing some setup steps that >> nodepool provides. >> >> [1] https://review.openstack.org/#/c/348530/ >> >> >>> >>> > TripleO-Quickstart is meant to replace instack-virt-setup which it does >>> > quite well. The only group that was actually running >>> instack-virt-setup >>> > was the RDO CI team, upstream had taken it out of the ci system. I >>> think >>> > it's not unfair to say gaps have been left for other teams to fill. >>> >>> Gotcha. It was just some examples. >>> >>> >> >>> >> >>> >> We have been having this discussion for a while now but we're still >>> >> not making much progress here, I feel like we're in statu quo. >>> >> James mentioned a blueprint, I like it. We need to engage some >>> >> upstream discussion about this major CI refactor, like we need with >>> >> specs and then we'll decide if whether or not we need to change the >>> >> tool, and how. >>> > >>> > >>> > Well, this would take some leadership imho. We need some people that >>> are >>> > familiar with the upstream, midstream and downstream requirements of >>> CI. >>> > This was addressed at the production chain meetings initially but then >>> > pretty much ignored. The leaders responsible at the various stages >>> of a >>> > build (upstream -> downstream ) failed to take this issue on. Here we >>> are >>> > today. >>> > >>> > Would it be acceptable by anyone.. IF >>> > >>> > tripleo-quickstart replaced instack-virt-setup [1] and walked through >>> the >>> > undercloud install, then handed off to tripleo.sh to deploy, upgrade, >>> > update, scale, validate etc??? >>> >>> That's something we can try. >>> >>> > That these two tools *would* in fact be the the official CI tools of >>> tripleo >>> > at the upstream, RDO, and at least parts of the downstream? 
>>> >>> My opinion on this is that upstream and downstream CI should only differ >>> on: >>> * the packages (OSP vs RDO) >>> * the scenarios (downstream could have customer-specific things) >>> And that's it. Tools should remain the same IMHO. >>> >>> > Would that help to ease the current frustration around CI? Emilien >>> what do >>> > you think? >>> >>> I spent the last months working on composable roles and I have now >>> more time to work on CI; $topic is definitely something where I would >>> like to help. >>> >> >> woot, I'm excited that you are freeing up and can be more involved! >> Thanks as usual Emilien! >> >> >>> -- >>> Emilien Macchi >>> >> >> > I would add one thing.. If there are folks out there that rely on CI or CI > tools and need to be part of this process than please speak up! > If you have tools, ideas, requirements now's a pretty good time to be > verbose about it. Some of you already have, some have not. > After the quickstart fiasco and the uncertainty around infrared, what I really want to see is a simple, adjustable/configurable and useful replacement for instack-virt (which is the only working dpeloyment tool I currently have). I have several issues with instack-virt, but none of them are critical enough to care much, since my setups tend to be very short lived, and I prefer a faster deployment to features atm. Any script that will do what instack-virt does, but also will allow me to adjust the parameters of the deployed VMs (disk/s; cpu; ram; NICs), and be able to predict the VMs' IP addresses without a cumbersome framework around it will be perfect. > > Thanks > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Fri Aug 5 13:31:25 2016 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Fri, 5 Aug 2016 15:31:25 +0200 Subject: [rdo-list] Multiple tools for deploying and testing TripleO In-Reply-To: References: <045FCFDA-D59B-4D0D-B62A-0538980A051A@ltgfederal.com> <20160803165119.GC25838@localhost.localdomain> Message-ID: On 4.8.2016 20:55, Wesley Hayutin wrote: > I would add one thing.. If there are folks out there that rely on CI or CI > tools and need to be part of this process than please speak up! > If you have tools, ideas, requirements now's a pretty good time to be > verbose about it. Some of you already have, some have not. > Thanks for the poke Wes, i'll post my 2 cents as well :) I have a developer point of view rather than CI point of view but hopefully that's interesting as well. My subjective view on automation tool requirements: --------------------------------------------------- I've been using inlunch [1]. It's halfway between a bash script and a "usual" Ansible solution. Much of the workflow is exported into an answer file which contains bash snippets. From previous discussions i know many people don't like such approach and would prefer to properly Ansiblize everything, but for my productivity it is vital that i can just add a (sometimes temporary) Bash line here and there and go on. Ansible is there just to ensure (limited) idempotency and restartability of the deployment from somewhere in the middle. Some other features i consider vital for a TripleO automation tool as a developer: * Allow to deploy trunk and stables as CI deploys them. (E.g. 
OOOQ is really impressive but diverges from the way things are done in CI, which, when testing a patch, can result in local failures when CI would pass and vice versa. We've hit one such issue wrt undercloud image building.) * Allow to deploy downstream following product docs as close as possible. (I know this may not be interesting from a community point of view, but it's a must have for me personally, as downstream involvement is a part of my job.) A couple of general points: --------------------------- * The replacement for instack-virt-setup should be a tool completely separate from install workflow automation tool IMO. If we manage to extract it into an independent project with a defined interface and feature set, we should be able to adopt it in the CI much easier than full OOOQ. * From the past discussions i've been involved in on this topic (e.g. inlunch vs. khaleesi vs. OOOQ), the expectations/requirements vary so much that coming to a single unified automation tool is very difficult. * Whatever automation tool we build, we need to keep in mind that the documented manual installation method (which ideally would be the base of all automations and CIs anyway) has to be kept as simple as possible too. And if the manual installation workflow is sane, maintaining an automation tool that fits the needs of a few people is not very time consuming (see how few commits per month inlunch needs [2]). Perhaps that's why we have so many of these tools :) The benefits of creating something that fits *your* use case sometimes outweigh the time spent building it and drawbacks of having to adopt and bend some existing solution. * That said, there's still great value in having one of the tools be more official than others, maintained and ready especially for folks who are starting with TripleO (again i see the developer use case here more clearly than the CI use case). From what i've seen OOOQ seems to be very good in this aspect. Cheers Jirka [1] https://github.com/jistr/inlunch [2] https://github.com/jistr/inlunch/commits/master From Milind.Gunjan at sprint.com Fri Aug 5 14:27:39 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Fri, 5 Aug 2016 14:27:39 +0000 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> Message-ID: <6e1c2211c3ea4970b784308f5ce6d154@PREWE13M11.ad.sprint.com> Hi Marius, This is what I see when I ran the puppet script in debug mode: Debug: Executing '/bin/systemctl start neutron-server' Error: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. Wrapped exception: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. Error: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: change from stopped to running failed: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. 
Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Dependency Service[neutron-server] has failures: true Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Dependency Service[neutron-server] has failures: true Warning: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Dependency Service[neutron-server] has failures: true Warning: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Dependency Service[neutron-server] has failures: true Warning: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Skipping because of failed dependencies Aslo, the script is stuck at present at this step : Debug: Executing '/bin/systemctl start openstack-nova-scheduler' Best Regards, Milind -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Thursday, August 04, 2016 4:26 AM To: Gunjan, Milind [CTO] Cc: rdo-list at redhat.com Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing OK, I don't actually see an error in the logs, the last thing that shows up is: on controller-0: [DEBUG] Running /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/c989f58d-cd38-4813-a174-7e42c82bcb6f.json on compute-0: [DEBUG] Running /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/c5265c58-96ae-49d5-9c1e-a38041e2b130.json I suspect these steps are timing out so let's try running them manually to figure out what's going on: Running the commands manually will output a puppet apply command, showing one from my environment as an example: # /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.json [2016-08-04 08:12:21,609] (heat-config) [DEBUG] Running FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" FACTER_fqdn="overcloud-controller-0.localdomain" FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" puppet apply --detailed-exitcodes /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp Next step is to stop it(ctrl+c), copy the puppet apply command, add --debug and run it: # FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" FACTER_fqdn="overcloud-controller-0.localdomain" FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" puppet apply --detailed-exitcodes /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp --debug This should output puppet debug info that might lead us to where it gets stuck. Please paste the output so we can investigate further. Thanks On Thu, Aug 4, 2016 at 3:22 AM, Gunjan, Milind [CTO] wrote: > Thanks a lot Christopher for the suggestions. > > Marius: Thanks a lot for helping me out. I am attaching the requested logs. > > I tried to redeploy overcloud with 3 controller but the issue remains the same. Overcloud stack deployment is failing at Post-deployment configuration steps as before. When I was going to /var/log/messages for different services, it seems there is issue with haproxy service. 
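As an aside, a quick way to confirm whether haproxy itself is unhealthy, assuming the stats socket is at the usual TripleO location /var/lib/haproxy/stats and socat happens to be available on the controller, is:

  sudo systemctl status haproxy
  echo "show stat" | sudo socat stdio unix-connect:/var/lib/haproxy/stats | cut -d, -f1,2,18

The last field printed is the frontend/backend status; anything other than OPEN or UP usually points at the service behind the VIP rather than at haproxy itself.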
Neutron service is failing too and the service endpoints being configured through puppet are not reachable for all failed service. I have attached os-collect-config journals from all four nodes. > > > Please let me know if there is any other logs or any other troubleshooting steps which I can implement. > > Best Regards, > Milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Wednesday, August 03, 2016 4:00 PM > To: Gunjan, Milind [CTO] > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing > > Hi, > > Could you please ssh to the nodes, gather the os-collect-config journals (journalctl -l -u os-collect-config) and attach them here? > > Thank you, > Marius > > On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] wrote: >> Hi All, >> >> >> >> I am currently working on Tripleo Mitaka Openstack deployment on >> baremetal >> servers: >> >> Undercloud ? 1 baremetal server with 2 NIC (1 for provisioning and >> 2nd for external network connectivity) >> >> Controller ? 1 baremetal server ( 6 NICs with each openstack VLANs on >> separate NIC) >> >> Compute ? 1 baremetal server >> >> >> >> I followed Graeme's instructions here : >> https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to >> set up Undercloud . Undercloud deployment was successful and all the >> images required for overcloud deployment was properly built as per the instruction. >> I would like to mention that I used libvirt tools to modify the root >> password on overcloud-full.qcow2 and we also modified the grub file >> to include ?net.ifnames=0 biosdevname=0? to restore old interface naming. >> >> >> >> I was able to successfully introspect 2 serves to be used for >> controller and compute nodes. Also , we added the serial device >> discovered during introspection as root device: >> >> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >> properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}' >> >> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >> properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}' >> >> >> >> Next, we added compute and control tag to respective introspected >> node with local boot option: >> >> >> >> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >> properties/capabilities='profile:control,boot_option:local' >> >> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >> properties/capabilities='profile:compute,boot_option:local' >> >> >> >> We used multiple NIC templates for control and compute node which has >> been attached along with network-environment.yaml file. Default >> network isolation template file has been used. >> >> >> >> >> >> Deployment script looks like this : >> >> #!/bin/bash >> >> DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" >> >> template_base_dir="$DIR" >> >> ntpserver= #Sprint LAB >> >> openstack overcloud deploy --templates \ >> >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/network-isol >> a >> tion.yaml >> \ >> >> -e ${template_base_dir}/environments/network-environment.yaml \ >> >> --control-flavor control --compute-flavor compute \ >> >> --control-scale 1 --compute-scale 1 \ >> >> --ntp-server $ntpserver \ >> >> --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug >> >> >> >> Heat stack deployment goes on more really long time (more than 4 >> hours) and gets stuck at postdeployment configurations. 
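As an aside for anyone hitting the same hang: when the deployment resources sit in IN_PROGRESS at the post-deployment stage, as in the capture below, the useful detail usually lives on the overcloud nodes rather than on the undercloud, roughly along these lines:

  ssh heat-admin@192.168.149.9        # controller ctlplane IP taken from 'nova list'
  sudo journalctl -l -u os-collect-config
  sudo ls -lt /var/lib/heat-config/deployed/

The newest .json under /var/lib/heat-config/deployed/ is the software deployment the node is still applying; re-running the matching /var/lib/heat-config/hooks/puppet hook by hand, as described elsewhere in this thread, narrows it down to the puppet resource that hangs.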
Please find >> below the capture during install : >> >> >> >> >> >> Every 2.0s: ironic node-list && nova list && heat stack-list && heat >> resource-list -n5 overcloud | grep -vi complete Wed Aug 3 17:33:37 >> 2016 >> >> >> >> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >> >> | UUID | Name | Instance UUID >> | Power State | Provisioning State | Maintenance | >> >> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >> >> | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | >> 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | >> False | >> >> | afcfbee3-3108-48da-a6da-aba8f422642c | None | >> c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | >> False | >> >> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >> >> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >> >> | ID | Name | Status | >> Task State | Power State | Networks | >> >> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >> >> | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | >> | ACTIVE | >> - | Running | ctlplane=192.168.149.9 | >> >> | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | >> | ACTIVE | >> - | Running | ctlplane=192.168.149.8 | >> >> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >> >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> >> | id | stack_name | stack_status | >> creation_time | updated_time | >> >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> >> | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED | >> 2016-08-03T08:11:34 | None | >> >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> >> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >> >> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >> >> | resource_name | physical_resource_id >> | resource_type >> >> | resource_status | updated_time | stack_name >> | >> >> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >> >> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >> >> | ComputeNodesPostDeployment | >> 3797aec6-e543-4dda-9cd1-c7261e827a64 | >> OS::TripleO::ComputePostDeployment >> >> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >> | >> >> | ControllerNodesPostDeployment | >> 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | >> OS::TripleO::ControllerPostDeployment >> >> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >> | >> >> | ComputePuppetDeployment | >> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >> OS::Heat::StructuredDeployments >> >> | CREATE_FAILED | 
2016-08-03T08:29:19 | >> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy >> | >> >> | ControllerOvercloudServicesDeployment_Step4 | >> 15509f59-ff28-43af-95dd-6247a6a32c2d | >> OS::Heat::StructuredDeployments >> >> | CREATE_FAILED | 2016-08-03T08:29:20 | >> overcloud-ControllerNodesPostDeployment-35y7uafngfwj >> | >> >> | 0 | >> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >> OS::Heat::StructuredDeployment >> >> | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | >> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeploy >> m >> ent-cpahcct3tfw3 >> | >> >> | 0 | >> 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >> OS::Heat::StructuredDeployment >> >> >> >> >> >> [stack at mitaka-uc ~]$ openstack software deployment show >> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >> >> +---------------+--------------------------------------+ >> >> | Field | Value | >> >> +---------------+--------------------------------------+ >> >> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >> >> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >> >> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >> >> | creation_time | 2016-08-03T08:32:10 | >> >> | updated_time | | >> >> | status | IN_PROGRESS | >> >> | status_reason | Deploy data available | >> >> | input_values | {} | >> >> | action | CREATE | >> >> +---------------+--------------------------------------+ >> >> >> >> [stack at mitaka-uc ~]$ openstack software deployment show --long >> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >> >> +---------------+--------------------------------------+ >> >> | Field | Value | >> >> +---------------+--------------------------------------+ >> >> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >> >> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >> >> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >> >> | creation_time | 2016-08-03T08:32:10 | >> >> | updated_time | | >> >> | status | IN_PROGRESS | >> >> | status_reason | Deploy data available | >> >> | input_values | {} | >> >> | action | CREATE | >> >> | output_values | None | >> >> +---------------+--------------------------------------+ >> >> >> >> [stack at mitaka-uc ~]$ openstack stack resource list >> 3797aec6-e543-4dda-9cd1-c7261e827a64 >> >> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >> >> | resource_name | physical_resource_id | >> resource_type | resource_status | >> updated_time | >> >> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >> >> | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | >> file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | >> 2016-08-03T08:29:19 | >> >> | | | >> templates/puppet/deploy-artifacts.yaml | | >> | >> >> | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | >> OS::Heat::SoftwareConfig | CREATE_COMPLETE | >> 2016-08-03T08:29:19 | >> >> | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >> OS::Heat::StructuredDeployments | CREATE_FAILED | >> 2016-08-03T08:29:19 | >> >> | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | >> OS::Heat::StructuredDeployments | CREATE_COMPLETE | >> 2016-08-03T08:29:19 | >> >> | ExtraConfig | | >> OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | >> 2016-08-03T08:29:19 | >> >> 
+-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >> >> >> >> [stack at mitaka-uc ~]$ openstack stack resource list >> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f >> >> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >> >> | resource_name | physical_resource_id | resource_type >> | resource_status | updated_time | >> >> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >> >> | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >> OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | >> 2016-08-03T08:30:04 | >> >> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >> >> [stack at mitaka-uc ~]$ openstack software deployment show >> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e >> >> +---------------+--------------------------------------+ >> >> | Field | Value | >> >> +---------------+--------------------------------------+ >> >> | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >> >> | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | >> >> | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | >> >> | creation_time | 2016-08-03T08:30:05 | >> >> | updated_time | | >> >> | status | IN_PROGRESS | >> >> | status_reason | Deploy data available | >> >> | input_values | {} | >> >> | action | CREATE | >> >> +---------------+--------------------------------------+ >> >> >> >> Keystonerc file was not generated. Please find below openstack status >> command result on controller and compute. >> >> >> >> [heat-admin at overcloud-controller-0 ~]$ openstack-status >> >> == Nova services == >> >> openstack-nova-api: active >> >> openstack-nova-compute: inactive (disabled on boot) >> >> openstack-nova-network: inactive (disabled on boot) >> >> openstack-nova-scheduler: activating(disabled on boot) >> >> openstack-nova-cert: active >> >> openstack-nova-conductor: active >> >> openstack-nova-console: inactive (disabled on boot) >> >> openstack-nova-consoleauth: active >> >> openstack-nova-xvpvncproxy: inactive (disabled on boot) >> >> == Glance services == >> >> openstack-glance-api: active >> >> openstack-glance-registry: active >> >> == Keystone service == >> >> openstack-keystone: inactive (disabled on boot) >> >> == Horizon service == >> >> openstack-dashboard: uncontactable >> >> == neutron services == >> >> neutron-server: failed (disabled on boot) >> >> neutron-dhcp-agent: inactive (disabled on boot) >> >> neutron-l3-agent: inactive (disabled on boot) >> >> neutron-metadata-agent: inactive (disabled on boot) >> >> neutron-lbaas-agent: inactive (disabled on boot) >> >> neutron-openvswitch-agent: inactive (disabled on boot) >> >> neutron-metering-agent: inactive (disabled on boot) >> >> == Swift services == >> >> openstack-swift-proxy: active >> >> openstack-swift-account: active >> >> openstack-swift-container: active >> >> openstack-swift-object: active >> >> == Cinder services == >> >> openstack-cinder-api: active >> >> openstack-cinder-scheduler: active >> >> openstack-cinder-volume: active >> >> openstack-cinder-backup: inactive (disabled on boot) >> >> == Ceilometer services == >> >> openstack-ceilometer-api: active >> >> openstack-ceilometer-central: active >> >> openstack-ceilometer-compute: inactive (disabled on boot) >> >> openstack-ceilometer-collector: active >> >> 
openstack-ceilometer-notification: active >> >> == Heat services == >> >> openstack-heat-api: inactive (disabled on boot) >> >> openstack-heat-api-cfn: active >> >> openstack-heat-api-cloudwatch: inactive (disabled on boot) >> >> openstack-heat-engine: inactive (disabled on boot) >> >> == Sahara services == >> >> openstack-sahara-api: active >> >> openstack-sahara-engine: active >> >> == Support services == >> >> libvirtd: active >> >> openvswitch: active >> >> dbus: active >> >> target: active >> >> rabbitmq-server: active >> >> memcached: active >> >> >> >> >> >> [heat-admin at overcloud-novacompute-0 ~]$ openstack-status >> >> == Nova services == >> >> openstack-nova-api: inactive (disabled on boot) >> >> openstack-nova-compute: activating(disabled on boot) >> >> openstack-nova-network: inactive (disabled on boot) >> >> openstack-nova-scheduler: inactive (disabled on boot) >> >> openstack-nova-cert: inactive (disabled on boot) >> >> openstack-nova-conductor: inactive (disabled on boot) >> >> openstack-nova-console: inactive (disabled on boot) >> >> openstack-nova-consoleauth: inactive (disabled on boot) >> >> openstack-nova-xvpvncproxy: inactive (disabled on boot) >> >> == Glance services == >> >> openstack-glance-api: inactive (disabled on boot) >> >> openstack-glance-registry: inactive (disabled on boot) >> >> == Keystone service == >> >> openstack-keystone: inactive (disabled on boot) >> >> == Horizon service == >> >> openstack-dashboard: uncontactable >> >> == neutron services == >> >> neutron-server: inactive (disabled on boot) >> >> neutron-dhcp-agent: inactive (disabled on boot) >> >> neutron-l3-agent: inactive (disabled on boot) >> >> neutron-metadata-agent: inactive (disabled on boot) >> >> neutron-lbaas-agent: inactive (disabled on boot) >> >> neutron-openvswitch-agent: active >> >> neutron-metering-agent: inactive (disabled on boot) >> >> == Swift services == >> >> openstack-swift-proxy: inactive (disabled on boot) >> >> openstack-swift-account: inactive (disabled on boot) >> >> openstack-swift-container: inactive (disabled on boot) >> >> openstack-swift-object: inactive (disabled on boot) >> >> == Cinder services == >> >> openstack-cinder-api: inactive (disabled on boot) >> >> openstack-cinder-scheduler: inactive (disabled on boot) >> >> openstack-cinder-volume: inactive (disabled on boot) >> >> openstack-cinder-backup: inactive (disabled on boot) >> >> == Ceilometer services == >> >> openstack-ceilometer-api: inactive (disabled on boot) >> >> openstack-ceilometer-central: inactive (disabled on boot) >> >> openstack-ceilometer-compute: inactive (disabled on boot) >> >> openstack-ceilometer-collector: inactive (disabled on boot) >> >> openstack-ceilometer-notification: inactive (disabled on boot) >> >> == Heat services == >> >> openstack-heat-api: inactive (disabled on boot) >> >> openstack-heat-api-cfn: inactive (disabled on boot) >> >> openstack-heat-api-cloudwatch: inactive (disabled on boot) >> >> openstack-heat-engine: inactive (disabled on boot) >> >> == Sahara services == >> >> openstack-sahara-all: inactive (disabled on boot) >> >> == Support services == >> >> libvirtd: active >> >> openvswitch: active >> >> dbus: active >> >> rabbitmq-server: inactive (disabled on boot) >> >> memcached: inactive (disabled on boot) >> >> >> >> >> >> >> >> Please let me know if there is any other logs which I can provide >> that can help in troubleshooting. >> >> >> >> >> >> Thanks a lot in Advance for your help and support. 
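For what it's worth, the logs that most often explain the status output above (neutron-server failed after a start timeout, nova-scheduler stuck in activating) are the unit journals and service logs on the controller, assuming the default RDO locations:

  sudo journalctl -l -u neutron-server -u openstack-nova-scheduler
  sudo tail -n 200 /var/log/neutron/server.log /var/log/nova/nova-scheduler.log
  sudo systemctl status neutron-server openstack-nova-scheduler haproxy

Services hanging in activating at this point commonly cannot reach RabbitMQ or the database through the haproxy VIP, so connection errors to those endpoints are the first thing to look for in the output above.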
>> >> >> >> Best Regards, >> >> Milind Gunjan >> >> >> >> >> ________________________________ >> >> This e-mail may contain Sprint proprietary information intended for >> the sole use of the recipient(s). Any use by others is prohibited. If >> you are not the intended recipient, please contact the sender and >> delete all copies of the message. >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > ________________________________ > > This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. From imaslov at dispersivegroup.com Fri Aug 5 15:53:38 2016 From: imaslov at dispersivegroup.com (Ilja Maslov) Date: Fri, 5 Aug 2016 15:53:38 +0000 Subject: [rdo-list] Overcloud pacemaker services restart behavior causes downtime In-Reply-To: References: <84ca6f1b-941a-c891-06dd-0c7cbee6b1ca@redhat.com> <4195e85cb5f94d76abacf9289cdcda11@svr2-disp-exch.dispersive.local> <1aed738bd93843c8b68ba17adc0b3685@svr2-disp-exch.dispersive.local> Message-ID: <869384c07968446f873795017fe7003d@svr2-disp-exch.dispersive.local> Hi, I can?t seem to be able to reproduce your scenario from the bug #1364129. Stopped cluster on a controller then rebooted while tailing neutron server log and nova conductor and API logs on the other two nodes. Did this with all 3 controllers and was clicking in Horizon while the server was rebooting. Logs show connectivity lost and 4 dead neutron agents, but things re-connect upon reboot as expected. Horizon works OK, when I rebooted second and third nodes, got kicked out of horizon and had to log in again. Saw a warning in neutron server.log about keystonemiddleware.auth_token Usin the in-process toke cache is deprecated as of 4.2.0 release ? (this error only appeared during two reboot tests, when I was kicked out of horizon) Now, pcs status had recovered completely after all reboots, but ceph status shows HEALTH_WARN clock skew detected on This got me looking into the clock synchronization. TripleO installs and configured ntpd, but my tripleo-built images also have chronyd installed and enabled. The result is that ntpd.service configured with my NTP server is inactive (dead) and chronyd.service with default centos configuration is running. I use the same package versions you?ve reported in the bug, could it be that nova/neutron/glance restarts you experience are related to cluster time sync problems? Let me know if you?d like to compare our environments or if I can help in any other way. Cheers, Ilja From: Pedro Sousa [mailto:pgsousa at gmail.com] Sent: Thursday, August 04, 2016 12:02 PM To: Ilja Maslov Cc: Raoul Scarazzini ; rdo-list Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior causes downtime Hi, I've deleted the nova and neutron services but the issue persists, so I guess it's not related. Filing the sosreport. Thanks On Thu, Aug 4, 2016 at 4:32 PM, Ilja Maslov > wrote: Not on this fresh install, but what I saw few weeks back was that when controller nodes restart, I see services created with FQDN names that were up and I was able to safely clean the original services with short host names. But I haven?t re-tested controller restarts afterwards. With my fresh install, rabbitmq is not coming up upon reboot (?unknown error? 
(1)), so I need to fix this first before I?m able to proceed with testing. I?ll let you know how it goes. Ilja From: Pedro Sousa [mailto:pgsousa at gmail.com] Sent: Thursday, August 04, 2016 11:23 AM To: Ilja Maslov > Cc: Raoul Scarazzini >; rdo-list > Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior causes downtime Hi Ilja, I noticed that too. Did you try to delete the services that are marked down and retest? Thanks On Thu, Aug 4, 2016 at 4:12 PM, Ilja Maslov > wrote: Hi, I've noticed similar behavior on Mitaka installed from trunk/mitaka/passed-ci. Appreciate if you could put me in CC. Additional detail is that during initial deployment, nova services, neutron agents and heat engines are registered with the short hostnames and upon controller node restart, these will all show with state=down. Probably because hosts files are re-written after the services had been started with FQDN as a first entry. I do not know to what extent pacemaker resources are monitored, but it could be related to the problem you are reporting. Cheers, Ilja -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Raoul Scarazzini Sent: Thursday, August 04, 2016 9:31 AM To: Pedro Sousa > Cc: rdo-list > Subject: Re: [rdo-list] Overcloud pacemaker services restart behavior causes downtime That will be great, thank you, put me in CC so I can follow this. Thanks, -- Raoul Scarazzini rasca at redhat.com On 04/08/2016 15:29, Pedro Sousa wrote: > Hi Raoul, > > this only happens when the node comes back online after booting. When I > stop the node with "pcs cluster stop", everything works fine, even if > VIP is active on that node. > > Anyway I will file a bugzilla. > > Thanks > > > > > On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini > >> wrote: > > Ok, so we are on mitaka. Here we have VIPs that are a (Optional) > dependency for haproxy, which is a (Mandatory) dependency for > openstack-core from which all the others (nova, neutron, cinder and so > on) depends. > This means that if you are rebooting a controller in which a VIP is > active you will NOT have a restart of openstack-core since haproxy will > not be restarted, because of the OPTIONAL constraint. > So the behavior you're describing is quite strange. > Maybe other components are in the game here. Can you open a bugzilla > with the exact steps you're using to reproduce the problem and share the > sosreports of your systems? > > Thanks, > > -- > Raoul Scarazzini > rasca at redhat.com > > > On 04/08/2016 12:34, Pedro Sousa wrote: > > Hi, > > > > I use mitaka from centos sig repos: > > > > Centos 7.2 > > centos-release-openstack-mitaka-1-3.el7.noarch > > pacemaker-cli-1.1.13-10.el7_2.2.x86_64 > > pacemaker-1.1.13-10.el7_2.2.x86_64 > > pacemaker-remote-1.1.13-10.el7_2.2.x86_64 > > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64 > > pacemaker-libs-1.1.13-10.el7_2.2.x86_64 > > corosynclib-2.3.4-7.el7_2.3.x86_64 > > corosync-2.3.4-7.el7_2.3.x86_64 > > resource-agents-3.9.5-54.el7_2.10.x86_64 > > > > Let me know if you need more info. > > > > Thanks > > > > > > > > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini > > > >>> wrote: > > > > Hi, > > can you please give us more information about the environment you are > > using? Release, package versions and so on. 
> > > > -- > > Raoul Scarazzini > > rasca at redhat.com > > >> > > > > On 04/08/2016 11:34, Pedro Sousa wrote: > > > Hi all, > > > > > > I have an overcloud with 3 controller nodes, everything is > working fine, > > > the problem is when I reboot one of the controllers. When > the node comes > > > online, all the services (nova-api, neutron-server) on the > other nodes > > > are also restarted, causing a couple of minutes of downtime > until > > > everything is recovered. > > > > > > In the example below I restarted controller2 and I see these > messages on > > > controller0. My question is if this is the expected > behavior, because in > > > my opinion it shouldn't happen. > > > > > > *Authorization Failed: Service Unavailable (HTTP 503)* > > > *== Glance images ==* > > > *Service Unavailable (HTTP 503)* > > > *== Nova managed services ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova networks ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instance flavors ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *== Nova instances ==* > > > *No handlers could be found for logger > > "keystoneauth.identity.generic.base"* > > > *ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)* > > > *[root at overcloud-controller-0 ~]# openstack-status * > > > *Broadcast message from > > > systemd-journald at overcloud-controller-0.localdomain (Thu > 2016-08-04 > > > 09:22:31 UTC):* > > > * > > > * > > > *haproxy[2816]: proxy neutron has no server available!* > > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > >> > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > >> > > > > > > > > > _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Fri Aug 5 15:58:58 2016 From: marius at remote-lab.net (Marius Cornea) Date: Fri, 5 Aug 2016 17:58:58 +0200 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: <6e1c2211c3ea4970b784308f5ce6d154@PREWE13M11.ad.sprint.com> References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> <6e1c2211c3ea4970b784308f5ce6d154@PREWE13M11.ad.sprint.com> Message-ID: Hi, Can you check /var/log/nova/nova-scheduler.log to see if it's got any indication on why it's filing to start? Thanks On Fri, Aug 5, 2016 at 4:27 PM, Gunjan, Milind [CTO] wrote: > Hi Marius, > > This is what I see when I ran the puppet script in debug mode: > > Debug: Executing '/bin/systemctl start neutron-server' > Error: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. 
> Wrapped exception: > Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. > Error: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: change from stopped to running failed: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. > Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Dependency Service[neutron-server] has failures: true > Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]: Skipping because of failed dependencies > Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Dependency Service[neutron-server] has failures: true > Warning: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Skipping because of failed dependencies > Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Dependency Service[neutron-server] has failures: true > Warning: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Skipping because of failed dependencies > Notice: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Dependency Service[neutron-server] has failures: true > Warning: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Skipping because of failed dependencies > > Aslo, the script is stuck at present at this step : > > Debug: Executing '/bin/systemctl start openstack-nova-scheduler' > > > Best Regards, > Milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Thursday, August 04, 2016 4:26 AM > To: Gunjan, Milind [CTO] > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing > > OK, I don't actually see an error in the logs, the last thing that shows up is: > > on controller-0: > [DEBUG] Running /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/c989f58d-cd38-4813-a174-7e42c82bcb6f.json > > on compute-0: > [DEBUG] Running /var/lib/heat-config/hooks/puppet < /var/lib/heat-config/deployed/c5265c58-96ae-49d5-9c1e-a38041e2b130.json > > I suspect these steps are timing out so let's try running them manually to figure out what's going on: > > Running the commands manually will output a puppet apply command, showing one from my environment as an example: > > # /var/lib/heat-config/hooks/puppet < > /var/lib/heat-config/deployed/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.json > > [2016-08-04 08:12:21,609] (heat-config) [DEBUG] Running FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" > FACTER_fqdn="overcloud-controller-0.localdomain" > FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" > puppet apply --detailed-exitcodes > /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp > > Next step is to stop it(ctrl+c), copy the puppet apply command, add --debug and run it: > > # FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" > FACTER_fqdn="overcloud-controller-0.localdomain" > FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" > puppet apply --detailed-exitcodes > /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp > 
--debug > > This should output puppet debug info that might lead us to where it gets stuck. Please paste the output so we can investigate further. > > Thanks > > On Thu, Aug 4, 2016 at 3:22 AM, Gunjan, Milind [CTO] wrote: >> Thanks a lot Christopher for the suggestions. >> >> Marius: Thanks a lot for helping me out. I am attaching the requested logs. >> >> I tried to redeploy overcloud with 3 controller but the issue remains the same. Overcloud stack deployment is failing at Post-deployment configuration steps as before. When I was going to /var/log/messages for different services, it seems there is issue with haproxy service. Neutron service is failing too and the service endpoints being configured through puppet are not reachable for all failed service. I have attached os-collect-config journals from all four nodes. >> >> >> Please let me know if there is any other logs or any other troubleshooting steps which I can implement. >> >> Best Regards, >> Milind >> >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Wednesday, August 03, 2016 4:00 PM >> To: Gunjan, Milind [CTO] >> Cc: rdo-list at redhat.com >> Subject: Re: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing >> >> Hi, >> >> Could you please ssh to the nodes, gather the os-collect-config journals (journalctl -l -u os-collect-config) and attach them here? >> >> Thank you, >> Marius >> >> On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] wrote: >>> Hi All, >>> >>> >>> >>> I am currently working on Tripleo Mitaka Openstack deployment on >>> baremetal >>> servers: >>> >>> Undercloud ? 1 baremetal server with 2 NIC (1 for provisioning and >>> 2nd for external network connectivity) >>> >>> Controller ? 1 baremetal server ( 6 NICs with each openstack VLANs on >>> separate NIC) >>> >>> Compute ? 1 baremetal server >>> >>> >>> >>> I followed Graeme's instructions here : >>> https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to >>> set up Undercloud . Undercloud deployment was successful and all the >>> images required for overcloud deployment was properly built as per the instruction. >>> I would like to mention that I used libvirt tools to modify the root >>> password on overcloud-full.qcow2 and we also modified the grub file >>> to include ?net.ifnames=0 biosdevname=0? to restore old interface naming. >>> >>> >>> >>> I was able to successfully introspect 2 serves to be used for >>> controller and compute nodes. Also , we added the serial device >>> discovered during introspection as root device: >>> >>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >>> properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}' >>> >>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >>> properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}' >>> >>> >>> >>> Next, we added compute and control tag to respective introspected >>> node with local boot option: >>> >>> >>> >>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >>> properties/capabilities='profile:control,boot_option:local' >>> >>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >>> properties/capabilities='profile:compute,boot_option:local' >>> >>> >>> >>> We used multiple NIC templates for control and compute node which has >>> been attached along with network-environment.yaml file. Default >>> network isolation template file has been used. 
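For readers who have not used network isolation before: the network-environment.yaml mentioned above is normally just a resource_registry override that points each role at its NIC config template, plus the network parameter values. The paths and values below are illustrative only, not taken from this deployment:

  resource_registry:
    OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
    OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  parameter_defaults:
    InternalApiNetCidr: 172.17.0.0/24
    InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]

A mismatch between those NIC templates and the real interface layout is a frequent reason for overcloud nodes that boot fine but never finish post-deployment configuration, since the API and VIP networks end up unreachable from the services.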
>>> >>> >>> >>> >>> >>> Deployment script looks like this : >>> >>> #!/bin/bash >>> >>> DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" >>> >>> template_base_dir="$DIR" >>> >>> ntpserver= #Sprint LAB >>> >>> openstack overcloud deploy --templates \ >>> >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/network-isol >>> a >>> tion.yaml >>> \ >>> >>> -e ${template_base_dir}/environments/network-environment.yaml \ >>> >>> --control-flavor control --compute-flavor compute \ >>> >>> --control-scale 1 --compute-scale 1 \ >>> >>> --ntp-server $ntpserver \ >>> >>> --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug >>> >>> >>> >>> Heat stack deployment goes on more really long time (more than 4 >>> hours) and gets stuck at postdeployment configurations. Please find >>> below the capture during install : >>> >>> >>> >>> >>> >>> Every 2.0s: ironic node-list && nova list && heat stack-list && heat >>> resource-list -n5 overcloud | grep -vi complete Wed Aug 3 17:33:37 >>> 2016 >>> >>> >>> >>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>> >>> | UUID | Name | Instance UUID >>> | Power State | Provisioning State | Maintenance | >>> >>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>> >>> | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | >>> 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | >>> False | >>> >>> | afcfbee3-3108-48da-a6da-aba8f422642c | None | >>> c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | >>> False | >>> >>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>> >>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>> >>> | ID | Name | Status | >>> Task State | Power State | Networks | >>> >>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>> >>> | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | >>> | ACTIVE | >>> - | Running | ctlplane=192.168.149.9 | >>> >>> | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | >>> | ACTIVE | >>> - | Running | ctlplane=192.168.149.8 | >>> >>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>> >>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>> >>> | id | stack_name | stack_status | >>> creation_time | updated_time | >>> >>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>> >>> | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED | >>> 2016-08-03T08:11:34 | None | >>> >>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>> >>> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >>> >>> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >>> >>> | resource_name | physical_resource_id >>> | resource_type >>> >>> | resource_status | updated_time | 
stack_name >>> | >>> >>> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >>> >>> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >>> >>> | ComputeNodesPostDeployment | >>> 3797aec6-e543-4dda-9cd1-c7261e827a64 | >>> OS::TripleO::ComputePostDeployment >>> >>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >>> | >>> >>> | ControllerNodesPostDeployment | >>> 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | >>> OS::TripleO::ControllerPostDeployment >>> >>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >>> | >>> >>> | ComputePuppetDeployment | >>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >>> OS::Heat::StructuredDeployments >>> >>> | CREATE_FAILED | 2016-08-03T08:29:19 | >>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy >>> | >>> >>> | ControllerOvercloudServicesDeployment_Step4 | >>> 15509f59-ff28-43af-95dd-6247a6a32c2d | >>> OS::Heat::StructuredDeployments >>> >>> | CREATE_FAILED | 2016-08-03T08:29:20 | >>> overcloud-ControllerNodesPostDeployment-35y7uafngfwj >>> | >>> >>> | 0 | >>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>> OS::Heat::StructuredDeployment >>> >>> | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | >>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeploy >>> m >>> ent-cpahcct3tfw3 >>> | >>> >>> | 0 | >>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>> OS::Heat::StructuredDeployment >>> >>> >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack software deployment show >>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >>> >>> +---------------+--------------------------------------+ >>> >>> | Field | Value | >>> >>> +---------------+--------------------------------------+ >>> >>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>> >>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >>> >>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >>> >>> | creation_time | 2016-08-03T08:32:10 | >>> >>> | updated_time | | >>> >>> | status | IN_PROGRESS | >>> >>> | status_reason | Deploy data available | >>> >>> | input_values | {} | >>> >>> | action | CREATE | >>> >>> +---------------+--------------------------------------+ >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack software deployment show --long >>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >>> >>> +---------------+--------------------------------------+ >>> >>> | Field | Value | >>> >>> +---------------+--------------------------------------+ >>> >>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>> >>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >>> >>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >>> >>> | creation_time | 2016-08-03T08:32:10 | >>> >>> | updated_time | | >>> >>> | status | IN_PROGRESS | >>> >>> | status_reason | Deploy data available | >>> >>> | input_values | {} | >>> >>> | action | CREATE | >>> >>> | output_values | None | >>> >>> +---------------+--------------------------------------+ >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack stack resource list >>> 3797aec6-e543-4dda-9cd1-c7261e827a64 >>> >>> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>> >>> | resource_name | physical_resource_id | >>> resource_type | resource_status | >>> updated_time | >>> >>> 
+-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>> >>> | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | >>> file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> | | | >>> templates/puppet/deploy-artifacts.yaml | | >>> | >>> >>> | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | >>> OS::Heat::SoftwareConfig | CREATE_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >>> OS::Heat::StructuredDeployments | CREATE_FAILED | >>> 2016-08-03T08:29:19 | >>> >>> | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | >>> OS::Heat::StructuredDeployments | CREATE_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> | ExtraConfig | | >>> OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack stack resource list >>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f >>> >>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>> >>> | resource_name | physical_resource_id | resource_type >>> | resource_status | updated_time | >>> >>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>> >>> | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>> OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | >>> 2016-08-03T08:30:04 | >>> >>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>> >>> [stack at mitaka-uc ~]$ openstack software deployment show >>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e >>> >>> +---------------+--------------------------------------+ >>> >>> | Field | Value | >>> >>> +---------------+--------------------------------------+ >>> >>> | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>> >>> | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | >>> >>> | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | >>> >>> | creation_time | 2016-08-03T08:30:05 | >>> >>> | updated_time | | >>> >>> | status | IN_PROGRESS | >>> >>> | status_reason | Deploy data available | >>> >>> | input_values | {} | >>> >>> | action | CREATE | >>> >>> +---------------+--------------------------------------+ >>> >>> >>> >>> Keystonerc file was not generated. Please find below openstack status >>> command result on controller and compute. 
>>> >>> >>> >>> [heat-admin at overcloud-controller-0 ~]$ openstack-status >>> >>> == Nova services == >>> >>> openstack-nova-api: active >>> >>> openstack-nova-compute: inactive (disabled on boot) >>> >>> openstack-nova-network: inactive (disabled on boot) >>> >>> openstack-nova-scheduler: activating(disabled on boot) >>> >>> openstack-nova-cert: active >>> >>> openstack-nova-conductor: active >>> >>> openstack-nova-console: inactive (disabled on boot) >>> >>> openstack-nova-consoleauth: active >>> >>> openstack-nova-xvpvncproxy: inactive (disabled on boot) >>> >>> == Glance services == >>> >>> openstack-glance-api: active >>> >>> openstack-glance-registry: active >>> >>> == Keystone service == >>> >>> openstack-keystone: inactive (disabled on boot) >>> >>> == Horizon service == >>> >>> openstack-dashboard: uncontactable >>> >>> == neutron services == >>> >>> neutron-server: failed (disabled on boot) >>> >>> neutron-dhcp-agent: inactive (disabled on boot) >>> >>> neutron-l3-agent: inactive (disabled on boot) >>> >>> neutron-metadata-agent: inactive (disabled on boot) >>> >>> neutron-lbaas-agent: inactive (disabled on boot) >>> >>> neutron-openvswitch-agent: inactive (disabled on boot) >>> >>> neutron-metering-agent: inactive (disabled on boot) >>> >>> == Swift services == >>> >>> openstack-swift-proxy: active >>> >>> openstack-swift-account: active >>> >>> openstack-swift-container: active >>> >>> openstack-swift-object: active >>> >>> == Cinder services == >>> >>> openstack-cinder-api: active >>> >>> openstack-cinder-scheduler: active >>> >>> openstack-cinder-volume: active >>> >>> openstack-cinder-backup: inactive (disabled on boot) >>> >>> == Ceilometer services == >>> >>> openstack-ceilometer-api: active >>> >>> openstack-ceilometer-central: active >>> >>> openstack-ceilometer-compute: inactive (disabled on boot) >>> >>> openstack-ceilometer-collector: active >>> >>> openstack-ceilometer-notification: active >>> >>> == Heat services == >>> >>> openstack-heat-api: inactive (disabled on boot) >>> >>> openstack-heat-api-cfn: active >>> >>> openstack-heat-api-cloudwatch: inactive (disabled on boot) >>> >>> openstack-heat-engine: inactive (disabled on boot) >>> >>> == Sahara services == >>> >>> openstack-sahara-api: active >>> >>> openstack-sahara-engine: active >>> >>> == Support services == >>> >>> libvirtd: active >>> >>> openvswitch: active >>> >>> dbus: active >>> >>> target: active >>> >>> rabbitmq-server: active >>> >>> memcached: active >>> >>> >>> >>> >>> >>> [heat-admin at overcloud-novacompute-0 ~]$ openstack-status >>> >>> == Nova services == >>> >>> openstack-nova-api: inactive (disabled on boot) >>> >>> openstack-nova-compute: activating(disabled on boot) >>> >>> openstack-nova-network: inactive (disabled on boot) >>> >>> openstack-nova-scheduler: inactive (disabled on boot) >>> >>> openstack-nova-cert: inactive (disabled on boot) >>> >>> openstack-nova-conductor: inactive (disabled on boot) >>> >>> openstack-nova-console: inactive (disabled on boot) >>> >>> openstack-nova-consoleauth: inactive (disabled on boot) >>> >>> openstack-nova-xvpvncproxy: inactive (disabled on boot) >>> >>> == Glance services == >>> >>> openstack-glance-api: inactive (disabled on boot) >>> >>> openstack-glance-registry: inactive (disabled on boot) >>> >>> == Keystone service == >>> >>> openstack-keystone: inactive (disabled on boot) >>> >>> == Horizon service == >>> >>> openstack-dashboard: uncontactable >>> >>> == neutron services == >>> >>> neutron-server: inactive (disabled on boot) >>> >>> 
neutron-dhcp-agent: inactive (disabled on boot) >>> >>> neutron-l3-agent: inactive (disabled on boot) >>> >>> neutron-metadata-agent: inactive (disabled on boot) >>> >>> neutron-lbaas-agent: inactive (disabled on boot) >>> >>> neutron-openvswitch-agent: active >>> >>> neutron-metering-agent: inactive (disabled on boot) >>> >>> == Swift services == >>> >>> openstack-swift-proxy: inactive (disabled on boot) >>> >>> openstack-swift-account: inactive (disabled on boot) >>> >>> openstack-swift-container: inactive (disabled on boot) >>> >>> openstack-swift-object: inactive (disabled on boot) >>> >>> == Cinder services == >>> >>> openstack-cinder-api: inactive (disabled on boot) >>> >>> openstack-cinder-scheduler: inactive (disabled on boot) >>> >>> openstack-cinder-volume: inactive (disabled on boot) >>> >>> openstack-cinder-backup: inactive (disabled on boot) >>> >>> == Ceilometer services == >>> >>> openstack-ceilometer-api: inactive (disabled on boot) >>> >>> openstack-ceilometer-central: inactive (disabled on boot) >>> >>> openstack-ceilometer-compute: inactive (disabled on boot) >>> >>> openstack-ceilometer-collector: inactive (disabled on boot) >>> >>> openstack-ceilometer-notification: inactive (disabled on boot) >>> >>> == Heat services == >>> >>> openstack-heat-api: inactive (disabled on boot) >>> >>> openstack-heat-api-cfn: inactive (disabled on boot) >>> >>> openstack-heat-api-cloudwatch: inactive (disabled on boot) >>> >>> openstack-heat-engine: inactive (disabled on boot) >>> >>> == Sahara services == >>> >>> openstack-sahara-all: inactive (disabled on boot) >>> >>> == Support services == >>> >>> libvirtd: active >>> >>> openvswitch: active >>> >>> dbus: active >>> >>> rabbitmq-server: inactive (disabled on boot) >>> >>> memcached: inactive (disabled on boot) >>> >>> >>> >>> >>> >>> >>> >>> Please let me know if there is any other logs which I can provide >>> that can help in troubleshooting. >>> >>> >>> >>> >>> >>> Thanks a lot in Advance for your help and support. >>> >>> >>> >>> Best Regards, >>> >>> Milind Gunjan >>> >>> >>> >>> >>> ________________________________ >>> >>> This e-mail may contain Sprint proprietary information intended for >>> the sole use of the recipient(s). Any use by others is prohibited. If >>> you are not the intended recipient, please contact the sender and >>> delete all copies of the message. >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> ________________________________ >> >> This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. From Milind.Gunjan at sprint.com Fri Aug 5 16:12:27 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Fri, 5 Aug 2016 16:12:27 +0000 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> <6e1c2211c3ea4970b784308f5ce6d154@PREWE13M11.ad.sprint.com> Message-ID: Hi Marius, Please find the output below from nova-scheduler.log : 2016-08-05 16:08:35.642 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7037 attempts left. 
2016-08-05 16:08:47.666 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7038 attempts left. 2016-08-05 16:08:59.690 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7039 attempts left. 2016-08-05 16:09:11.713 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7040 attempts left. 2016-08-05 16:09:23.738 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7041 attempts left. 2016-08-05 16:09:35.762 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7042 attempts left. 2016-08-05 16:09:47.786 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7043 attempts left. 2016-08-05 16:09:59.810 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7044 attempts left. 2016-08-05 16:10:11.833 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7045 attempts left. It seems the SQL connection is failing. I saw the similar messages in neutron logs too. What can be the probable cause of this ? Best Regards, Milind -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Friday, August 05, 2016 11:59 AM To: Gunjan, Milind [CTO] Cc: rdo-list at redhat.com Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing Hi, Can you check /var/log/nova/nova-scheduler.log to see if it's got any indication on why it's filing to start? Thanks On Fri, Aug 5, 2016 at 4:27 PM, Gunjan, Milind [CTO] wrote: > Hi Marius, > > This is what I see when I ran the puppet script in debug mode: > > Debug: Executing '/bin/systemctl start neutron-server' > Error: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. > Wrapped exception: > Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. > Error: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: change from stopped to running failed: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. 
> Notice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-servi > ce]: Dependency Service[neutron-server] has failures: true > Warning: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-servi > ce]: Skipping because of failed dependencies > Notice: > /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: > Dependency Service[neutron-server] has failures: true > Warning: > /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: > Skipping because of failed dependencies > Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: > Dependency Service[neutron-server] has failures: true > Warning: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: > Skipping because of failed dependencies > Notice: > /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: > Dependency Service[neutron-server] has failures: true > Warning: > /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: > Skipping because of failed dependencies > > Aslo, the script is stuck at present at this step : > > Debug: Executing '/bin/systemctl start openstack-nova-scheduler' > > > Best Regards, > Milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Thursday, August 04, 2016 4:26 AM > To: Gunjan, Milind [CTO] > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing > > OK, I don't actually see an error in the logs, the last thing that shows up is: > > on controller-0: > [DEBUG] Running /var/lib/heat-config/hooks/puppet < > /var/lib/heat-config/deployed/c989f58d-cd38-4813-a174-7e42c82bcb6f.jso > n > > on compute-0: > [DEBUG] Running /var/lib/heat-config/hooks/puppet < > /var/lib/heat-config/deployed/c5265c58-96ae-49d5-9c1e-a38041e2b130.jso > n > > I suspect these steps are timing out so let's try running them manually to figure out what's going on: > > Running the commands manually will output a puppet apply command, showing one from my environment as an example: > > # /var/lib/heat-config/hooks/puppet < > /var/lib/heat-config/deployed/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.jso > n > > [2016-08-04 08:12:21,609] (heat-config) [DEBUG] Running FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" > FACTER_fqdn="overcloud-controller-0.localdomain" > FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" > puppet apply --detailed-exitcodes > /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5 > de5bff.pp > > Next step is to stop it(ctrl+c), copy the puppet apply command, add --debug and run it: > > # FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" > FACTER_fqdn="overcloud-controller-0.localdomain" > FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" > puppet apply --detailed-exitcodes > /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5 > de5bff.pp > --debug > > This should output puppet debug info that might lead us to where it gets stuck. Please paste the output so we can investigate further. > > Thanks > > On Thu, Aug 4, 2016 at 3:22 AM, Gunjan, Milind [CTO] wrote: >> Thanks a lot Christopher for the suggestions. >> >> Marius: Thanks a lot for helping me out. I am attaching the requested logs. >> >> I tried to redeploy overcloud with 3 controller but the issue remains the same. Overcloud stack deployment is failing at Post-deployment configuration steps as before. 
When I was going to /var/log/messages for different services, it seems there is issue with haproxy service. Neutron service is failing too and the service endpoints being configured through puppet are not reachable for all failed service. I have attached os-collect-config journals from all four nodes. >> >> >> Please let me know if there is any other logs or any other troubleshooting steps which I can implement. >> >> Best Regards, >> Milind >> >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Wednesday, August 03, 2016 4:00 PM >> To: Gunjan, Milind [CTO] >> Cc: rdo-list at redhat.com >> Subject: Re: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing >> >> Hi, >> >> Could you please ssh to the nodes, gather the os-collect-config journals (journalctl -l -u os-collect-config) and attach them here? >> >> Thank you, >> Marius >> >> On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] wrote: >>> Hi All, >>> >>> >>> >>> I am currently working on Tripleo Mitaka Openstack deployment on >>> baremetal >>> servers: >>> >>> Undercloud ? 1 baremetal server with 2 NIC (1 for provisioning and >>> 2nd for external network connectivity) >>> >>> Controller ? 1 baremetal server ( 6 NICs with each openstack VLANs >>> on separate NIC) >>> >>> Compute ? 1 baremetal server >>> >>> >>> >>> I followed Graeme's instructions here : >>> https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to >>> set up Undercloud . Undercloud deployment was successful and all the >>> images required for overcloud deployment was properly built as per the instruction. >>> I would like to mention that I used libvirt tools to modify the root >>> password on overcloud-full.qcow2 and we also modified the grub file >>> to include ?net.ifnames=0 biosdevname=0? to restore old interface naming. >>> >>> >>> >>> I was able to successfully introspect 2 serves to be used for >>> controller and compute nodes. Also , we added the serial device >>> discovered during introspection as root device: >>> >>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >>> properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}' >>> >>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >>> properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}' >>> >>> >>> >>> Next, we added compute and control tag to respective introspected >>> node with local boot option: >>> >>> >>> >>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >>> properties/capabilities='profile:control,boot_option:local' >>> >>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >>> properties/capabilities='profile:compute,boot_option:local' >>> >>> >>> >>> We used multiple NIC templates for control and compute node which >>> has been attached along with network-environment.yaml file. Default >>> network isolation template file has been used. 
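To gather the os-collect-config journals requested above, something along these lines works from the undercloud (a sketch: the node IPs are taken from the nova list output quoted below, and passwordless sudo for heat-admin is assumed):

  ssh heat-admin@192.168.149.9 'sudo journalctl -l -u os-collect-config' > controller-0-os-collect-config.log
  ssh heat-admin@192.168.149.8 'sudo journalctl -l -u os-collect-config' > compute-0-os-collect-config.log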
>>> >>> >>> >>> >>> >>> Deployment script looks like this : >>> >>> #!/bin/bash >>> >>> DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" >>> >>> template_base_dir="$DIR" >>> >>> ntpserver= #Sprint LAB >>> >>> openstack overcloud deploy --templates \ >>> >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/network-iso >>> l >>> a >>> tion.yaml >>> \ >>> >>> -e ${template_base_dir}/environments/network-environment.yaml \ >>> >>> --control-flavor control --compute-flavor compute \ >>> >>> --control-scale 1 --compute-scale 1 \ >>> >>> --ntp-server $ntpserver \ >>> >>> --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug >>> >>> >>> >>> Heat stack deployment goes on more really long time (more than 4 >>> hours) and gets stuck at postdeployment configurations. Please find >>> below the capture during install : >>> >>> >>> >>> >>> >>> Every 2.0s: ironic node-list && nova list && heat stack-list && heat >>> resource-list -n5 overcloud | grep -vi complete Wed Aug 3 17:33:37 >>> 2016 >>> >>> >>> >>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>> >>> | UUID | Name | Instance UUID >>> | Power State | Provisioning State | Maintenance | >>> >>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>> >>> | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | >>> 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | >>> False | >>> >>> | afcfbee3-3108-48da-a6da-aba8f422642c | None | >>> c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | >>> False | >>> >>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>> >>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>> >>> | ID | Name | Status | >>> Task State | Power State | Networks | >>> >>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>> >>> | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | >>> | ACTIVE | >>> - | Running | ctlplane=192.168.149.9 | >>> >>> | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | >>> | ACTIVE | >>> - | Running | ctlplane=192.168.149.8 | >>> >>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>> >>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>> >>> | id | stack_name | stack_status | >>> creation_time | updated_time | >>> >>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>> >>> | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED >>> | | >>> 2016-08-03T08:11:34 | None | >>> >>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>> >>> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >>> >>> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >>> >>> | resource_name | physical_resource_id >>> | resource_type >>> >>> | resource_status | 
updated_time | stack_name >>> | >>> >>> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >>> >>> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >>> >>> | ComputeNodesPostDeployment | >>> 3797aec6-e543-4dda-9cd1-c7261e827a64 | >>> OS::TripleO::ComputePostDeployment >>> >>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >>> | >>> >>> | ControllerNodesPostDeployment | >>> 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | >>> OS::TripleO::ControllerPostDeployment >>> >>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >>> | >>> >>> | ComputePuppetDeployment | >>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >>> OS::Heat::StructuredDeployments >>> >>> | CREATE_FAILED | 2016-08-03T08:29:19 | >>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy >>> | >>> >>> | ControllerOvercloudServicesDeployment_Step4 | >>> 15509f59-ff28-43af-95dd-6247a6a32c2d | >>> OS::Heat::StructuredDeployments >>> >>> | CREATE_FAILED | 2016-08-03T08:29:20 | >>> overcloud-ControllerNodesPostDeployment-35y7uafngfwj >>> | >>> >>> | 0 | >>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>> OS::Heat::StructuredDeployment >>> >>> | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | >>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeplo >>> y >>> m >>> ent-cpahcct3tfw3 >>> | >>> >>> | 0 | >>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>> OS::Heat::StructuredDeployment >>> >>> >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack software deployment show >>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >>> >>> +---------------+--------------------------------------+ >>> >>> | Field | Value | >>> >>> +---------------+--------------------------------------+ >>> >>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>> >>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >>> >>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >>> >>> | creation_time | 2016-08-03T08:32:10 | >>> >>> | updated_time | | >>> >>> | status | IN_PROGRESS | >>> >>> | status_reason | Deploy data available | >>> >>> | input_values | {} | >>> >>> | action | CREATE | >>> >>> +---------------+--------------------------------------+ >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack software deployment show --long >>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >>> >>> +---------------+--------------------------------------+ >>> >>> | Field | Value | >>> >>> +---------------+--------------------------------------+ >>> >>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>> >>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >>> >>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >>> >>> | creation_time | 2016-08-03T08:32:10 | >>> >>> | updated_time | | >>> >>> | status | IN_PROGRESS | >>> >>> | status_reason | Deploy data available | >>> >>> | input_values | {} | >>> >>> | action | CREATE | >>> >>> | output_values | None | >>> >>> +---------------+--------------------------------------+ >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack stack resource list >>> 3797aec6-e543-4dda-9cd1-c7261e827a64 >>> >>> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>> >>> | resource_name | physical_resource_id | >>> resource_type | resource_status | >>> updated_time | >>> >>> 
+-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>> >>> | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | >>> file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> | | | >>> templates/puppet/deploy-artifacts.yaml | | >>> | >>> >>> | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | >>> OS::Heat::SoftwareConfig | CREATE_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >>> OS::Heat::StructuredDeployments | CREATE_FAILED | >>> 2016-08-03T08:29:19 | >>> >>> | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | >>> OS::Heat::StructuredDeployments | CREATE_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> | ExtraConfig | | >>> OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | >>> 2016-08-03T08:29:19 | >>> >>> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>> >>> >>> >>> [stack at mitaka-uc ~]$ openstack stack resource list >>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f >>> >>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>> >>> | resource_name | physical_resource_id | resource_type >>> | resource_status | updated_time | >>> >>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>> >>> | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>> OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | >>> 2016-08-03T08:30:04 | >>> >>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>> >>> [stack at mitaka-uc ~]$ openstack software deployment show >>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e >>> >>> +---------------+--------------------------------------+ >>> >>> | Field | Value | >>> >>> +---------------+--------------------------------------+ >>> >>> | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>> >>> | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | >>> >>> | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | >>> >>> | creation_time | 2016-08-03T08:30:05 | >>> >>> | updated_time | | >>> >>> | status | IN_PROGRESS | >>> >>> | status_reason | Deploy data available | >>> >>> | input_values | {} | >>> >>> | action | CREATE | >>> >>> +---------------+--------------------------------------+ >>> >>> >>> >>> Keystonerc file was not generated. Please find below openstack >>> status command result on controller and compute. 
>>> >>> >>> >>> [heat-admin at overcloud-controller-0 ~]$ openstack-status >>> >>> == Nova services == >>> >>> openstack-nova-api: active >>> >>> openstack-nova-compute: inactive (disabled on boot) >>> >>> openstack-nova-network: inactive (disabled on boot) >>> >>> openstack-nova-scheduler: activating(disabled on boot) >>> >>> openstack-nova-cert: active >>> >>> openstack-nova-conductor: active >>> >>> openstack-nova-console: inactive (disabled on boot) >>> >>> openstack-nova-consoleauth: active >>> >>> openstack-nova-xvpvncproxy: inactive (disabled on boot) >>> >>> == Glance services == >>> >>> openstack-glance-api: active >>> >>> openstack-glance-registry: active >>> >>> == Keystone service == >>> >>> openstack-keystone: inactive (disabled on boot) >>> >>> == Horizon service == >>> >>> openstack-dashboard: uncontactable >>> >>> == neutron services == >>> >>> neutron-server: failed (disabled on boot) >>> >>> neutron-dhcp-agent: inactive (disabled on boot) >>> >>> neutron-l3-agent: inactive (disabled on boot) >>> >>> neutron-metadata-agent: inactive (disabled on boot) >>> >>> neutron-lbaas-agent: inactive (disabled on boot) >>> >>> neutron-openvswitch-agent: inactive (disabled on boot) >>> >>> neutron-metering-agent: inactive (disabled on boot) >>> >>> == Swift services == >>> >>> openstack-swift-proxy: active >>> >>> openstack-swift-account: active >>> >>> openstack-swift-container: active >>> >>> openstack-swift-object: active >>> >>> == Cinder services == >>> >>> openstack-cinder-api: active >>> >>> openstack-cinder-scheduler: active >>> >>> openstack-cinder-volume: active >>> >>> openstack-cinder-backup: inactive (disabled on boot) >>> >>> == Ceilometer services == >>> >>> openstack-ceilometer-api: active >>> >>> openstack-ceilometer-central: active >>> >>> openstack-ceilometer-compute: inactive (disabled on boot) >>> >>> openstack-ceilometer-collector: active >>> >>> openstack-ceilometer-notification: active >>> >>> == Heat services == >>> >>> openstack-heat-api: inactive (disabled on boot) >>> >>> openstack-heat-api-cfn: active >>> >>> openstack-heat-api-cloudwatch: inactive (disabled on boot) >>> >>> openstack-heat-engine: inactive (disabled on boot) >>> >>> == Sahara services == >>> >>> openstack-sahara-api: active >>> >>> openstack-sahara-engine: active >>> >>> == Support services == >>> >>> libvirtd: active >>> >>> openvswitch: active >>> >>> dbus: active >>> >>> target: active >>> >>> rabbitmq-server: active >>> >>> memcached: active >>> >>> >>> >>> >>> >>> [heat-admin at overcloud-novacompute-0 ~]$ openstack-status >>> >>> == Nova services == >>> >>> openstack-nova-api: inactive (disabled on boot) >>> >>> openstack-nova-compute: activating(disabled on boot) >>> >>> openstack-nova-network: inactive (disabled on boot) >>> >>> openstack-nova-scheduler: inactive (disabled on boot) >>> >>> openstack-nova-cert: inactive (disabled on boot) >>> >>> openstack-nova-conductor: inactive (disabled on boot) >>> >>> openstack-nova-console: inactive (disabled on boot) >>> >>> openstack-nova-consoleauth: inactive (disabled on boot) >>> >>> openstack-nova-xvpvncproxy: inactive (disabled on boot) >>> >>> == Glance services == >>> >>> openstack-glance-api: inactive (disabled on boot) >>> >>> openstack-glance-registry: inactive (disabled on boot) >>> >>> == Keystone service == >>> >>> openstack-keystone: inactive (disabled on boot) >>> >>> == Horizon service == >>> >>> openstack-dashboard: uncontactable >>> >>> == neutron services == >>> >>> neutron-server: inactive (disabled on boot) >>> >>> 
neutron-dhcp-agent: inactive (disabled on boot) >>> >>> neutron-l3-agent: inactive (disabled on boot) >>> >>> neutron-metadata-agent: inactive (disabled on boot) >>> >>> neutron-lbaas-agent: inactive (disabled on boot) >>> >>> neutron-openvswitch-agent: active >>> >>> neutron-metering-agent: inactive (disabled on boot) >>> >>> == Swift services == >>> >>> openstack-swift-proxy: inactive (disabled on boot) >>> >>> openstack-swift-account: inactive (disabled on boot) >>> >>> openstack-swift-container: inactive (disabled on boot) >>> >>> openstack-swift-object: inactive (disabled on boot) >>> >>> == Cinder services == >>> >>> openstack-cinder-api: inactive (disabled on boot) >>> >>> openstack-cinder-scheduler: inactive (disabled on boot) >>> >>> openstack-cinder-volume: inactive (disabled on boot) >>> >>> openstack-cinder-backup: inactive (disabled on boot) >>> >>> == Ceilometer services == >>> >>> openstack-ceilometer-api: inactive (disabled on boot) >>> >>> openstack-ceilometer-central: inactive (disabled on boot) >>> >>> openstack-ceilometer-compute: inactive (disabled on boot) >>> >>> openstack-ceilometer-collector: inactive (disabled on boot) >>> >>> openstack-ceilometer-notification: inactive (disabled on boot) >>> >>> == Heat services == >>> >>> openstack-heat-api: inactive (disabled on boot) >>> >>> openstack-heat-api-cfn: inactive (disabled on boot) >>> >>> openstack-heat-api-cloudwatch: inactive (disabled on boot) >>> >>> openstack-heat-engine: inactive (disabled on boot) >>> >>> == Sahara services == >>> >>> openstack-sahara-all: inactive (disabled on boot) >>> >>> == Support services == >>> >>> libvirtd: active >>> >>> openvswitch: active >>> >>> dbus: active >>> >>> rabbitmq-server: inactive (disabled on boot) >>> >>> memcached: inactive (disabled on boot) >>> >>> >>> >>> >>> >>> >>> >>> Please let me know if there is any other logs which I can provide >>> that can help in troubleshooting. >>> >>> >>> >>> >>> >>> Thanks a lot in Advance for your help and support. >>> >>> >>> >>> Best Regards, >>> >>> Milind Gunjan >>> >>> >>> >>> >>> ________________________________ >>> >>> This e-mail may contain Sprint proprietary information intended for >>> the sole use of the recipient(s). Any use by others is prohibited. >>> If you are not the intended recipient, please contact the sender and >>> delete all copies of the message. >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> ________________________________ >> >> This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. From dms at redhat.com Sat Aug 6 03:27:32 2016 From: dms at redhat.com (David Moreau Simard) Date: Fri, 5 Aug 2016 23:27:32 -0400 Subject: [rdo-list] Failure to build from source for Horizon: three new dependencies (need reviews) Message-ID: Hi, There's a FTBFS for Horizon, they've added three new dependencies [1]. The review.rdo for the ftbfs is here [2]. Considering we're friday and we're likely to be the weekend without a consistent build, I've figured I'd at least go ahead and submit reviews for them ASAP. 
These are my first package reviews ever, be nice :P - XStatic-Angular-Schema-Form: https://bugzilla.redhat.com/show_bug.cgi?id=1364603 - XStatic-objectpath: https://bugzilla.redhat.com/show_bug.cgi?id=1364607 - XStatic-tv4: https://bugzilla.redhat.com/show_bug.cgi?id=1364620 The first one looks okay but the other two, while the koji scratch build works, the fedora-review build fails. It's probably an obvious mistake but I can't spot it right now. Will look later. [1]: https://review.openstack.org/#/c/332745/ [2]: https://review.rdoproject.org/r/#/c/1807/ David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From dms at redhat.com Sat Aug 6 17:22:23 2016 From: dms at redhat.com (David Moreau Simard) Date: Sat, 6 Aug 2016 13:22:23 -0400 Subject: [rdo-list] Failure to build from source for Horizon: three new dependencies (need reviews) In-Reply-To: References: Message-ID: Thanks Haikel ! I've put up new patches with your feedback and it turns out my fedora-review problem was a known issue when building xstatic source [1]. Simply cleaning the fedora-review mockroot resolved that. [1]: https://bitbucket.org/thomaswaldmann/xstatic/issues/2/cannot-build-a-new-xstatic-package-with David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Sat, Aug 6, 2016 at 3:22 AM, Ha?kel Gu?mar wrote: > On 06/08/16 05:27, David Moreau Simard wrote: >> Hi, >> >> There's a FTBFS for Horizon, they've added three new dependencies [1]. >> The review.rdo for the ftbfs is here [2]. >> >> Considering we're friday and we're likely to be the weekend without a >> consistent build, I've figured I'd at least go ahead and submit >> reviews for them ASAP. >> These are my first package reviews ever, be nice :P >> >> - XStatic-Angular-Schema-Form: >> https://bugzilla.redhat.com/show_bug.cgi?id=1364603 >> - XStatic-objectpath: https://bugzilla.redhat.com/show_bug.cgi?id=1364607 >> - XStatic-tv4: https://bugzilla.redhat.com/show_bug.cgi?id=1364620 >> >> The first one looks okay but the other two, while the koji scratch >> build works, the fedora-review build fails. >> It's probably an obvious mistake but I can't spot it right now. Will look later. >> >> [1]: https://review.openstack.org/#/c/332745/ >> [2]: https://review.rdoproject.org/r/#/c/1807/ >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> > > As you're not yet a Fedora packager, I blocked the FE-NEEDSPONSOR > tracker. Sponsoring process requires from you at least, two (good) > informal reviews. Either Fedora or RDO-NEWTON, just link them back to > one of your tickets. > > Feel free re-assign tickets, but until David is sponsored, just wait > before setting fedora-review flag. > I self-assigned the tickets as XStatic embeds javascript library. A > reviewer that does not know that javascript libraries have a "temporary" > bundling exception by packaging committee can make it drag on. > > > Only briefly reviewed them, there are few issues here and there, easy > fixes but not ready to be pre-imported as-is in CBS. > > Regards, > H. > > PS: for packaging topic, you may CC jruzicka, he should be able to help. 
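For anyone hitting the same xstatic review failure, the workaround described above boils down to roughly this (a sketch: the SRPM name and mock chroot are assumptions, and the bug ID is one of the reviews listed earlier):

  koji build --scratch rawhide python-XStatic-objectpath-*.src.rpm   # scratch build, which already passed
  mock -r fedora-rawhide-x86_64 --clean                              # drop the stale mock root that fedora-review reuses
  fedora-review -b 1364607                                           # re-run the review against the Bugzilla entry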
From bderzhavets at hotmail.com Sun Aug 7 07:52:29 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sun, 7 Aug 2016 07:52:29 +0000 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: , Message-ID: TripleO HA Controller been installed via instack-virt-setup has PCS CLI like :- pcs resource cleanup neutron-server-clone pcs resource cleanup openstack-nova-api-clone pcs resource cleanup openstack-nova-consoleauth-clone pcs resource cleanup openstack-heat-engine-clone pcs resource cleanup openstack-cinder-api-clone pcs resource cleanup openstack-glance-registry-clone pcs resource cleanup httpd-clone been working as expected on bare metal Same cluster been setup via QuickStart (Virtual ENV) after bouncing one of controllers included in cluster ignores PCS CLI at least via my experience ( which is obviously limited either format of particular commands is wrong for QuickStart ) I believe that dropping (complete replacing ) instack-virt-setup is not a good idea in general. Personally, I believe that like in case with packstack it is always good to have VENV configuration been tested before going to bare metal deployment. My major concern is maintenance and disaster recovery tests , rather then deployment itself . What good is for me TripleO Quickstart running on bare metal if I cannot replace crashed VM Controller just been limited to Services HA ( all 3 Cluster VMs running on single bare metal node ) Thanks Boris. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Mon Aug 8 06:50:52 2016 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 8 Aug 2016 08:50:52 +0200 Subject: [rdo-list] RDO TripleO Mitaka Overcloud Failing In-Reply-To: References: <6704ed6564014183a08d067cf5b8a5cc@PREWE13M11.ad.sprint.com> <6e1c2211c3ea4970b784308f5ce6d154@PREWE13M11.ad.sprint.com> Message-ID: Hi, It looks like it's unable to connect to the galera cluster. I'd do the following next steps to troubleshoot this: run pcs status and make sure that the galera resources are running, check the haproxy config (/etc/haproxy/haproxy.cfg) for the mysql vip and make sure it's started(as a pcs resource) and reachable. Thanks On Fri, Aug 5, 2016 at 6:12 PM, Gunjan, Milind [CTO] wrote: > Hi Marius, > > Please find the output below from nova-scheduler.log : > > > 2016-08-05 16:08:35.642 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7037 attempts left. > 2016-08-05 16:08:47.666 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7038 attempts left. > 2016-08-05 16:08:59.690 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7039 attempts left. > 2016-08-05 16:09:11.713 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7040 attempts left. > 2016-08-05 16:09:23.738 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7041 attempts left. > 2016-08-05 16:09:35.762 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7042 attempts left. 
> 2016-08-05 16:09:47.786 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7043 attempts left. > 2016-08-05 16:09:59.810 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7044 attempts left. > 2016-08-05 16:10:11.833 15102 WARNING oslo_db.sqlalchemy.engines [req-1febb724-e680-414b-88ba-5c3e02265849 - - - - -] SQL connection failed. -7045 attempts left. > > It seems the SQL connection is failing. I saw the similar messages in neutron logs too. > > What can be the probable cause of this ? > > Best Regards, > Milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Friday, August 05, 2016 11:59 AM > To: Gunjan, Milind [CTO] > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing > > Hi, > > Can you check /var/log/nova/nova-scheduler.log to see if it's got any indication on why it's filing to start? > > Thanks > > On Fri, Aug 5, 2016 at 4:27 PM, Gunjan, Milind [CTO] wrote: >> Hi Marius, >> >> This is what I see when I ran the puppet script in debug mode: >> >> Debug: Executing '/bin/systemctl start neutron-server' >> Error: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. >> Wrapped exception: >> Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. >> Error: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: change from stopped to running failed: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a timeout was exceeded. See "systemctl status neutron-server.service" and "journalctl -xe" for details. 
>> Notice: >> /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-servi >> ce]: Dependency Service[neutron-server] has failures: true >> Warning: >> /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-servi >> ce]: Skipping because of failed dependencies >> Notice: >> /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: >> Dependency Service[neutron-server] has failures: true >> Warning: >> /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: >> Skipping because of failed dependencies >> Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: >> Dependency Service[neutron-server] has failures: true >> Warning: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: >> Skipping because of failed dependencies >> Notice: >> /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: >> Dependency Service[neutron-server] has failures: true >> Warning: >> /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: >> Skipping because of failed dependencies >> >> Aslo, the script is stuck at present at this step : >> >> Debug: Executing '/bin/systemctl start openstack-nova-scheduler' >> >> >> Best Regards, >> Milind >> >> -----Original Message----- >> From: Marius Cornea [mailto:marius at remote-lab.net] >> Sent: Thursday, August 04, 2016 4:26 AM >> To: Gunjan, Milind [CTO] >> Cc: rdo-list at redhat.com >> Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing >> >> OK, I don't actually see an error in the logs, the last thing that shows up is: >> >> on controller-0: >> [DEBUG] Running /var/lib/heat-config/hooks/puppet < >> /var/lib/heat-config/deployed/c989f58d-cd38-4813-a174-7e42c82bcb6f.jso >> n >> >> on compute-0: >> [DEBUG] Running /var/lib/heat-config/hooks/puppet < >> /var/lib/heat-config/deployed/c5265c58-96ae-49d5-9c1e-a38041e2b130.jso >> n >> >> I suspect these steps are timing out so let's try running them manually to figure out what's going on: >> >> Running the commands manually will output a puppet apply command, showing one from my environment as an example: >> >> # /var/lib/heat-config/hooks/puppet < >> /var/lib/heat-config/deployed/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.jso >> n >> >> [2016-08-04 08:12:21,609] (heat-config) [DEBUG] Running FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" >> FACTER_fqdn="overcloud-controller-0.localdomain" >> FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" >> puppet apply --detailed-exitcodes >> /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5 >> de5bff.pp >> >> Next step is to stop it(ctrl+c), copy the puppet apply command, add --debug and run it: >> >> # FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff" >> FACTER_fqdn="overcloud-controller-0.localdomain" >> FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" >> puppet apply --detailed-exitcodes >> /var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5 >> de5bff.pp >> --debug >> >> This should output puppet debug info that might lead us to where it gets stuck. Please paste the output so we can investigate further. >> >> Thanks >> >> On Thu, Aug 4, 2016 at 3:22 AM, Gunjan, Milind [CTO] wrote: >>> Thanks a lot Christopher for the suggestions. >>> >>> Marius: Thanks a lot for helping me out. I am attaching the requested logs. >>> >>> I tried to redeploy overcloud with 3 controller but the issue remains the same. 
Overcloud stack deployment is failing at Post-deployment configuration steps as before. When I was going to /var/log/messages for different services, it seems there is issue with haproxy service. Neutron service is failing too and the service endpoints being configured through puppet are not reachable for all failed service. I have attached os-collect-config journals from all four nodes. >>> >>> >>> Please let me know if there is any other logs or any other troubleshooting steps which I can implement. >>> >>> Best Regards, >>> Milind >>> >>> -----Original Message----- >>> From: Marius Cornea [mailto:marius at remote-lab.net] >>> Sent: Wednesday, August 03, 2016 4:00 PM >>> To: Gunjan, Milind [CTO] >>> Cc: rdo-list at redhat.com >>> Subject: Re: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing >>> >>> Hi, >>> >>> Could you please ssh to the nodes, gather the os-collect-config journals (journalctl -l -u os-collect-config) and attach them here? >>> >>> Thank you, >>> Marius >>> >>> On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] wrote: >>>> Hi All, >>>> >>>> >>>> >>>> I am currently working on Tripleo Mitaka Openstack deployment on >>>> baremetal >>>> servers: >>>> >>>> Undercloud ? 1 baremetal server with 2 NIC (1 for provisioning and >>>> 2nd for external network connectivity) >>>> >>>> Controller ? 1 baremetal server ( 6 NICs with each openstack VLANs >>>> on separate NIC) >>>> >>>> Compute ? 1 baremetal server >>>> >>>> >>>> >>>> I followed Graeme's instructions here : >>>> https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html to >>>> set up Undercloud . Undercloud deployment was successful and all the >>>> images required for overcloud deployment was properly built as per the instruction. >>>> I would like to mention that I used libvirt tools to modify the root >>>> password on overcloud-full.qcow2 and we also modified the grub file >>>> to include ?net.ifnames=0 biosdevname=0? to restore old interface naming. >>>> >>>> >>>> >>>> I was able to successfully introspect 2 serves to be used for >>>> controller and compute nodes. Also , we added the serial device >>>> discovered during introspection as root device: >>>> >>>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >>>> properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}' >>>> >>>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >>>> properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}' >>>> >>>> >>>> >>>> Next, we added compute and control tag to respective introspected >>>> node with local boot option: >>>> >>>> >>>> >>>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add >>>> properties/capabilities='profile:control,boot_option:local' >>>> >>>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add >>>> properties/capabilities='profile:compute,boot_option:local' >>>> >>>> >>>> >>>> We used multiple NIC templates for control and compute node which >>>> has been attached along with network-environment.yaml file. Default >>>> network isolation template file has been used. 
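Circling back to the galera suggestion at the top of this message, a minimal first pass on the controller could look like the following (a sketch; resource names follow the usual TripleO layout and may differ in your deployment):

  pcs status | grep -iE 'galera|haproxy'     # both should report Started on the controller(s)
  grep -A10 mysql /etc/haproxy/haproxy.cfg   # note the VIP and port haproxy fronts for mysql
  ss -tnlp | grep 3306                       # confirm something is actually listening there
  pcs resource cleanup galera                # only if pcs status shows failed actions for it

If the galera resource never reaches Started, its failure messages in pcs status and the mariadb logs are the next place to look.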
>>>> >>>> >>>> >>>> >>>> >>>> Deployment script looks like this : >>>> >>>> #!/bin/bash >>>> >>>> DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" >>>> >>>> template_base_dir="$DIR" >>>> >>>> ntpserver= #Sprint LAB >>>> >>>> openstack overcloud deploy --templates \ >>>> >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/network-iso >>>> l >>>> a >>>> tion.yaml >>>> \ >>>> >>>> -e ${template_base_dir}/environments/network-environment.yaml \ >>>> >>>> --control-flavor control --compute-flavor compute \ >>>> >>>> --control-scale 1 --compute-scale 1 \ >>>> >>>> --ntp-server $ntpserver \ >>>> >>>> --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug >>>> >>>> >>>> >>>> Heat stack deployment goes on more really long time (more than 4 >>>> hours) and gets stuck at postdeployment configurations. Please find >>>> below the capture during install : >>>> >>>> >>>> >>>> >>>> >>>> Every 2.0s: ironic node-list && nova list && heat stack-list && heat >>>> resource-list -n5 overcloud | grep -vi complete Wed Aug 3 17:33:37 >>>> 2016 >>>> >>>> >>>> >>>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>>> >>>> | UUID | Name | Instance UUID >>>> | Power State | Provisioning State | Maintenance | >>>> >>>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>>> >>>> | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None | >>>> 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active | >>>> False | >>>> >>>> | afcfbee3-3108-48da-a6da-aba8f422642c | None | >>>> c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active | >>>> False | >>>> >>>> +--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+ >>>> >>>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>>> >>>> | ID | Name | Status | >>>> Task State | Power State | Networks | >>>> >>>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>>> >>>> | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 | >>>> | ACTIVE | >>>> - | Running | ctlplane=192.168.149.9 | >>>> >>>> | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 | >>>> | ACTIVE | >>>> - | Running | ctlplane=192.168.149.8 | >>>> >>>> +--------------------------------------+-------------------------+--------+------------+-------------+------------------------+ >>>> >>>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>>> >>>> | id | stack_name | stack_status | >>>> creation_time | updated_time | >>>> >>>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>>> >>>> | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED >>>> | | >>>> 2016-08-03T08:11:34 | None | >>>> >>>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>>> >>>> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >>>> >>>> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ 
>>>> >>>> | resource_name | physical_resource_id >>>> | resource_type >>>> >>>> | resource_status | updated_time | stack_name >>>> | >>>> >>>> +---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------ >>>> >>>> ---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+ >>>> >>>> | ComputeNodesPostDeployment | >>>> 3797aec6-e543-4dda-9cd1-c7261e827a64 | >>>> OS::TripleO::ComputePostDeployment >>>> >>>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >>>> | >>>> >>>> | ControllerNodesPostDeployment | >>>> 6ad9f88c-5c55-4125-97f1-eb0e33329d16 | >>>> OS::TripleO::ControllerPostDeployment >>>> >>>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud >>>> | >>>> >>>> | ComputePuppetDeployment | >>>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >>>> OS::Heat::StructuredDeployments >>>> >>>> | CREATE_FAILED | 2016-08-03T08:29:19 | >>>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy >>>> | >>>> >>>> | ControllerOvercloudServicesDeployment_Step4 | >>>> 15509f59-ff28-43af-95dd-6247a6a32c2d | >>>> OS::Heat::StructuredDeployments >>>> >>>> | CREATE_FAILED | 2016-08-03T08:29:20 | >>>> overcloud-ControllerNodesPostDeployment-35y7uafngfwj >>>> | >>>> >>>> | 0 | >>>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>>> OS::Heat::StructuredDeployment >>>> >>>> | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 | >>>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeplo >>>> y >>>> m >>>> ent-cpahcct3tfw3 >>>> | >>>> >>>> | 0 | >>>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>>> OS::Heat::StructuredDeployment >>>> >>>> >>>> >>>> >>>> >>>> [stack at mitaka-uc ~]$ openstack software deployment show >>>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> | Field | Value | >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>>> >>>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >>>> >>>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >>>> >>>> | creation_time | 2016-08-03T08:32:10 | >>>> >>>> | updated_time | | >>>> >>>> | status | IN_PROGRESS | >>>> >>>> | status_reason | Deploy data available | >>>> >>>> | input_values | {} | >>>> >>>> | action | CREATE | >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> >>>> >>>> [stack at mitaka-uc ~]$ openstack software deployment show --long >>>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> | Field | Value | >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 | >>>> >>>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 | >>>> >>>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 | >>>> >>>> | creation_time | 2016-08-03T08:32:10 | >>>> >>>> | updated_time | | >>>> >>>> | status | IN_PROGRESS | >>>> >>>> | status_reason | Deploy data available | >>>> >>>> | input_values | {} | >>>> >>>> | action | CREATE | >>>> >>>> | output_values | None | >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> >>>> >>>> [stack at mitaka-uc ~]$ openstack stack resource list >>>> 3797aec6-e543-4dda-9cd1-c7261e827a64 >>>> >>>> 
+-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>>> >>>> | resource_name | physical_resource_id | >>>> resource_type | resource_status | >>>> updated_time | >>>> >>>> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>>> >>>> | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f | >>>> file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE | >>>> 2016-08-03T08:29:19 | >>>> >>>> | | | >>>> templates/puppet/deploy-artifacts.yaml | | >>>> | >>>> >>>> | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 | >>>> OS::Heat::SoftwareConfig | CREATE_COMPLETE | >>>> 2016-08-03T08:29:19 | >>>> >>>> | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f | >>>> OS::Heat::StructuredDeployments | CREATE_FAILED | >>>> 2016-08-03T08:29:19 | >>>> >>>> | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a | >>>> OS::Heat::StructuredDeployments | CREATE_COMPLETE | >>>> 2016-08-03T08:29:19 | >>>> >>>> | ExtraConfig | | >>>> OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE | >>>> 2016-08-03T08:29:19 | >>>> >>>> +-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+ >>>> >>>> >>>> >>>> [stack at mitaka-uc ~]$ openstack stack resource list >>>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f >>>> >>>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>>> >>>> | resource_name | physical_resource_id | resource_type >>>> | resource_status | updated_time | >>>> >>>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>>> >>>> | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>>> OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS | >>>> 2016-08-03T08:30:04 | >>>> >>>> +---------------+--------------------------------------+--------------------------------+--------------------+---------------------+ >>>> >>>> [stack at mitaka-uc ~]$ openstack software deployment show >>>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> | Field | Value | >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e | >>>> >>>> | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | >>>> >>>> | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 | >>>> >>>> | creation_time | 2016-08-03T08:30:05 | >>>> >>>> | updated_time | | >>>> >>>> | status | IN_PROGRESS | >>>> >>>> | status_reason | Deploy data available | >>>> >>>> | input_values | {} | >>>> >>>> | action | CREATE | >>>> >>>> +---------------+--------------------------------------+ >>>> >>>> >>>> >>>> Keystonerc file was not generated. Please find below openstack >>>> status command result on controller and compute. 
>>>> >>>> >>>> >>>> [heat-admin at overcloud-controller-0 ~]$ openstack-status >>>> >>>> == Nova services == >>>> >>>> openstack-nova-api: active >>>> >>>> openstack-nova-compute: inactive (disabled on boot) >>>> >>>> openstack-nova-network: inactive (disabled on boot) >>>> >>>> openstack-nova-scheduler: activating(disabled on boot) >>>> >>>> openstack-nova-cert: active >>>> >>>> openstack-nova-conductor: active >>>> >>>> openstack-nova-console: inactive (disabled on boot) >>>> >>>> openstack-nova-consoleauth: active >>>> >>>> openstack-nova-xvpvncproxy: inactive (disabled on boot) >>>> >>>> == Glance services == >>>> >>>> openstack-glance-api: active >>>> >>>> openstack-glance-registry: active >>>> >>>> == Keystone service == >>>> >>>> openstack-keystone: inactive (disabled on boot) >>>> >>>> == Horizon service == >>>> >>>> openstack-dashboard: uncontactable >>>> >>>> == neutron services == >>>> >>>> neutron-server: failed (disabled on boot) >>>> >>>> neutron-dhcp-agent: inactive (disabled on boot) >>>> >>>> neutron-l3-agent: inactive (disabled on boot) >>>> >>>> neutron-metadata-agent: inactive (disabled on boot) >>>> >>>> neutron-lbaas-agent: inactive (disabled on boot) >>>> >>>> neutron-openvswitch-agent: inactive (disabled on boot) >>>> >>>> neutron-metering-agent: inactive (disabled on boot) >>>> >>>> == Swift services == >>>> >>>> openstack-swift-proxy: active >>>> >>>> openstack-swift-account: active >>>> >>>> openstack-swift-container: active >>>> >>>> openstack-swift-object: active >>>> >>>> == Cinder services == >>>> >>>> openstack-cinder-api: active >>>> >>>> openstack-cinder-scheduler: active >>>> >>>> openstack-cinder-volume: active >>>> >>>> openstack-cinder-backup: inactive (disabled on boot) >>>> >>>> == Ceilometer services == >>>> >>>> openstack-ceilometer-api: active >>>> >>>> openstack-ceilometer-central: active >>>> >>>> openstack-ceilometer-compute: inactive (disabled on boot) >>>> >>>> openstack-ceilometer-collector: active >>>> >>>> openstack-ceilometer-notification: active >>>> >>>> == Heat services == >>>> >>>> openstack-heat-api: inactive (disabled on boot) >>>> >>>> openstack-heat-api-cfn: active >>>> >>>> openstack-heat-api-cloudwatch: inactive (disabled on boot) >>>> >>>> openstack-heat-engine: inactive (disabled on boot) >>>> >>>> == Sahara services == >>>> >>>> openstack-sahara-api: active >>>> >>>> openstack-sahara-engine: active >>>> >>>> == Support services == >>>> >>>> libvirtd: active >>>> >>>> openvswitch: active >>>> >>>> dbus: active >>>> >>>> target: active >>>> >>>> rabbitmq-server: active >>>> >>>> memcached: active >>>> >>>> >>>> >>>> >>>> >>>> [heat-admin at overcloud-novacompute-0 ~]$ openstack-status >>>> >>>> == Nova services == >>>> >>>> openstack-nova-api: inactive (disabled on boot) >>>> >>>> openstack-nova-compute: activating(disabled on boot) >>>> >>>> openstack-nova-network: inactive (disabled on boot) >>>> >>>> openstack-nova-scheduler: inactive (disabled on boot) >>>> >>>> openstack-nova-cert: inactive (disabled on boot) >>>> >>>> openstack-nova-conductor: inactive (disabled on boot) >>>> >>>> openstack-nova-console: inactive (disabled on boot) >>>> >>>> openstack-nova-consoleauth: inactive (disabled on boot) >>>> >>>> openstack-nova-xvpvncproxy: inactive (disabled on boot) >>>> >>>> == Glance services == >>>> >>>> openstack-glance-api: inactive (disabled on boot) >>>> >>>> openstack-glance-registry: inactive (disabled on boot) >>>> >>>> == Keystone service == >>>> >>>> openstack-keystone: inactive (disabled on boot) >>>> >>>> == 
Horizon service == >>>> >>>> openstack-dashboard: uncontactable >>>> >>>> == neutron services == >>>> >>>> neutron-server: inactive (disabled on boot) >>>> >>>> neutron-dhcp-agent: inactive (disabled on boot) >>>> >>>> neutron-l3-agent: inactive (disabled on boot) >>>> >>>> neutron-metadata-agent: inactive (disabled on boot) >>>> >>>> neutron-lbaas-agent: inactive (disabled on boot) >>>> >>>> neutron-openvswitch-agent: active >>>> >>>> neutron-metering-agent: inactive (disabled on boot) >>>> >>>> == Swift services == >>>> >>>> openstack-swift-proxy: inactive (disabled on boot) >>>> >>>> openstack-swift-account: inactive (disabled on boot) >>>> >>>> openstack-swift-container: inactive (disabled on boot) >>>> >>>> openstack-swift-object: inactive (disabled on boot) >>>> >>>> == Cinder services == >>>> >>>> openstack-cinder-api: inactive (disabled on boot) >>>> >>>> openstack-cinder-scheduler: inactive (disabled on boot) >>>> >>>> openstack-cinder-volume: inactive (disabled on boot) >>>> >>>> openstack-cinder-backup: inactive (disabled on boot) >>>> >>>> == Ceilometer services == >>>> >>>> openstack-ceilometer-api: inactive (disabled on boot) >>>> >>>> openstack-ceilometer-central: inactive (disabled on boot) >>>> >>>> openstack-ceilometer-compute: inactive (disabled on boot) >>>> >>>> openstack-ceilometer-collector: inactive (disabled on boot) >>>> >>>> openstack-ceilometer-notification: inactive (disabled on boot) >>>> >>>> == Heat services == >>>> >>>> openstack-heat-api: inactive (disabled on boot) >>>> >>>> openstack-heat-api-cfn: inactive (disabled on boot) >>>> >>>> openstack-heat-api-cloudwatch: inactive (disabled on boot) >>>> >>>> openstack-heat-engine: inactive (disabled on boot) >>>> >>>> == Sahara services == >>>> >>>> openstack-sahara-all: inactive (disabled on boot) >>>> >>>> == Support services == >>>> >>>> libvirtd: active >>>> >>>> openvswitch: active >>>> >>>> dbus: active >>>> >>>> rabbitmq-server: inactive (disabled on boot) >>>> >>>> memcached: inactive (disabled on boot) >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> Please let me know if there is any other logs which I can provide >>>> that can help in troubleshooting. >>>> >>>> >>>> >>>> >>>> >>>> Thanks a lot in Advance for your help and support. >>>> >>>> >>>> >>>> Best Regards, >>>> >>>> Milind Gunjan >>>> >>>> >>>> >>>> >>>> ________________________________ >>>> >>>> This e-mail may contain Sprint proprietary information intended for >>>> the sole use of the recipient(s). Any use by others is prohibited. >>>> If you are not the intended recipient, please contact the sender and >>>> delete all copies of the message. >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> ________________________________ >>> >>> This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. 
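For anyone landing on this thread with the same symptom, a minimal debugging sketch for the stuck step on the controller; the unit names and log paths below are the usual Mitaka defaults and the commands are illustrative, not taken from the exchange above:

    # Is the puppet apply started by the heat-config hook still running?
    sudo ps -ef | grep 'puppet apply'
    # neutron-server failed, which is why its agents were skipped as dependencies
    sudo systemctl status neutron-server
    sudo journalctl -u neutron-server --no-pager -n 100
    sudo tail -n 100 /var/log/neutron/server.log
    # openstack-nova-scheduler was reported as "activating"; check what it is waiting on
    sudo systemctl status openstack-nova-scheduler
    sudo journalctl -u openstack-nova-scheduler --no-pager -n 100

Services hanging in "activating" during this step are commonly waiting on the database or RabbitMQ behind the haproxy-managed VIPs, which would line up with the haproxy suspicion and the unreachable service endpoints mentioned earlier in the thread.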
From adarazs at redhat.com Mon Aug 8 11:03:59 2016 From: adarazs at redhat.com (Attila Darazs) Date: Mon, 8 Aug 2016 13:03:59 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) Message-ID: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> Promotion jobs -------------- As I took responsibility for the RDO image promotion pipelines[1] during trown's absence, let me recap what's up with the Master/Mitaka/Liberty jobs, as they are all blocked. Master: it's now blocked by a bug in Mistral[2], for which the Mistral PTL has promised a fix today or tomorrow. Mitaka & Liberty: both are blocked by an error during openstack undercloud install[3]; it looks like a packaging error. This bug needs attention. What's new ---------- We implemented 3rd party *testing* jobs that run using the unpromoted images. They can be very useful if you're testing a change that's supposed to fix a promotion error. They can be invoked by commenting 'rdo-ci-testing' on review.openstack.org changes. Currently this doesn't trigger on every project, so let me know if you need one added. Also ping me if you're having trouble with them. I'm adarazs on #rdo. Best regards, Attila [1] https://ci.centos.org/view/rdo/view/promotion-pipeline/ [2] https://bugs.launchpad.net/mistral/+bug/1610269 [3] https://bugzilla.redhat.com/show_bug.cgi?id=1364789 From whayutin at redhat.com Mon Aug 8 12:43:43 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 8 Aug 2016 08:43:43 -0400 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: Message-ID: Attila, Raoul, Can you please investigate this issue? Thanks! On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets wrote: > A TripleO HA controller installed via instack-virt-setup has PCS CLI commands > like: > > pcs resource cleanup neutron-server-clone > pcs resource cleanup openstack-nova-api-clone > pcs resource cleanup openstack-nova-consoleauth-clone > pcs resource cleanup openstack-heat-engine-clone > pcs resource cleanup openstack-cinder-api-clone > pcs resource cleanup openstack-glance-registry-clone > pcs resource cleanup httpd-clone > > which have been working as expected on bare metal. > > > The same cluster set up via QuickStart (virtual env), after bouncing one > of the controllers > > in the cluster, ignores the PCS CLI, at least in my experience ( which is > obviously limited, > > or the format of the particular commands is wrong for QuickStart ). > > I believe that dropping (completely replacing) instack-virt-setup is not a > good idea in general. Personally, I believe that, as with > packstack, it is always good > > to have the virtual-env configuration tested before going to bare metal > deployment. > > My major concern is maintenance and disaster recovery tests, rather than > deployment itself. What good is TripleO Quickstart running on bare > metal to me if I cannot replace a > > crashed VM controller and am instead limited to services HA ( all 3 cluster VMs > running on a single > > bare metal node )? > > > Thanks > > Boris. > > > > > ------------------------------ > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed...
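As a reference point for the pcs behaviour described above, a short sketch of how cluster state is usually checked around a resource cleanup on a TripleO HA controller; these are the stock pacemaker/pcs commands and are given only as an illustration, not as something taken from the original mail:

    # Overall cluster health, membership and any failed resource actions
    sudo pcs status
    # Clear the failure history of one clone set, then re-check it
    sudo pcs resource cleanup neutron-server-clone
    sudo pcs status resources
    # One-shot view straight from pacemaker, useful to compare with the pcs output
    sudo crm_mon -1

Comparing pcs status before and after the cleanup (and confirming all three controllers show as online) is usually the quickest way to tell whether the command reached the cluster at all or was silently ignored.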
URL: From apevec at redhat.com Mon Aug 8 13:37:32 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 8 Aug 2016 15:37:32 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> Message-ID: > Mitaka & Liberty: both are blocked by an error during openstack undercloud install[3], looks like a packaging error. This bug needs attention. osc-lib is newton master only change, how did it end up in mitaka/liberty tripleo? Please do not add this dep in packaging, check where it comes from. > [3] https://bugzilla.redhat.com/show_bug.cgi?id=1364789 -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at redhat.com Mon Aug 8 13:44:30 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 8 Aug 2016 15:44:30 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> Message-ID: > osc-lib is newton master only change, how did it end up in mitaka/liberty tripleo? ok, trace shows it is in gnocchiclient, is that build before https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From adarazs at redhat.com Mon Aug 8 13:48:50 2016 From: adarazs at redhat.com (Attila Darazs) Date: Mon, 8 Aug 2016 15:48:50 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> Message-ID: <6bb21942-f21d-f21b-9560-55db8ef42f52@redhat.com> On 08/08/2016 03:37 PM, Alan Pevec wrote: > >> Mitaka & Liberty: both are blocked by an error during openstack > undercloud install[3], looks like a packaging error. This bug needs > attention. > > osc-lib is newton master only change, how did it end up in > mitaka/liberty tripleo? > Please do not add this dep in packaging, check where it comes from. According to the traceback[1], it comes from gnocchiclient. It's here: https://github.com/openstack/python-gnocchiclient/blob/master/gnocchiclient/osc.py#L15 Which, I assume is built from the master branch of the repo. How should it be solved then? A. [1] https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-mitaka-delorean-minimal-657/undercloud/home/stack/undercloud_install.log.gz >> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1364789 > From adarazs at redhat.com Mon Aug 8 13:54:55 2016 From: adarazs at redhat.com (Attila Darazs) Date: Mon, 8 Aug 2016 15:54:55 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> Message-ID: <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> On 08/08/2016 03:44 PM, Alan Pevec wrote: >> osc-lib is newton master only change, how did it end up in > mitaka/liberty tripleo? > > ok, trace shows it is in gnocchiclient, is that build before > https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e > ? 
According to: https://trunk.rdoproject.org/centos7-mitaka/report.html it looks like the latest build of that package is still using the patch that added osc-lib: [1] http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d So it seems to me we should make the same change for Liberty (I'll make it), also why didn't the DLRN build trigger after a distgit change? Attila From amoralej at redhat.com Mon Aug 8 14:01:10 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Mon, 8 Aug 2016 16:01:10 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <6bb21942-f21d-f21b-9560-55db8ef42f52@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <6bb21942-f21d-f21b-9560-55db8ef42f52@redhat.com> Message-ID: On Mon, Aug 8, 2016 at 3:48 PM, Attila Darazs wrote: > On 08/08/2016 03:37 PM, Alan Pevec wrote: >> >> >>> Mitaka & Liberty: both are blocked by an error during openstack >> >> undercloud install[3], looks like a packaging error. This bug needs >> attention. >> >> osc-lib is newton master only change, how did it end up in >> mitaka/liberty tripleo? >> Please do not add this dep in packaging, check where it comes from. > > > According to the traceback[1], it comes from gnocchiclient. It's here: > > https://github.com/openstack/python-gnocchiclient/blob/master/gnocchiclient/osc.py#L15 > > Which, I assume is built from the master branch of the repo. How should it > be solved then? > I guess we need to pin mitaka to stable/2.2 branch. It's partially implemented in https://review.rdoproject.org/r/#/c/1809/ but as dlrn already built a later version is still using it. I'll check how to do it safely. > A. > > [1] > https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-mitaka-delorean-minimal-657/undercloud/home/stack/undercloud_install.log.gz > > > >>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1364789 >> >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Mon Aug 8 14:05:46 2016 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 8 Aug 2016 10:05:46 -0400 (EDT) Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> Message-ID: <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On 08/08/2016 03:44 PM, Alan Pevec wrote: > >> osc-lib is newton master only change, how did it end up in > > mitaka/liberty tripleo? > > > > ok, trace shows it is in gnocchiclient, is that build before > > https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e > > ? > > According to: > > https://trunk.rdoproject.org/centos7-mitaka/report.html > > it looks like the latest build of that package is still using the patch > that added osc-lib: > > [1] > http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d > > So it seems to me we should make the same change for Liberty (I'll make > it), also why didn't the DLRN build trigger after a distgit change? > Yes, the same change should be done for Liberty. 
About the second question, the last commit from the current source branch (stable/2.2) is older than the latest commit for the previous branch (master), so it is ignored. We can fix that by removing the commits from the DB and manually building the new gnocchiclient (will do now). Regards, Javier > Attila > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From adarazs at redhat.com Mon Aug 8 14:14:24 2016 From: adarazs at redhat.com (Attila Darazs) Date: Mon, 8 Aug 2016 16:14:24 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> Message-ID: <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> On 08/08/2016 04:05 PM, Javier Pena wrote: > > > ----- Original Message ----- >> On 08/08/2016 03:44 PM, Alan Pevec wrote: >>>> osc-lib is newton master only change, how did it end up in >>> mitaka/liberty tripleo? >>> >>> ok, trace shows it is in gnocchiclient, is that build before >>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e >>> ? >> >> According to: >> >> https://trunk.rdoproject.org/centos7-mitaka/report.html >> >> it looks like the latest build of that package is still using the patch >> that added osc-lib: >> >> [1] >> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d >> >> So it seems to me we should make the same change for Liberty (I'll make >> it), also why didn't the DLRN build trigger after a distgit change? >> > > Yes, the same change should be done for Liberty. I created a review for it: http://review.rdoproject.org/r/1817 > About the second question, the last commit from the current source branch (stable/2.2) is older than the latest commit for the previous branch (master), so it is ignored. We can fix that by removing the commits from the DB and manually building the new gnocchiclient (will do now). Please submit that as well and do the same hack for liberty. Thank you for your help, Attila > Regards, > Javier > > >> Attila >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> From javier.pena at redhat.com Mon Aug 8 14:28:02 2016 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 8 Aug 2016 10:28:02 -0400 (EDT) Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> Message-ID: <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On 08/08/2016 04:05 PM, Javier Pena wrote: > > > > > > ----- Original Message ----- > >> On 08/08/2016 03:44 PM, Alan Pevec wrote: > >>>> osc-lib is newton master only change, how did it end up in > >>> mitaka/liberty tripleo? 
> >>> > >>> ok, trace shows it is in gnocchiclient, is that build before > >>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e > >>> ? > >> > >> According to: > >> > >> https://trunk.rdoproject.org/centos7-mitaka/report.html > >> > >> it looks like the latest build of that package is still using the patch > >> that added osc-lib: > >> > >> [1] > >> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d > >> > >> So it seems to me we should make the same change for Liberty (I'll make > >> it), also why didn't the DLRN build trigger after a distgit change? > >> > > > > Yes, the same change should be done for Liberty. > > I created a review for it: http://review.rdoproject.org/r/1817 > > > About the second question, the last commit from the current source branch > > (stable/2.2) is older than the latest commit for the previous branch > > (master), so it is ignored. We can fix that by removing the commits from > > the DB and manually building the new gnocchiclient (will do now). > > Please submit that as well and do the same hack for liberty. > I've rebuilt gnocchiclient for liberty and mitaka. Regards, Javier > Thank you for your help, > Attila > > > Regards, > > Javier > > > > > >> Attila > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From bderzhavets at hotmail.com Mon Aug 8 14:52:55 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 8 Aug 2016 14:52:55 +0000 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com>, <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Javier Pena Sent: Monday, August 8, 2016 10:28 AM To: rdo-list Cc: alan pevec Subject: Re: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) ----- Original Message ----- > On 08/08/2016 04:05 PM, Javier Pena wrote: > > > > > > ----- Original Message ----- > >> On 08/08/2016 03:44 PM, Alan Pevec wrote: > >>>> osc-lib is newton master only change, how did it end up in > >>> mitaka/liberty tripleo? > >>> > >>> ok, trace shows it is in gnocchiclient, is that build before > >>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e [https://avatars2.githubusercontent.com/u/145846?v=3&s=200] Set gnocchiclient mitaka to stable/2.2 ? redhat-openstack/rdoinfo at 101adb0 github.com This is so we dont break mitaka due to a recent change introduced in gnocchi client introducing osc_lib. see https://review.openstack.org/#/c/343877/ Change-Id: I35c82883ef637c92151621255a40c9213d... > >>> ? 
> >> > >> According to: > >> > >> https://trunk.rdoproject.org/centos7-mitaka/report.html > >> > >> it looks like the latest build of that package is still using the patch > >> that added osc-lib: > >> > >> [1] > >> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d > >> > >> So it seems to me we should make the same change for Liberty (I'll make > >> it), also why didn't the DLRN build trigger after a distgit change? > >> > > > > Yes, the same change should be done for Liberty. > > I created a review for it: http://review.rdoproject.org/r/1817 > > > About the second question, the last commit from the current source branch > > (stable/2.2) is older than the latest commit for the previous branch > > (master), so it is ignored. We can fix that by removing the commits from > > the DB and manually building the new gnocchiclient (will do now). > > Please submit that as well and do the same hack for liberty. > I've rebuilt gnocchiclient for liberty and mitaka. Does it mean that Mitaka HA instack-virt-setup on CentOS 7.2 VIRTHOST based on delorean repos :- http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo will not require any more osc-lib to run `openstack undercloud install` with no errors. If answer is no, how I am supposed to manage to avoid install osc-lib before running `openstack undercloud install` Python tripleoclient has keystoneauth 2.4 what causes trace in BZ https://bugzilla.redhat.com/show_bug.cgi?id=1364789 pre-installed osc-lib (on INSTACK VM ) updates it to 2.11 so undercloud gets deployed , but overcloud deployments get affected, which worked fine just several days ago. Please , see 3 use cases in BZ record. Thanks. Boris. Regards, Javier > Thank you for your help, > Attila > > > Regards, > > Javier > > > > > >> Attila > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at progbau.de Mon Aug 8 14:57:49 2016 From: contact at progbau.de (Chris) Date: Mon, 08 Aug 2016 21:57:49 +0700 Subject: [rdo-list] Swift new starter questions (POC) Message-ID: <57A89DED.9030306@progbau.de> Hello, We are currently playing with swift and try to find out if it would be useful for our needs. We would use the latest mitaka release. It would be a multi-region deployment in around 4-5 different countries/DCs with a total size of 5 TB and millions of small files. Worst case RTT between two locations would be 400ms. I came across some questions I want to ask. 1) As far as I understood Swift returns a 200 OK if the majority of replicas are written. Is it possible to change this ratio? 2) Does the write affinity now includes containers and accounts? Saw a presentation from the Symantic guys on the Openstack summit where this topic came up. 
And the access time spikes if this information is transferred over the WAN. 2a) If not, does it make sense to do some kind of warm-up and get the container and account information saved in memcache for faster access? Does anybody have experience with this approach? 3) Let's say we have 5 regions and a replication factor of 3. When we use write affinity, 12 copies would be written in the local cluster and then replicated in the background to the other regions? Isn't there a better way to avoid unnecessary local writes if they would be transferred away anyway? 4) What will happen if the WAN connection to a DC fails? Is there an option to not create another replica in other DCs just because the WAN is off for some minutes? 5) In a Swift multi-region deployment, is it possible to have local Keystone services in every region/DC, or does all authentication need to come from the same Keystone machine? Any help and ideas appreciated! Cheers Chris From hguemar at fedoraproject.org Mon Aug 8 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 8 Aug 2016 15:00:03 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160808150003.D578D60A4003@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-08-10 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From dms at redhat.com Mon Aug 8 15:11:44 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 8 Aug 2016 11:11:44 -0400 Subject: [rdo-list] FYI: New slaves now in use on ci.centos.org for RDO In-Reply-To: References: Message-ID: I've scaled down the amount of total threads we had after being notified by the ci.centos.org team that our usage had gone significantly up [1]. We had 16 threads before and with the new slaves we were up to 34 threads. I've scaled down this number back to 24 for the time being to see if that improves the situation. [1] https://lists.centos.org/pipermail/ci-users/2016-August/000330.html David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 3, 2016 at 2:31 PM, David Moreau Simard wrote: > We noticed that the promotion jobs relied on a property file created > locally on the slave. > > Since promotion jobs rely on that property file to share parameters > (such as which trunk repository jobs should be testing) and jobs could > be running on any of the four slaves, this was problematic. > > For the time being, promotion jobs have been pinned to a single slave > but we are looking at a way to remove this limitation to benefit from > the redundancy and increased capacity. > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Aug 3, 2016 at 12:40 PM, David Moreau Simard wrote: >> Now that's a lot of running jobs [1] :) >> >> [1]: http://i.imgur.com/s7Cq53M.png >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Wed, Aug 3, 2016 at 12:14 PM, David Moreau Simard wrote: >>> Hi, >>> >>> We've tested successfully three new slaves out of the "beta" OpenStack >>> cloud on ci.centos.org.
>>> We're going to be lowering the amount of threads on our existing slave >>> and spread the load evenly across the 4 slaves. >>> >>> The objective is two-fold: >>> - Spread load evenly across four slaves rather than one: redundancy >>> and additional capacity/concurrency >>> - Test real workloads on the ci.centos.org OpenStack cloud before it >>> is opened up to additional tenants >>> >>> I will be monitoring closely (moreso than usual) the jobs but >>> everything /should/ work. >>> You can tell on which slave a particular job was run from at the very >>> beginning of the console output, it looks like this: >>> "Building remotely on rdo-ci-cloudslave01 (rdo) in workspace [...]" >>> >>> If you notice anything odd about jobs running on the new cloudslaves >>> machines, please let me know directly. >>> >>> Thanks ! >>> >>> David Moreau Simard >>> Senior Software Engineer | Openstack RDO >>> >>> dmsimard = [irc, github, twitter] From rbowen at redhat.com Mon Aug 8 15:59:44 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 8 Aug 2016 11:59:44 -0400 Subject: [rdo-list] Unanswered RDO questions, ask.openstack.org Message-ID: <29ede1c1-4926-a0a9-7e5d-df19fddd345e@redhat.com> 42 unanswered questions: RDO TripleO Mitaka HA Overcloud Failing https://ask.openstack.org/en/question/95249/rdo-tripleo-mitaka-ha-overcloud-failing/ Tags: mitaka, triple-o, overcloud, cento7 cinder volumes attached but not available during OS install https://ask.openstack.org/en/question/95223/cinder-volumes-attached-but-not-available-during-os-install/ Tags: mitaka, cinder, install RDO - is there any fedora package newer than puppet-4.2.1-3.fc24.noarch.rpm https://ask.openstack.org/en/question/94969/rdo-is-there-any-fedora-package-newer-than-puppet-421-3fc24noarchrpm/ Tags: rdo, puppet, install-openstack OpenStack RDO mysqld 100% cpu https://ask.openstack.org/en/question/94961/openstack-rdo-mysqld-100-cpu/ Tags: openstack, mysqld, cpu Failed to set RDO repo on host-packstact-centOS-7 https://ask.openstack.org/en/question/94828/failed-to-set-rdo-repo-on-host-packstact-centos-7/ Tags: openstack-packstack, centos7, rdo how to deploy haskell-distributed in RDO? https://ask.openstack.org/en/question/94785/how-to-deploy-haskell-distributed-in-rdo/ Tags: rdo How to set quota for domain and have it shared with all the projects/tenants in domain https://ask.openstack.org/en/question/94105/how-to-set-quota-for-domain-and-have-it-shared-with-all-the-projectstenants-in-domain/ Tags: domainquotadriver rdo tripleO liberty undercloud install failing https://ask.openstack.org/en/question/94023/rdo-tripleo-liberty-undercloud-install-failing/ Tags: rdo, rdo-manager, liberty, undercloud, instack Add new compute node for TripleO deployment in virtual environment https://ask.openstack.org/en/question/93703/add-new-compute-node-for-tripleo-deployment-in-virtual-environment/ Tags: compute, tripleo, liberty, virtual, baremetal Unable to start Ceilometer services https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/ Tags: ceilometer, ceilometer-api Adding hard drive space to RDO installation https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ Tags: cinder, openstack, space, add AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ Tags: openstack, networking, aws ceilometer: I've installed openstack mitaka. 
but swift stops working when i configured the pipeline and ceilometer filter https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ Tags: ceilometer, openstack-swift, mitaka Fail on installing the controller on Cent OS 7 https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ Tags: installation, centos7, controller the error of service entity and API endpoints https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ Tags: service, entity, and, api, endpoints Running delorean fails: Git won't fetch sources https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ Tags: delorean, rdo Keystone authentication: Failed to contact the endpoint. https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ Tags: keystone, authenticate, endpoint, murano Liberty RDO: stack resource topology icons are pink https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ Tags: stack, resource, topology, dashboard Build of instance aborted: Block Device Mapping is Invalid. https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ Tags: cinder, lvm, centos7 No handlers could be found for logger "oslo_config.cfg" while syncing the glance database https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ Tags: liberty, glance, install-openstack how to use chef auto manage openstack in RDO? https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ Tags: chef, rdo Separate Cinder storage traffic from management https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ Tags: cinder, separate, nic, iscsi Openstack installation fails using packstack, failure is in installation of openstack-nova-compute. Error: Dependency Package[nova-compute] has failures https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ Tags: novacompute, rdo, packstack, dependency, failure CentOS OpenStack - compute node can't talk https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ Tags: rdo How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ? 
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ Tags: rdo, liberty, swift, ha VM and container can't download anything from internet https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ Tags: rdo, neutron, network, connectivity Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ Tags: keyboard, map, keymap, vncproxy, novnc OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Sahara SSHException: Error reading SSH protocol banner https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ Tags: sahara, icehouse, ssh, vanila Error Sahara create cluster: 'Error attach volume to instance https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, vanila, icehouse -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From bderzhavets at hotmail.com Mon Aug 8 16:06:00 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 8 Aug 2016 16:06:00 +0000 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com>, <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com>, Message-ID: Yes , undercloud got built without pre-install osc-lib, stacktrace was eliminated due to rebuilt gnocchiclient . I believe this one python-gnocchiclient-2.2.0-0.20160808141853.04c6ce5.el7.centos.src.rpm Tnanks. Boris. ________________________________ From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets Sent: Monday, August 8, 2016 10:52 AM To: Javier Pena; rdo-list Cc: alan pevec Subject: Re: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) ________________________________ From: rdo-list-bounces at redhat.com on behalf of Javier Pena Sent: Monday, August 8, 2016 10:28 AM To: rdo-list Cc: alan pevec Subject: Re: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) ----- Original Message ----- > On 08/08/2016 04:05 PM, Javier Pena wrote: > > > > > > ----- Original Message ----- > >> On 08/08/2016 03:44 PM, Alan Pevec wrote: > >>>> osc-lib is newton master only change, how did it end up in > >>> mitaka/liberty tripleo? > >>> > >>> ok, trace shows it is in gnocchiclient, is that build before > >>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e [https://avatars2.githubusercontent.com/u/145846?v=3&s=200] Set gnocchiclient mitaka to stable/2.2 ? redhat-openstack/rdoinfo at 101adb0 github.com This is so we dont break mitaka due to a recent change introduced in gnocchi client introducing osc_lib. see https://review.openstack.org/#/c/343877/ Change-Id: I35c82883ef637c92151621255a40c9213d... > >>> ? 
> >> > >> According to: > >> > >> https://trunk.rdoproject.org/centos7-mitaka/report.html > >> > >> it looks like the latest build of that package is still using the patch > >> that added osc-lib: > >> > >> [1] > >> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d > >> > >> So it seems to me we should make the same change for Liberty (I'll make > >> it), also why didn't the DLRN build trigger after a distgit change? > >> > > > > Yes, the same change should be done for Liberty. > > I created a review for it: http://review.rdoproject.org/r/1817 > > > About the second question, the last commit from the current source branch > > (stable/2.2) is older than the latest commit for the previous branch > > (master), so it is ignored. We can fix that by removing the commits from > > the DB and manually building the new gnocchiclient (will do now). > > Please submit that as well and do the same hack for liberty. > I've rebuilt gnocchiclient for liberty and mitaka. Does it mean that Mitaka HA instack-virt-setup on CentOS 7.2 VIRTHOST based on delorean repos :- http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo will not require any more osc-lib to run `openstack undercloud install` with no errors. If answer is no, how I am supposed to manage to avoid install osc-lib before running `openstack undercloud install` Python tripleoclient has keystoneauth 2.4 what causes trace in BZ https://bugzilla.redhat.com/show_bug.cgi?id=1364789 pre-installed osc-lib (on INSTACK VM ) updates it to 2.11 so undercloud gets deployed , but overcloud deployments get affected, which worked fine just several days ago. Please , see 3 use cases in BZ record. Thanks. Boris. Regards, Javier > Thank you for your help, > Attila > > > Regards, > > Javier > > > > > >> Attila > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
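To close the loop on the gnocchiclient rebuild that Boris confirms above, a quick illustrative check (not from the original mails) that an undercloud picked up the stable/2.2 rebuild rather than a master snapshot; the package name is the standard RDO one:

    # Which build is actually installed on the undercloud
    rpm -q python-gnocchiclient
    # Which builds the trunk repository currently offers
    yum --showduplicates list available python-gnocchiclient
    # On Mitaka/Liberty, osc-lib should no longer be pulled in at all
    rpm -qa | grep -i osc-lib

The rdoinfo change referenced earlier works by pinning the gnocchiclient source branch for the mitaka (and now liberty) tags to stable/2.2, so DLRN stops building that client from master for those releases.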
URL: From adarazs at redhat.com Mon Aug 8 16:58:05 2016 From: adarazs at redhat.com (Attila Darazs) Date: Mon, 8 Aug 2016 18:58:05 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> Message-ID: <80db296f-2175-8e9a-3e5d-a9c6efc4f722@redhat.com> Top posting to provide another clean status: *Master* still fails, and the reason is that our image building job uses the latest consistent repo, which is old: 2016-08-05 17:02[1] If I understand the DLRN build system correctly[2], then "python-django-horizon" package is preventing the current to become consistent. Here's the build log[3]. Can somebody help with this? This is the current blocker for master promotion. *Mitaka* promotion got past the point where it failed before. Probably this will do the trick for liberty too. (it's at the deploy overcloud step) Best regards, Attila [1] https://trunk.rdoproject.org/centos7-master/consistent/ [2] https://trunk.rdoproject.org/centos7-master/status_report.html [3] https://trunk.rdoproject.org/centos7-master/21/6e/216e67371764ca329a73d63c6d187283b01db067_7013ec64/rpmbuild.log On 08/08/2016 04:28 PM, Javier Pena wrote: > > > ----- Original Message ----- >> On 08/08/2016 04:05 PM, Javier Pena wrote: >>> >>> >>> ----- Original Message ----- >>>> On 08/08/2016 03:44 PM, Alan Pevec wrote: >>>>>> osc-lib is newton master only change, how did it end up in >>>>> mitaka/liberty tripleo? >>>>> >>>>> ok, trace shows it is in gnocchiclient, is that build before >>>>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e >>>>> ? >>>> >>>> According to: >>>> >>>> https://trunk.rdoproject.org/centos7-mitaka/report.html >>>> >>>> it looks like the latest build of that package is still using the patch >>>> that added osc-lib: >>>> >>>> [1] >>>> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d >>>> >>>> So it seems to me we should make the same change for Liberty (I'll make >>>> it), also why didn't the DLRN build trigger after a distgit change? >>>> >>> >>> Yes, the same change should be done for Liberty. >> >> I created a review for it: http://review.rdoproject.org/r/1817 >> >>> About the second question, the last commit from the current source branch >>> (stable/2.2) is older than the latest commit for the previous branch >>> (master), so it is ignored. We can fix that by removing the commits from >>> the DB and manually building the new gnocchiclient (will do now). >> >> Please submit that as well and do the same hack for liberty. >> > > I've rebuilt gnocchiclient for liberty and mitaka. 
> > Regards, > Javier > > >> Thank you for your help, >> Attila >> >>> Regards, >>> Javier >>> >>> >>>> Attila >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> From dms at redhat.com Mon Aug 8 17:05:35 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 8 Aug 2016 13:05:35 -0400 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <80db296f-2175-8e9a-3e5d-a9c6efc4f722@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> <80db296f-2175-8e9a-3e5d-a9c6efc4f722@redhat.com> Message-ID: Horizon has started failing to build late friday due to three new dependencies that have been introduced. Work is already in progress, see the other mailing-list thread [1]. [1]: https://www.redhat.com/archives/rdo-list/2016-August/msg00086.html David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Mon, Aug 8, 2016 at 12:58 PM, Attila Darazs wrote: > Top posting to provide another clean status: > > *Master* still fails, and the reason is that our image building job uses the > latest consistent repo, which is old: 2016-08-05 17:02[1] > > If I understand the DLRN build system correctly[2], then > "python-django-horizon" package is preventing the current to become > consistent. Here's the build log[3]. > > Can somebody help with this? This is the current blocker for master > promotion. > > *Mitaka* promotion got past the point where it failed before. Probably this > will do the trick for liberty too. (it's at the deploy overcloud step) > > Best regards, > Attila > > [1] https://trunk.rdoproject.org/centos7-master/consistent/ > [2] https://trunk.rdoproject.org/centos7-master/status_report.html > [3] > https://trunk.rdoproject.org/centos7-master/21/6e/216e67371764ca329a73d63c6d187283b01db067_7013ec64/rpmbuild.log > > > On 08/08/2016 04:28 PM, Javier Pena wrote: >> >> >> >> ----- Original Message ----- >>> >>> On 08/08/2016 04:05 PM, Javier Pena wrote: >>>> >>>> >>>> >>>> ----- Original Message ----- >>>>> >>>>> On 08/08/2016 03:44 PM, Alan Pevec wrote: >>>>>>> >>>>>>> osc-lib is newton master only change, how did it end up in >>>>>> >>>>>> mitaka/liberty tripleo? >>>>>> >>>>>> ok, trace shows it is in gnocchiclient, is that build before >>>>>> >>>>>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e >>>>>> ? >>>>> >>>>> >>>>> According to: >>>>> >>>>> https://trunk.rdoproject.org/centos7-mitaka/report.html >>>>> >>>>> it looks like the latest build of that package is still using the patch >>>>> that added osc-lib: >>>>> >>>>> [1] >>>>> >>>>> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d >>>>> >>>>> So it seems to me we should make the same change for Liberty (I'll make >>>>> it), also why didn't the DLRN build trigger after a distgit change? 
>>>>> >>>> >>>> Yes, the same change should be done for Liberty. >>> >>> >>> I created a review for it: http://review.rdoproject.org/r/1817 >>> >>>> About the second question, the last commit from the current source >>>> branch >>>> (stable/2.2) is older than the latest commit for the previous branch >>>> (master), so it is ignored. We can fix that by removing the commits from >>>> the DB and manually building the new gnocchiclient (will do now). >>> >>> >>> Please submit that as well and do the same hack for liberty. >>> >> >> I've rebuilt gnocchiclient for liberty and mitaka. >> >> Regards, >> Javier >> >> >>> Thank you for your help, >>> Attila >>> >>>> Regards, >>>> Javier >>>> >>>> >>>>> Attila >>>>> >>>>> _______________________________________________ >>>>> rdo-list mailing list >>>>> rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Mon Aug 8 18:30:35 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 8 Aug 2016 14:30:35 -0400 Subject: [rdo-list] RDO bloggers - week of August 8 Message-ID: Here's what RDO enthusiasts have been blogging about this week: Customizing a Tripleo Quickstart Deploy by Adam Young Tripleo Heat Templates allow the deployer to customize the controller deployment by setting values in the controllerExtraConfig section of the stack configuration. However, Quickstart already makes use of this in the file /tmp/deploy_env.yaml, so if you want to continue to customize, you need to work with this file. ? read more at http://tm3.org/88 fedora-review tool for reviewing RDO packages by Chandan Kumar This tool makes reviews of rpm packages for Fedora easier. It tries to automate most of the process. Through a bash API the checks can be extended in any programming language and for any programming language. ? read more at http://tm3.org/89 OpenStack operators, developers, users? It?s YOUR summit, vote! by David Simard Once again, the OpenStack Summit is nigh and this time it?ll be in Barcelona. The OpenStack Summit event is an opportunity for Operators, Developers and Users alike to gather, discuss and learn about OpenStack. What we know is that there?s going to be keynotes, design sessions for developers to hack on things and operator sessions for discussing and exchanging around the challenges of operating OpenStack. We also know there?s going to be a bunch of presentations on a wide range of topics from the OpenStack community. ? read more at http://tm3.org/8a TripleO Composable Services 101 by Steve Hardy Over the newton cycle, we've been working very hard on a major refactor of our heat templates and puppet manifiests, such that a much more granular and flexible "Composable Services" pattern is followed throughout our implementation. ? read more at http://tm3.org/8b TripleO deep dive session #5 (Undercloud - Under the hood) by Carlos Camacho This is the fifth video from a series of ?Deep Dive? sessions related to TripleO deployments. ? 
watch at http://tm3.org/8c -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From rbowen at redhat.com Mon Aug 8 18:49:08 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 8 Aug 2016 14:49:08 -0400 Subject: [rdo-list] OpenStack meetups this week Message-ID: <8f7f04b5-4b8c-cb7c-fcd4-8580cbb78392@redhat.com> It looks like a really slim week for OpenStack meetups! The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Wednesday August 10 in Charlotte, NC, US: First Charlotte Openstack meetup - http://www.meetup.com/Charlotte-OpenStack/events/232543687/ * Thursday August 11 in Shanghai, CN: OpenStack ????? - http://www.meetup.com/WWCode-Shanghai/events/232915360/ -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From amoralej at redhat.com Tue Aug 9 09:28:09 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 9 Aug 2016 11:28:09 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: <80db296f-2175-8e9a-3e5d-a9c6efc4f722@redhat.com> References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> <80db296f-2175-8e9a-3e5d-a9c6efc4f722@redhat.com> Message-ID: On Mon, Aug 8, 2016 at 6:58 PM, Attila Darazs wrote: > Top posting to provide another clean status: > > *Master* still fails, and the reason is that our image building job uses the > latest consistent repo, which is old: 2016-08-05 17:02[1] > > If I understand the DLRN build system correctly[2], then > "python-django-horizon" package is preventing the current to become > consistent. Here's the build log[3]. > > Can somebody help with this? This is the current blocker for master > promotion. > Dependencies are fixed and we are consistent since yesterday, but we still have some issues preventing the promotion of master. Some of them are reported in the etherpad, but some errors still needs to be investigated (I'm currently working in issue 31 affecting packstack scenario001 and 003, but last run puppet scenario002 and both tripleo jobs failed also. > *Mitaka* promotion got past the point where it failed before. Probably this > will do the trick for liberty too. (it's at the deploy overcloud step) > > Best regards, > Attila > > [1] https://trunk.rdoproject.org/centos7-master/consistent/ > [2] https://trunk.rdoproject.org/centos7-master/status_report.html > [3] > https://trunk.rdoproject.org/centos7-master/21/6e/216e67371764ca329a73d63c6d187283b01db067_7013ec64/rpmbuild.log > > > On 08/08/2016 04:28 PM, Javier Pena wrote: >> >> >> >> ----- Original Message ----- >>> >>> On 08/08/2016 04:05 PM, Javier Pena wrote: >>>> >>>> >>>> >>>> ----- Original Message ----- >>>>> >>>>> On 08/08/2016 03:44 PM, Alan Pevec wrote: >>>>>>> >>>>>>> osc-lib is newton master only change, how did it end up in >>>>>> >>>>>> mitaka/liberty tripleo? 
>>>>>> >>>>>> ok, trace shows it is in gnocchiclient, is that build before >>>>>> >>>>>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e >>>>>> ? >>>>> >>>>> >>>>> According to: >>>>> >>>>> https://trunk.rdoproject.org/centos7-mitaka/report.html >>>>> >>>>> it looks like the latest build of that package is still using the patch >>>>> that added osc-lib: >>>>> >>>>> [1] >>>>> >>>>> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d >>>>> >>>>> So it seems to me we should make the same change for Liberty (I'll make >>>>> it), also why didn't the DLRN build trigger after a distgit change? >>>>> >>>> >>>> Yes, the same change should be done for Liberty. >>> >>> >>> I created a review for it: http://review.rdoproject.org/r/1817 >>> >>>> About the second question, the last commit from the current source >>>> branch >>>> (stable/2.2) is older than the latest commit for the previous branch >>>> (master), so it is ignored. We can fix that by removing the commits from >>>> the DB and manually building the new gnocchiclient (will do now). >>> >>> >>> Please submit that as well and do the same hack for liberty. >>> >> >> I've rebuilt gnocchiclient for liberty and mitaka. >> >> Regards, >> Javier >> >> >>> Thank you for your help, >>> Attila >>> >>>> Regards, >>>> Javier >>>> >>>> >>>>> Attila >>>>> >>>>> _______________________________________________ >>>>> rdo-list mailing list >>>>> rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From nirl at asocsnetworks.com Tue Aug 9 11:50:46 2016 From: nirl at asocsnetworks.com (Nir Levy) Date: Tue, 9 Aug 2016 11:50:46 +0000 Subject: [rdo-list] Fedora: running RDO (openstack-mitaka) Message-ID: Main goal: installing any RDO over Fedora 24, starting with all in one. according to: https://www.rdoproject.org/install/quickstart/ and https://www.rdoproject.org/documentation/packstack-all-in-one-diy-configuration/ sudo yum install https://www.rdoproject.org/repos/rdo-release.rpm sudo yum update -y sudo yum install -y openstack-packstack packstack --allinone --gen-answer_file=answer.txt and afterwords. setting my interfaces: CONFIG_NOVA_NETWORK_PUBIF --novanetwork-pubif CONFIG_NOVA_COMPUTE_PRIVIF --novacompute-privif CONFIG_NOVA_NETWORK_PRIVIF --novanetwork-privif setting additional settings: CONFIG_PROVISION_DEMO --provision-demo n (y for allinone) CONFIG_SWIFT_INSTALL --os-swift-install (y for allinone) n Set to y if you would like PackStack to install Object Storage. CONFIG_NAGIOS_INSTALL --nagios-install n (y for allinone) Set to y if you would like to install Nagios. Nagios provides additional tools for monitoring the OpenStack environment. CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE --provision-all-in-one-ovs-bridge n (y for allinone) packstack --answer-file=answer.txt 1) first issue I've encountered is : Error: Parameter mode failed on File[rabbitmq.config]: The file mode specification must be a string, not 'Fixnum' at ... 
occurs, I have to verify: /usr/lib/python2.7/site-packages/packstack/puppet/templates/amqp.pp /usr/share/openstack-puppet/modules/module-collectd/manifests/plugin/amqp.pp and modify the following: https://review.openstack.org/349908 2) second issue I've encountered is : https://bugs.launchpad.net/packstack/+bug/1597951 after modifying SELinux /usr/sbin/getenforce Enforcing setenforce permissive /usr/sbin/getenforce Permissive seems to resolve it: 3) third issue is the current issue. 192.168.13.85_amqp.pp: [ DONE ] <-> previously failed. 192.168.13.85_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] 192.168.13.85_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.13.85_mariadb.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-galera-server' returned 1: Yum command has been deprecated, redirecting to '/usr/bin/dnf -d 0 -e 0 -y install mariadb-galera-server'. You will find full trace in log /var/tmp/packstack/20160802-193240-sGLWV3/manifests/192.168.13.85_mariadb.pp.log Please check log file /var/tmp/packstack/20160802-193240-sGLWV3/openstack-setup.log for more information Nir Levy SW Engineer Web: www.asocstech.com | [cid:image001.jpg at 01D1B599.5A2C9530] Nir Levy SW Engineer Web: www.asocstech.com | [cid:image001.jpg at 01D1B599.5A2C9530] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2704 bytes Desc: image001.jpg URL: From bnemec at redhat.com Tue Aug 9 15:41:32 2016 From: bnemec at redhat.com (Ben Nemec) Date: Tue, 9 Aug 2016 10:41:32 -0500 Subject: [rdo-list] TripleO Network Template Generator Message-ID: I sent this to openstack-dev too, but I think we have a number of users on rdo-list that might benefit from it too. Here's what I sent earlier: This is something that has existed for a while, but I had been hesitant to evangelize it until it was a little more proven. At this point I've used it to generate templates for a number of different environments, and it has worked well. I decided it was time to record another demo and throw it out there for the broader community to look at. See details on my blog: http://blog.nemebean.com/content/tripleo-network-isolation-template-generator Most of what you need to know is either there or in the video itself. Let me know what you think. Thanks. -Ben From whayutin at redhat.com Tue Aug 9 21:38:02 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 9 Aug 2016 17:38:02 -0400 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> <80db296f-2175-8e9a-3e5d-a9c6efc4f722@redhat.com> Message-ID: Is there a bug open on #30 in the etherpad [1]? 30. 
TBD the new minimal_pacemaker job fails on the promotion master pipeline [apevec] [1] https://etherpad.openstack.org/p/delorean_master_current_issues On Tue, Aug 9, 2016 at 5:28 AM, Alfredo Moralejo Alonso wrote: > On Mon, Aug 8, 2016 at 6:58 PM, Attila Darazs wrote: > > Top posting to provide another clean status: > > > > *Master* still fails, and the reason is that our image building job uses > the > > latest consistent repo, which is old: 2016-08-05 17:02[1] > > > > If I understand the DLRN build system correctly[2], then > > "python-django-horizon" package is preventing the current to become > > consistent. Here's the build log[3]. > > > > Can somebody help with this? This is the current blocker for master > > promotion. > > > > Dependencies are fixed and we are consistent since yesterday, but we > still have some issues preventing the promotion of master. Some of > them are reported in the etherpad, but some errors still needs to be > investigated (I'm currently working in issue 31 affecting packstack > scenario001 and 003, but last run puppet scenario002 and both tripleo > jobs failed also. > > > *Mitaka* promotion got past the point where it failed before. Probably > this > > will do the trick for liberty too. (it's at the deploy overcloud step) > > > > Best regards, > > Attila > > > > [1] https://trunk.rdoproject.org/centos7-master/consistent/ > > [2] https://trunk.rdoproject.org/centos7-master/status_report.html > > [3] > > https://trunk.rdoproject.org/centos7-master/21/6e/ > 216e67371764ca329a73d63c6d187283b01db067_7013ec64/rpmbuild.log > > > > > > On 08/08/2016 04:28 PM, Javier Pena wrote: > >> > >> > >> > >> ----- Original Message ----- > >>> > >>> On 08/08/2016 04:05 PM, Javier Pena wrote: > >>>> > >>>> > >>>> > >>>> ----- Original Message ----- > >>>>> > >>>>> On 08/08/2016 03:44 PM, Alan Pevec wrote: > >>>>>>> > >>>>>>> osc-lib is newton master only change, how did it end up in > >>>>>> > >>>>>> mitaka/liberty tripleo? > >>>>>> > >>>>>> ok, trace shows it is in gnocchiclient, is that build before > >>>>>> > >>>>>> https://github.com/redhat-openstack/rdoinfo/commit/ > 101adb09757a852560816a79d1e2c58d2b0cf87e > >>>>>> ? > >>>>> > >>>>> > >>>>> According to: > >>>>> > >>>>> https://trunk.rdoproject.org/centos7-mitaka/report.html > >>>>> > >>>>> it looks like the latest build of that package is still using the > patch > >>>>> that added osc-lib: > >>>>> > >>>>> [1] > >>>>> > >>>>> http://git.openstack.org/cgit/openstack/python- > gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d > >>>>> > >>>>> So it seems to me we should make the same change for Liberty (I'll > make > >>>>> it), also why didn't the DLRN build trigger after a distgit change? > >>>>> > >>>> > >>>> Yes, the same change should be done for Liberty. > >>> > >>> > >>> I created a review for it: http://review.rdoproject.org/r/1817 > >>> > >>>> About the second question, the last commit from the current source > >>>> branch > >>>> (stable/2.2) is older than the latest commit for the previous branch > >>>> (master), so it is ignored. We can fix that by removing the commits > from > >>>> the DB and manually building the new gnocchiclient (will do now). > >>> > >>> > >>> Please submit that as well and do the same hack for liberty. > >>> > >> > >> I've rebuilt gnocchiclient for liberty and mitaka. 
> >> > >> Regards, > >> Javier > >> > >> > >>> Thank you for your help, > >>> Attila > >>> > >>>> Regards, > >>>> Javier > >>>> > >>>> > >>>>> Attila > >>>>> > >>>>> _______________________________________________ > >>>>> rdo-list mailing list > >>>>> rdo-list at redhat.com > >>>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>>> > >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>>>> > >>> > >>> _______________________________________________ > >>> rdo-list mailing list > >>> rdo-list at redhat.com > >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Wed Aug 10 09:48:48 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 10 Aug 2016 11:48:48 +0200 Subject: [rdo-list] New 3rd party testing jobs & the state of the RDO promotion jobs (all blocked) In-Reply-To: References: <42872c0a-004a-2c81-ce71-840214d972dd@redhat.com> <3ee0114b-ea70-39ca-ea03-87ff8e3cbb8a@redhat.com> <410228087.439598.1470665146298.JavaMail.zimbra@redhat.com> <69fbbb91-2ce3-cfb1-752e-a90fb536d7f9@redhat.com> <1252864923.447136.1470666482799.JavaMail.zimbra@redhat.com> <80db296f-2175-8e9a-3e5d-a9c6efc4f722@redhat.com> Message-ID: On Tue, Aug 9, 2016 at 11:38 PM, Wesley Hayutin wrote: > Is there a bug open on #30 in the etherpad [1]? > 30. TBD the new minimal_pacemaker job fails on the promotion master > pipeline [apevec] > > [1] https://etherpad.openstack.org/p/delorean_master_current_issues > > Note that cinder bug https://bugs.launchpad.net/cinder/+bug/1610073 which was impacting packstack jobs (see issue 31), is affecting tripleo jobs also (i added some coments and logs in issue 34 ). Cinder team is working in fix https://review.openstack.org/#/c/353120/ for this bug. > > On Tue, Aug 9, 2016 at 5:28 AM, Alfredo Moralejo Alonso > wrote: >> >> On Mon, Aug 8, 2016 at 6:58 PM, Attila Darazs wrote: >> > Top posting to provide another clean status: >> > >> > *Master* still fails, and the reason is that our image building job uses >> > the >> > latest consistent repo, which is old: 2016-08-05 17:02[1] >> > >> > If I understand the DLRN build system correctly[2], then >> > "python-django-horizon" package is preventing the current to become >> > consistent. Here's the build log[3]. >> > >> > Can somebody help with this? This is the current blocker for master >> > promotion. >> > >> >> Dependencies are fixed and we are consistent since yesterday, but we >> still have some issues preventing the promotion of master. Some of >> them are reported in the etherpad, but some errors still needs to be >> investigated (I'm currently working in issue 31 affecting packstack >> scenario001 and 003, but last run puppet scenario002 and both tripleo >> jobs failed also. >> >> > *Mitaka* promotion got past the point where it failed before. Probably >> > this >> > will do the trick for liberty too. 
(it's at the deploy overcloud step) >> > >> > Best regards, >> > Attila >> > >> > [1] https://trunk.rdoproject.org/centos7-master/consistent/ >> > [2] https://trunk.rdoproject.org/centos7-master/status_report.html >> > [3] >> > >> > https://trunk.rdoproject.org/centos7-master/21/6e/216e67371764ca329a73d63c6d187283b01db067_7013ec64/rpmbuild.log >> > >> > >> > On 08/08/2016 04:28 PM, Javier Pena wrote: >> >> >> >> >> >> >> >> ----- Original Message ----- >> >>> >> >>> On 08/08/2016 04:05 PM, Javier Pena wrote: >> >>>> >> >>>> >> >>>> >> >>>> ----- Original Message ----- >> >>>>> >> >>>>> On 08/08/2016 03:44 PM, Alan Pevec wrote: >> >>>>>>> >> >>>>>>> osc-lib is newton master only change, how did it end up in >> >>>>>> >> >>>>>> mitaka/liberty tripleo? >> >>>>>> >> >>>>>> ok, trace shows it is in gnocchiclient, is that build before >> >>>>>> >> >>>>>> >> >>>>>> https://github.com/redhat-openstack/rdoinfo/commit/101adb09757a852560816a79d1e2c58d2b0cf87e >> >>>>>> ? >> >>>>> >> >>>>> >> >>>>> According to: >> >>>>> >> >>>>> https://trunk.rdoproject.org/centos7-mitaka/report.html >> >>>>> >> >>>>> it looks like the latest build of that package is still using the >> >>>>> patch >> >>>>> that added osc-lib: >> >>>>> >> >>>>> [1] >> >>>>> >> >>>>> >> >>>>> http://git.openstack.org/cgit/openstack/python-gnocchiclient/commit/?id=dcb53e7040655d48fa96beaf39dd6be22c4f684d >> >>>>> >> >>>>> So it seems to me we should make the same change for Liberty (I'll >> >>>>> make >> >>>>> it), also why didn't the DLRN build trigger after a distgit change? >> >>>>> >> >>>> >> >>>> Yes, the same change should be done for Liberty. >> >>> >> >>> >> >>> I created a review for it: http://review.rdoproject.org/r/1817 >> >>> >> >>>> About the second question, the last commit from the current source >> >>>> branch >> >>>> (stable/2.2) is older than the latest commit for the previous branch >> >>>> (master), so it is ignored. We can fix that by removing the commits >> >>>> from >> >>>> the DB and manually building the new gnocchiclient (will do now). >> >>> >> >>> >> >>> Please submit that as well and do the same hack for liberty. >> >>> >> >> >> >> I've rebuilt gnocchiclient for liberty and mitaka. 
>> >> >> >> Regards, >> >> Javier >> >> >> >> >> >>> Thank you for your help, >> >>> Attila >> >>> >> >>>> Regards, >> >>>> Javier >> >>>> >> >>>> >> >>>>> Attila >> >>>>> >> >>>>> _______________________________________________ >> >>>>> rdo-list mailing list >> >>>>> rdo-list at redhat.com >> >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >> >>>>> >> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >>>>> >> >>> >> >>> _______________________________________________ >> >>> rdo-list mailing list >> >>> rdo-list at redhat.com >> >>> https://www.redhat.com/mailman/listinfo/rdo-list >> >>> >> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >>> >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > From mengxiandong at gmail.com Wed Aug 10 10:18:36 2016 From: mengxiandong at gmail.com (Xiandong Meng) Date: Wed, 10 Aug 2016 22:18:36 +1200 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch Message-ID: We had discussed it a bit in previous RDO meeting on irc. I want to write a separate mail for more broad and in-depth discussion here. For a master release (for now it is Newton), i noticed the promotion pipes fall into three different categories: - Triple-O based CI test - Packstack based test - OpenStack-Puppet based test So what is the base minimal CI requirements to start with for AltArch support? Since many of the CI test should work even with VMs instead of physical nodes, can we start with 1-2 physical servers? Regards, Alex Meng mengxiandong at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Wed Aug 10 14:24:24 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 10 Aug 2016 10:24:24 -0400 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: References: Message-ID: Hi Alex, I don't know the specifics of resource usage for alternative architectures but I can tell about x86_64. Packstack and Puppet-OpenStack jobs are designed to run within 8GB of RAM - either on a single virtual machine or on a single bare metal server. I would say 4 cores is the minimum (or otherwise job length is severely affected), 8 is best. Disk space is not generally a concern, easily fitting within 50GB of space. I don't have the numbers for TripleO so I'll let someone else chime in on that. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng wrote: > We had discussed it a bit in previous RDO meeting on irc. I want to write a > separate mail > for more broad and in-depth discussion here. > > > For a master release (for now it is Newton), i noticed the promotion pipes > fall into three different categories: > - Triple-O based CI test > - Packstack based test > - OpenStack-Puppet based test > > So what is the base minimal CI requirements to start with for AltArch > support? Since many of the CI test should work even with VMs instead of > physical nodes, can we start > with 1-2 physical servers? 
> > Regards, > > Alex Meng > mengxiandong at gmail.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Wed Aug 10 15:56:45 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 10 Aug 2016 17:56:45 +0200 Subject: [rdo-list] [minute] RDO meeting (2016-08-10) minutes Message-ID: ============================== #rdo: RDO meeting - 2016-08-10 ============================== Meeting started by number80 at 15:02:05 UTC. The full logs are available at https://meetbot.fedoraproject.org/rdo/2016-08-10/rdo_meeting_-_2016-08-10.2016-08-10-15.02.log.html . Meeting summary --------------- * roll call (number80, 15:02:34) * Add vanilla tempest package (number80, 15:05:09) * LINK: https://github.com/openstack/puppet-tempest/blob/master/manifests/params.pp#L15 (chandankumar, 15:19:57) * Volunteers wanted to talk about what you're doing in Newton, for RDO podcast (number80, 15:27:43) * rbowen is looking for volunteers for RDO Newton release podcasts (number80, 15:28:16) * Cleanup old commits from former centos7-master in DLRN instance (number80, 15:28:58) * ACTION: jpena to purge old commits from centos-master (jpena, 15:33:27) * chair for next week meeting (number80, 15:34:11) * jpena to chair next week meeting (number80, 15:34:36) * open floor (number80, 15:34:48) Meeting ended at 15:42:23 UTC. Action Items ------------ * jpena to purge old commits from centos-master Action Items, by person ----------------------- * jpena * jpena to purge old commits from centos-master * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * number80 (76) * amoralej (24) * EmilienM (20) * chandankumar (16) * jpena (15) * tosky (8) * rbowen (8) * zodbot (7) * hrybacki (2) * weshay (2) * jruzicka (2) * coolsvap (1) * flepied (1) * jschlueter (1) * rdogerrit (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From whayutin at redhat.com Wed Aug 10 17:19:01 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 10 Aug 2016 13:19:01 -0400 Subject: [rdo-list] tripleo-quickstart: new features, and upcoming changes Message-ID: Greetings, It's been a little while since the RDO CI team has updated the community with new features coming to tripleo-quickstart and CI. Since I have your attention I'll walk through some upcoming changes that you need to be aware of. - We've determined that the role and jinja2 template [1] we use to configure the overcloud is too large and not very composable. The current template contains registration, introspection, flavor creation, and network-isolation setup. - This will be split into three distinct roles in the future [2] - ansible-role-tripleo-overcloud-prep-images - ansible-role-tripleo-overcloud-prep-flavors - ansible-role-tripleo-overcloud-prep-network - Additionally we will be creating a role responsible for copying custom heat templates, nic-conifgs, and other config to the undercloud to prep for a deployment [2] - The tripleo-quickstart core's have decided to remove the roles responsible for configuring and deploying the overcloud from tripleo-quickstart and make them 3rd party roles (oooq-extras) Now for the new features: - Paul Belanger has added tripleo-quickstart doc to the openstack doc server. 
- http://docs.openstack.org/developer/tripleo-quickstart/ - Paul Belanger has added a lint job for tripleo-quickstart in the openstack ci system - e.g. http://logs.openstack.org/90/352990/1/check/gate-tripleo-quickstart-linters-ubuntu-xenial/ - Harry Rybacki is about to land a patch to automatically generate TripleO documentation via CI execution [3]. This is a great feature to keep TripleO docs up to date, documenting step by step what CI is doing, and great for reproducing bugs!! - Attila Darazs has added 3rd party OpenStack CI. You can now run rdo ci against your openstack reviews. - To test w/ the latest delorean builds use keyword "rdo-ci-testing" - To test w/ the latested promoted RDO quickstart image use keyword "rdo-ci-check" Upcoming features under development: - Mathieu Bultel has a working proof of concept that takes an already deployed TripleO environment (virt only), shuts it down, saves it to a file server and then restores the environment on a new clean libvirt host. This has helped Mathieu iterate on upgrade testing more efficiently. This work has practical implications for sales and other areas. There is some additional cool stuff on the way, but it's a little too early to advertise it. Thanks to Paul, Harry, Attila and Mathieu and the rest of the RDO community! [1] https://github.com/openstack/tripleo-quickstart/blob/master/roles/tripleo/undercloud/templates/undercloud-install-post.sh.j2 [2] https://trello.com/c/yMoI2i1d [3] https://ci.centos.org/artifacts/rdo/jenkins-poc-hrybacki-tripleo-quickstart-roles-gate-mitaka-doc-generator-32/docs/build/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dciabrin at redhat.com Wed Aug 10 17:21:46 2016 From: dciabrin at redhat.com (Damien Ciabrini) Date: Wed, 10 Aug 2016 13:21:46 -0400 (EDT) Subject: [rdo-list] Tracking package updates between RDO and RHOS In-Reply-To: <1022225562.17515879.1470849560151.JavaMail.zimbra@redhat.com> Message-ID: <284386508.17518154.1470849706659.JavaMail.zimbra@redhat.com> Hi folks, I'm trying to get a full picture of how shared services like rabbitmq or galera are packaged in RDO and how/when they are updated. >From what I understand - please correct me if I'm wrong - this is how it works: * such packages are usually tagged with cloud7-openstack-common-release [1] * they are generally built with dist git from Fedora or EPEL, unless specific override is needed [2] * the version of those packages is the same on RDO liberty and RDO mitaka. * they are manually updated to keep close to what is in latest RHOS. Now assuming the above is correct, I wonder how to best deal with package updates so that RDO community and RHOS maintainers can coordinate efficiently? Specifically, I can think of a few update scenarios: 1. minor package updates (e.g. new minor upstream release, CVE,...) How to best notify RDO in case RHOS perform such updates? Push new specfile review in RDO gerrit? 2. major package upgrade (e.g. mariadb 5.5 -> 10.1) I suppose major version upgrade should only take place on new RDO version because it makes it simpler to test. But how should we isolate major version upgrade to impact several RDO release? * Should we always move package from common-release to {mitaka|newton|...}-release tag when we plan to upgrade it to a new major version, like it was done for mariadb? * Should we discuss those details via a specfile review in RDO gerrit? 3. 
package update in RDO needs to reach RHOS I expect this to be very rare, but in case RDO needs to upgrade a package version for packaging reasons (e.g. mariadb), how to best notify RHOS maintainer to plan upgrade scenarios or other tests ahead of time? Thanks in advance for your feedback :) [1] http://cbs.centos.org/koji/packages?tagID=63 [2] https://github.com/rdo-common/ -- Damien From mengxiandong at gmail.com Thu Aug 11 01:45:07 2016 From: mengxiandong at gmail.com (Xiandong Meng) Date: Thu, 11 Aug 2016 13:45:07 +1200 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: References: Message-ID: David, thank you for your response. For Packstack and Puppet-OpenStack jobs, I noticed that usually each job takes no more than 45 minutes. How many jobs may run in parallel and how often they are triggered? Regards, Alex Meng mengxiandong at gmail.com On Thu, Aug 11, 2016 at 2:24 AM, David Moreau Simard wrote: > Hi Alex, > > I don't know the specifics of resource usage for alternative > architectures but I can tell about x86_64. > > Packstack and Puppet-OpenStack jobs are designed to run within 8GB of > RAM - either on a single virtual machine or on a single bare metal > server. > I would say 4 cores is the minimum (or otherwise job length is > severely affected), 8 is best. > Disk space is not generally a concern, easily fitting within 50GB of space. > > I don't have the numbers for TripleO so I'll let someone else chime in on > that. > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng > wrote: > > We had discussed it a bit in previous RDO meeting on irc. I want to > write a > > separate mail > > for more broad and in-depth discussion here. > > > > > > For a master release (for now it is Newton), i noticed the promotion > pipes > > fall into three different categories: > > - Triple-O based CI test > > - Packstack based test > > - OpenStack-Puppet based test > > > > So what is the base minimal CI requirements to start with for AltArch > > support? Since many of the CI test should work even with VMs instead of > > physical nodes, can we start > > with 1-2 physical servers? > > > > Regards, > > > > Alex Meng > > mengxiandong at gmail.com > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Thu Aug 11 02:50:21 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 10 Aug 2016 22:50:21 -0400 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: References: Message-ID: Alex, Can you expand on what you mean by that ? The concurrency (or lack thereof) of the jobs are more about the design of the job itself -- or the environment it is run from as well as the environment it is run on. The jobs part of the promotion pipeline [1] run a couple times per day. [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/ David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 10, 2016 at 9:45 PM, Xiandong Meng wrote: > David, thank you for your response. > > For Packstack and Puppet-OpenStack jobs, I noticed that usually each job > takes no more than 45 minutes. 
How many jobs may run in parallel and how > often they are triggered? > > Regards, > > Alex Meng > mengxiandong at gmail.com > > On Thu, Aug 11, 2016 at 2:24 AM, David Moreau Simard wrote: >> >> Hi Alex, >> >> I don't know the specifics of resource usage for alternative >> architectures but I can tell about x86_64. >> >> Packstack and Puppet-OpenStack jobs are designed to run within 8GB of >> RAM - either on a single virtual machine or on a single bare metal >> server. >> I would say 4 cores is the minimum (or otherwise job length is >> severely affected), 8 is best. >> Disk space is not generally a concern, easily fitting within 50GB of >> space. >> >> I don't have the numbers for TripleO so I'll let someone else chime in on >> that. >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng >> wrote: >> > We had discussed it a bit in previous RDO meeting on irc. I want to >> > write a >> > separate mail >> > for more broad and in-depth discussion here. >> > >> > >> > For a master release (for now it is Newton), i noticed the promotion >> > pipes >> > fall into three different categories: >> > - Triple-O based CI test >> > - Packstack based test >> > - OpenStack-Puppet based test >> > >> > So what is the base minimal CI requirements to start with for AltArch >> > support? Since many of the CI test should work even with VMs instead of >> > physical nodes, can we start >> > with 1-2 physical servers? >> > >> > Regards, >> > >> > Alex Meng >> > mengxiandong at gmail.com >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From mengxiandong at gmail.com Thu Aug 11 03:52:04 2016 From: mengxiandong at gmail.com (Xiandong Meng) Date: Thu, 11 Aug 2016 15:52:04 +1200 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: References: Message-ID: OK, i mean on the same time, how many concurrent CI jobs may be running? For example, at peak load, will weirdo-master-promote-packstack-scenario001 , weirdo-master-promote-packstack-scenario002 , weirdo-master-promote-packstack-scenario003 run at the same time? Or they will run in sequence by design? Regards, Alex Meng mengxiandong at gmail.com On Thu, Aug 11, 2016 at 2:50 PM, David Moreau Simard wrote: > Alex, > > Can you expand on what you mean by that ? > The concurrency (or lack thereof) of the jobs are more about the > design of the job itself -- or the environment it is run from as well > as the environment it is run on. > > The jobs part of the promotion pipeline [1] run a couple times per day. > > [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/ > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Aug 10, 2016 at 9:45 PM, Xiandong Meng > wrote: > > David, thank you for your response. > > > > For Packstack and Puppet-OpenStack jobs, I noticed that usually each job > > takes no more than 45 minutes. How many jobs may run in parallel and how > > often they are triggered? > > > > Regards, > > > > Alex Meng > > mengxiandong at gmail.com > > > > On Thu, Aug 11, 2016 at 2:24 AM, David Moreau Simard > wrote: > >> > >> Hi Alex, > >> > >> I don't know the specifics of resource usage for alternative > >> architectures but I can tell about x86_64. 
> >> > >> Packstack and Puppet-OpenStack jobs are designed to run within 8GB of > >> RAM - either on a single virtual machine or on a single bare metal > >> server. > >> I would say 4 cores is the minimum (or otherwise job length is > >> severely affected), 8 is best. > >> Disk space is not generally a concern, easily fitting within 50GB of > >> space. > >> > >> I don't have the numbers for TripleO so I'll let someone else chime in > on > >> that. > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> > >> On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng > >> wrote: > >> > We had discussed it a bit in previous RDO meeting on irc. I want to > >> > write a > >> > separate mail > >> > for more broad and in-depth discussion here. > >> > > >> > > >> > For a master release (for now it is Newton), i noticed the promotion > >> > pipes > >> > fall into three different categories: > >> > - Triple-O based CI test > >> > - Packstack based test > >> > - OpenStack-Puppet based test > >> > > >> > So what is the base minimal CI requirements to start with for AltArch > >> > support? Since many of the CI test should work even with VMs instead > of > >> > physical nodes, can we start > >> > with 1-2 physical servers? > >> > > >> > Regards, > >> > > >> > Alex Meng > >> > mengxiandong at gmail.com > >> > > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Thu Aug 11 04:11:18 2016 From: dms at redhat.com (David Moreau Simard) Date: Thu, 11 Aug 2016 00:11:18 -0400 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: References: Message-ID: Right now the jobs are sequenced as outlined in the pipeline [1]. First, the job "rdo-promote-get-hash-master" will run. If it is successful, it triggers the next "stage" of the pipeline. Then, the job "tripleo-quickstart-promote-master-delorean-build-images" will run. If it is successfull, it triggers the next "stage" of the pipeline. After that you have 8 jobs that, if capacity allows (usually the case), will all run simultaneously. Otherwise, builds can be queued until a slave can process them. ... and so on. We can't realistically run all these jobs in a sequence. Considering they complete in 45 minutes (or more), we'd be looking at a pipeline of over 6 hours of builds. [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/ David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 10, 2016 at 11:52 PM, Xiandong Meng wrote: > OK, i mean on the same time, how many concurrent CI jobs may be running? For > example, at peak load, will weirdo-master-promote-packstack-scenario001 , > weirdo-master-promote-packstack-scenario002 , > weirdo-master-promote-packstack-scenario003 run at the same time? Or they > will run in sequence by design? > > > Regards, > > Alex Meng > mengxiandong at gmail.com > > On Thu, Aug 11, 2016 at 2:50 PM, David Moreau Simard wrote: >> >> Alex, >> >> Can you expand on what you mean by that ? >> The concurrency (or lack thereof) of the jobs are more about the >> design of the job itself -- or the environment it is run from as well >> as the environment it is run on. 
>> >> The jobs part of the promotion pipeline [1] run a couple times per day. >> >> [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/ >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Wed, Aug 10, 2016 at 9:45 PM, Xiandong Meng >> wrote: >> > David, thank you for your response. >> > >> > For Packstack and Puppet-OpenStack jobs, I noticed that usually each job >> > takes no more than 45 minutes. How many jobs may run in parallel and how >> > often they are triggered? >> > >> > Regards, >> > >> > Alex Meng >> > mengxiandong at gmail.com >> > >> > On Thu, Aug 11, 2016 at 2:24 AM, David Moreau Simard >> > wrote: >> >> >> >> Hi Alex, >> >> >> >> I don't know the specifics of resource usage for alternative >> >> architectures but I can tell about x86_64. >> >> >> >> Packstack and Puppet-OpenStack jobs are designed to run within 8GB of >> >> RAM - either on a single virtual machine or on a single bare metal >> >> server. >> >> I would say 4 cores is the minimum (or otherwise job length is >> >> severely affected), 8 is best. >> >> Disk space is not generally a concern, easily fitting within 50GB of >> >> space. >> >> >> >> I don't have the numbers for TripleO so I'll let someone else chime in >> >> on >> >> that. >> >> >> >> David Moreau Simard >> >> Senior Software Engineer | Openstack RDO >> >> >> >> dmsimard = [irc, github, twitter] >> >> >> >> >> >> On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng >> >> wrote: >> >> > We had discussed it a bit in previous RDO meeting on irc. I want to >> >> > write a >> >> > separate mail >> >> > for more broad and in-depth discussion here. >> >> > >> >> > >> >> > For a master release (for now it is Newton), i noticed the promotion >> >> > pipes >> >> > fall into three different categories: >> >> > - Triple-O based CI test >> >> > - Packstack based test >> >> > - OpenStack-Puppet based test >> >> > >> >> > So what is the base minimal CI requirements to start with for AltArch >> >> > support? Since many of the CI test should work even with VMs instead >> >> > of >> >> > physical nodes, can we start >> >> > with 1-2 physical servers? >> >> > >> >> > Regards, >> >> > >> >> > Alex Meng >> >> > mengxiandong at gmail.com >> >> > >> >> > _______________________________________________ >> >> > rdo-list mailing list >> >> > rdo-list at redhat.com >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> > > > From mengxiandong at gmail.com Thu Aug 11 08:05:41 2016 From: mengxiandong at gmail.com (Xiandong Meng) Date: Thu, 11 Aug 2016 20:05:41 +1200 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: References: Message-ID: OK, so the bottleneck is in "INSTALL / TEST (IMPORT IMAGES)" stage with 2 Triple-O scenario, 3 Packstack and 3 OpenStack-Puppet test jobs. So we may need up to (4 core, 8 GB memory)*3 to cover PackStack and OpenStack-Puppet path as each Triple-O scenario will need about 90 minutes. (So the shortest time is limited to 90 minutes assuming we maximize the parallelism level. ) Now i need some input about the resource requirements for Triple-O test scenarios. Regards, Alex Meng mengxiandong at gmail.com On Thu, Aug 11, 2016 at 4:11 PM, David Moreau Simard wrote: > Right now the jobs are sequenced as outlined in the pipeline [1]. > > First, the job "rdo-promote-get-hash-master" will run. If it is > successful, it triggers the next "stage" of the pipeline. 
> Then, the job "tripleo-quickstart-promote-master-delorean-build-images" > will run. If it is successfull, it triggers the next "stage" of the > pipeline. > After that you have 8 jobs that, if capacity allows (usually the > case), will all run simultaneously. Otherwise, builds can be queued > until a slave can process them. > ... and so on. > > We can't realistically run all these jobs in a sequence. > Considering they complete in 45 minutes (or more), we'd be looking at > a pipeline of over 6 hours of builds. > > [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/ > job/rdo-delorean-promote-master/ > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Aug 10, 2016 at 11:52 PM, Xiandong Meng > wrote: > > OK, i mean on the same time, how many concurrent CI jobs may be running? > For > > example, at peak load, will weirdo-master-promote-packstack-scenario001 > , > > weirdo-master-promote-packstack-scenario002 , > > weirdo-master-promote-packstack-scenario003 run at the same time? Or > they > > will run in sequence by design? > > > > > > Regards, > > > > Alex Meng > > mengxiandong at gmail.com > > > > On Thu, Aug 11, 2016 at 2:50 PM, David Moreau Simard > wrote: > >> > >> Alex, > >> > >> Can you expand on what you mean by that ? > >> The concurrency (or lack thereof) of the jobs are more about the > >> design of the job itself -- or the environment it is run from as well > >> as the environment it is run on. > >> > >> The jobs part of the promotion pipeline [1] run a couple times per day. > >> > >> [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/ > >> > >> David Moreau Simard > >> Senior Software Engineer | Openstack RDO > >> > >> dmsimard = [irc, github, twitter] > >> > >> > >> On Wed, Aug 10, 2016 at 9:45 PM, Xiandong Meng > >> wrote: > >> > David, thank you for your response. > >> > > >> > For Packstack and Puppet-OpenStack jobs, I noticed that usually each > job > >> > takes no more than 45 minutes. How many jobs may run in parallel and > how > >> > often they are triggered? > >> > > >> > Regards, > >> > > >> > Alex Meng > >> > mengxiandong at gmail.com > >> > > >> > On Thu, Aug 11, 2016 at 2:24 AM, David Moreau Simard > >> > wrote: > >> >> > >> >> Hi Alex, > >> >> > >> >> I don't know the specifics of resource usage for alternative > >> >> architectures but I can tell about x86_64. > >> >> > >> >> Packstack and Puppet-OpenStack jobs are designed to run within 8GB of > >> >> RAM - either on a single virtual machine or on a single bare metal > >> >> server. > >> >> I would say 4 cores is the minimum (or otherwise job length is > >> >> severely affected), 8 is best. > >> >> Disk space is not generally a concern, easily fitting within 50GB of > >> >> space. > >> >> > >> >> I don't have the numbers for TripleO so I'll let someone else chime > in > >> >> on > >> >> that. > >> >> > >> >> David Moreau Simard > >> >> Senior Software Engineer | Openstack RDO > >> >> > >> >> dmsimard = [irc, github, twitter] > >> >> > >> >> > >> >> On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng < > mengxiandong at gmail.com> > >> >> wrote: > >> >> > We had discussed it a bit in previous RDO meeting on irc. I want to > >> >> > write a > >> >> > separate mail > >> >> > for more broad and in-depth discussion here. 
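[Editorial note] To put the pipeline description and job sizing quoted above into concrete numbers, here is a back-of-the-envelope sketch using the figures from this thread (two sequential gating stages followed by eight parallel test jobs); the per-stage durations are approximations taken or assumed from the discussion, not measured values.

    # Rough wall-clock estimate for one promotion pipeline run.
    get_hash_min = 5          # rdo-promote-get-hash-master (assumed to be short)
    image_build_min = 45      # tripleo-quickstart image build stage (assumed)
    job_count = 8             # 2 tripleo + 3 packstack + 3 puppet jobs fan out next
    typical_job_min = 45      # "45 minutes (or more)" per job
    slowest_job_min = 90      # tripleo scenarios reportedly take ~90 minutes

    sequential = get_hash_min + image_build_min + job_count * typical_job_min
    parallel = get_hash_min + image_build_min + slowest_job_min

    print("fully sequential: ~%d min (~%.1f h)" % (sequential, sequential / 60.0))
    print("8-way parallel:   ~%d min (~%.1f h)" % (parallel, parallel / 60.0))
    # Sequential execution blows past the "over 6 hours" figure quoted above,
    # which is why the eight test jobs run concurrently once the image build
    # succeeds, and why parallel wall time is bounded by the slowest job.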
> >> >> > > >> >> > > >> >> > For a master release (for now it is Newton), i noticed the > promotion > >> >> > pipes > >> >> > fall into three different categories: > >> >> > - Triple-O based CI test > >> >> > - Packstack based test > >> >> > - OpenStack-Puppet based test > >> >> > > >> >> > So what is the base minimal CI requirements to start with for > AltArch > >> >> > support? Since many of the CI test should work even with VMs > instead > >> >> > of > >> >> > physical nodes, can we start > >> >> > with 1-2 physical servers? > >> >> > > >> >> > Regards, > >> >> > > >> >> > Alex Meng > >> >> > mengxiandong at gmail.com > >> >> > > >> >> > _______________________________________________ > >> >> > rdo-list mailing list > >> >> > rdo-list at redhat.com > >> >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> >> > > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Thu Aug 11 15:25:59 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 11 Aug 2016 17:25:59 +0200 Subject: [rdo-list] tripleo-quickstart: new features, and upcoming changes In-Reply-To: References: Message-ID: 2016-08-10 19:19 GMT+02:00 Wesley Hayutin : > Greetings, > It's been a little while since the RDO CI team has updated the community > with new features coming to tripleo-quickstart and CI. > > Since I have your attention I'll walk through some upcoming changes that you > need to be aware of. > Nice summary of upcoming work on oooq but since it is hosted in the openstack namespace, I would have sent it first to openstack-dev list (w/ rdo-list in CC) > We've determined that the role and jinja2 template [1] we use to configure > the overcloud is too large and not very composable. The current template > contains registration, introspection, flavor creation, and network-isolation > setup. > > This will be split into three distinct roles in the future [2] > > ansible-role-tripleo-overcloud-prep-images > ansible-role-tripleo-overcloud-prep-flavors > ansible-role-tripleo-overcloud-prep-network > > Additionally we will be creating a role responsible for copying custom heat > templates, nic-conifgs, and other config to the undercloud to prep for a > deployment [2] > The tripleo-quickstart core's have decided to remove the roles responsible > for configuring and deploying the overcloud from tripleo-quickstart and make > them 3rd party roles (oooq-extras) > > > Now for the new features: > > Paul Belanger has added tripleo-quickstart doc to the openstack doc server. > > http://docs.openstack.org/developer/tripleo-quickstart/ > > Paul Belanger has added a lint job for tripleo-quickstart in the openstack > ci system > > e.g. > http://logs.openstack.org/90/352990/1/check/gate-tripleo-quickstart-linters-ubuntu-xenial/ > > Harry Rybacki is about to land a patch to automatically generate TripleO > documentation via CI execution [3]. This is a great feature to keep TripleO > docs up to date, documenting step by step what CI is doing, and great for > reproducing bugs!! > Attila Darazs has added 3rd party OpenStack CI. You can now run rdo ci > against your openstack reviews. 
> > To test w/ the latest delorean builds use keyword "rdo-ci-testing" > To test w/ the latested promoted RDO quickstart image use keyword > "rdo-ci-check" > > Upcoming features under development: > > Mathieu Bultel has a working proof of concept that takes an already deployed > TripleO environment (virt only), shuts it down, saves it to a file server > and then restores the environment on a new clean libvirt host. This has > helped Mathieu iterate on upgrade testing more efficiently. This work has > practical implications for sales and other areas. > > There is some additional cool stuff on the way, but it's a little too early > to advertise it. Thanks to Paul, Harry, Attila and Mathieu and the rest of > the RDO community! > Great job guys! Regards, H. > > [1] > https://github.com/openstack/tripleo-quickstart/blob/master/roles/tripleo/undercloud/templates/undercloud-install-post.sh.j2 > [2] https://trello.com/c/yMoI2i1d > [3] > https://ci.centos.org/artifacts/rdo/jenkins-poc-hrybacki-tripleo-quickstart-roles-gate-mitaka-doc-generator-32/docs/build/ > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Thu Aug 11 19:57:46 2016 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 11 Aug 2016 15:57:46 -0400 Subject: [rdo-list] tripleo-quickstart: new features, and upcoming changes In-Reply-To: References: Message-ID: On 08/10/2016 01:19 PM, Wesley Hayutin wrote: > # Paul Belanger has added tripleo-quickstart doc to the openstack doc server. > > * http://docs.openstack.org/developer/tripleo-quickstart/ Should we drop this doc from https://www.rdoproject.org/tripleo/ to remove duplication? -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From ggillies at redhat.com Fri Aug 12 03:36:09 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Fri, 12 Aug 2016 13:36:09 +1000 Subject: [rdo-list] Building overcloud images with TripleO and RDO Message-ID: <184fbf80-b1fd-1716-92b3-fb1104a3f1ea@redhat.com> Hi, I spent the last day or two trying to get to the bottom of the issue described at [1], which turned out to be because the version of galera that is in EPEL is higher than what we have in RDO mitaka stable, and when it attempts to get used, mariadb-galera-server fails to start. In order to understand why epel was being pulled in, how to stop it, and how this seemed to have slipped through CI/testing, I've been trying to look through and understand the whole state of the image building process across TripleO, RDO, and our CI. Unfortunately what I've discovered hasn't been great. It looks like there is at least 3 different paths being used to build images. Apologies if anything below is incorrect, it's incredibly convoluted and difficult to follow for someone who isn't intimately familiar with it all (like myself). 1) Using "openstack overcloud image build --all", which is I assume the method end users are supposed to be using, or at least it's the method documented in the docs. 
This uses diskimagebuilder under the hood, but the logic behind it is in python (under python-tripleoclient), with a lot of stuff hardcoded in 2) Using tripleo.sh, which, while it looks like calls "openstack overcloud image build", also has some of it's own logic and messes with things like the ~/.cache/image-create/source-repositories file, which I believe is how the issue at [1] passed CI in the first place 3) Using the ansible role ansible-role-tripleo-image-build [2] which looks like it also uses diskimagebuilder, but through a slightly different approach, by using an ansible library that can take an image definition via yaml (neat!) and then all diskimagebuilder using python-tripleo-common as an intermediary. Which is a different code path (though the code itself looks similar) to python-tripleoclient I feel this issue is hugely important as I believe it is one of the biggest barriers to having more people adopt RDO/TripleO. Too often people encounter issues with deploys that are hard to nail down because we have no real understanding exactly how they built the images, nor as an Operator I don't feel like I have a clear understanding of what I get when I use different options. The bug at [1] is a classic example of something I should never have hit. We do have stable images available at [3] (built using method 3) however there are a number of problems with just using them 1) I think it's perfectly reasonable for people to want to build their own images. It's part of the Open Source philosophy, we want things to be Open and we want to understand how things work, so we can customise, extend, and troubleshoot ourselves. If your image building process is so convoluted that you have to say "just use our prebuilt ones", then you have done something wrong. 2) The images don't get updated (they current mitaka ones were built in April) 3) There is actually nowhere on the RDO website, nor the tripleo website, that actually references their location. So as a new user, you have exactly zero chance of finding these images and using them. I'm not sure what the best process is to start improving this, but it looks like it's complicated enough and involves enough moving pieces that a spec against tripleo might be the way to go? I am thinking the goal would be to move towards everyone having one way, one code path, for building images with TripleO, that could be utilised by all use cases out there. My thinking is the method would take image definitions in a yaml format similar to how ansible-role-tripleo-image-build works, and we can just ship a bunch of different yaml files for all the different image scenarios people might want. e.g. /usr/share/tripleo-images/centos-7-x86_64-mitaka-cbs.yaml /usr/share/tripleo-images/centos-7-x86_64-mitaka-trunk.yaml /usr/share/tripleo-images/centos-7-x86_64-trunk.yaml Etc etc. you could then have a symlink called default.yaml which points to whatever scenario you wish people to use by default, and the scenario could be overwritten by a command line argument. Basically this is exactly how mock [4] works, and has been proven to be a nice, clean, easy to use workflow for people to understand. The added bonus is if people wanted to do their own images, they could copy one of the existing files as a template to start with. If people feel this is worthwhile (and I hope they do), I'm interested in understanding what the next steps would be to get this to happen. 
Regards, Graeme [1] https://bugzilla.redhat.com/show_bug.cgi?id=1365884 [2] https://github.com/redhat-openstack/ansible-role-tripleo-image-build [3] http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ [4] https://fedoraproject.org/wiki/Mock -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From chkumar246 at gmail.com Fri Aug 12 08:25:34 2016 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 12 Aug 2016 13:55:34 +0530 Subject: [rdo-list] Future of Tempest packaging in RDO Message-ID: Hi, Tempest is the OpenStack integration testing suite which is used by upstream CI to test functionality against each patchset submitted. RDO provides: * RPM of modified version of tempest containing the additional script for generating tempest config and configuring tempest easily[VI]. * test sub packages and tempest-service packages for all OpenStack services tempest plugins These were consumed by upstream puppet CI, Weirdo, and OOOQ CI. Currently, below are the lists of problem associated with it: [1]. In upstream puppet CI,OpenStack puppet services modules are tested using openstack-puppet-integration[I]. which installs tempest from git source and tempest-services plugins are installed through rpm using puppet-tempest. The problem here is it allows mixing installing tempest from git and plugins as RPM which is not a configuration that should be supported by RDO. and one of the side effects is you can't use tempest plugins from pure packages unless installing separately tempest. [2]. In OOOQ CI, It uses RDO tempest rpm that installs all the test sub packages. The tempest rpm is currently outdated in terms of Upstream Tempest and some of the functionality which works with the tempest rpm, might not work with upstream so we are again ending up filling bugs. [3.] Circular dependency problems with tempest [II]. The tempest rpm installs all the tempest plugins and test sub packages. Suppose you install the undercloud, try to run tempest from it. You have many packages with the common code for the various services (say: python-ceilometer) which expose the entry point for code which is not available (as it is in python-ceilometer-tests).When you try to run testr list-tests (or other discovery wrappers) and you get tons of errors, until you figure out and install all the required -tests package. This was and is confusing for many people and lead to bugs again [III]. [4.] Problems in installing and testing a tempest plugins separately. For now, only two (designate and horizon) tempest-plugin packages require openstack-tempest [IV][V]. So for installing and testing a tempest plugin, we will again end up with the problem [3] Another case is that Tempest plugins differ from each other in terms of behaviour: some of them need to be specifically disabled if installed but not used, others require specific configuration to run (or not). It causes another problem while testing it. These are the above lists of problems we found in current tempest rpm and others might have some more We are looking forward to discussing the same in order to find a better solution for everyone. Links: [I]. https://github.com/openstack/puppet-openstack-integration/blob/master/run_tests.sh#L85 [II]. https://review.rdoproject.org/r/#/c/1780/ [III]. https://bugzilla.redhat.com/show_bug.cgi?id=1335541 [IV]. https://github.com/rdo-packages/designate-tempest-plugin-distgit/blob/rpm-master/python-designate-tests-tempest.spec#L21 [V]. 
https://review.rdoproject.org/r/#/c/1820/ [VI]. https://review.gerrithub.io/#/c/266365 Thanks, Chandan Kumar From cbrown2 at ocf.co.uk Fri Aug 12 13:52:31 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Fri, 12 Aug 2016 14:52:31 +0100 Subject: [rdo-list] Building overcloud images with TripleO and RDO In-Reply-To: <184fbf80-b1fd-1716-92b3-fb1104a3f1ea@redhat.com> References: <184fbf80-b1fd-1716-92b3-fb1104a3f1ea@redhat.com> Message-ID: <1471009951.2145.48.camel@ocf.co.uk> Hello, Thanks for this Graeme. As the original reporter of this bug, it would be good to understand how the CI process works for stable rdo release. Since downgrading the galera version we have progressed to a point where we now see keystone authentication failures in Step 6 due the use of multidomain support. So I suppose my question would be where the responsibility for maintaining stable RDO lies, where to file bugs and who to contact and what CI takes place currently with these packages and how it can be improved. >From my point of view, as a contributor rarely, deployer of OpenStack mostly (and an operator intermittently), it would be good to have images that are grabbed via yum, similar to the OSP deployment mechanism. These are then the "known good" images and bugs can be filed against these quickly and we reduce the huge variation in images people use to deploy. This would in turn feed back up to the creation of OSP images so I imagine benefit commercial deployments as well. It would then be good to be able to build images if we need to pull in additional packages or customise (I agree with Graeme's point about building these being entirely in keeping with Open Source tradition) but currently we have a situation where there are multiple paths to build an image and multiple images to download from if something is broken. The last time there was a thread on stable deployment, a lot of things got fixed (like buggy introspection so many thanks for that as it now works flawlessly) but it would be good to understand who to talk to when things go wrong. Currently I'm entirely unsure if its tripleo devs, RDO people, Red Hat bugzilla, CentOS bugs, CentOS cloud sig.. etc. If anyone could point me in the right direction I'd be grateful. Thanks to Graeme for debugging the Galera version issue. On Fri, 2016-08-12 at 04:36 +0100, Graeme Gillies wrote: > Hi, > > I spent the last day or two trying to get to the bottom of the issue > described at [1], which turned out to be because the version of > galera > that is in EPEL is higher than what we have in RDO mitaka stable, and > when it attempts to get used, mariadb-galera-server fails to start. > > In order to understand why epel was being pulled in, how to stop it, > and > how this seemed to have slipped through CI/testing, I've been trying > to > look through and understand the whole state of the image building > process across TripleO, RDO, and our CI. > > Unfortunately what I've discovered hasn't been great. It looks like > there is at least 3 different paths being used to build images. > Apologies if anything below is incorrect, it's incredibly convoluted > and > difficult to follow for someone who isn't intimately familiar with it > all (like myself). > > 1) Using "openstack overcloud image build --all", which is I assume > the > method end users are supposed to be using, or at least it's the > method > documented in the docs. 
This uses diskimagebuilder under the hood, > but > the logic behind it is in python (under python-tripleoclient), with a > lot of stuff hardcoded in > > 2) Using tripleo.sh, which, while it looks like calls "openstack > overcloud image build", also has some of it's own logic and messes > with > things like the ~/.cache/image-create/source-repositories file, which > I > believe is how the issue at [1] passed CI in the first place > > 3) Using the ansible role ansible-role-tripleo-image-build [2] which > looks like it also uses diskimagebuilder, but through a slightly > different approach, by using an ansible library that can take an > image > definition via yaml (neat!) and then all diskimagebuilder using > python-tripleo-common as an intermediary. Which is a different code > path > (though the code itself looks similar) to python-tripleoclient > > I feel this issue is hugely important as I believe it is one of the > biggest barriers to having more people adopt RDO/TripleO. Too often > people encounter issues with deploys that are hard to nail down > because > we have no real understanding exactly how they built the images, nor > as > an Operator I don't feel like I have a clear understanding of what I > get > when I use different options. The bug at [1] is a classic example of > something I should never have hit. > > We do have stable images available at [3] (built using method 3) > however > there are a number of problems with just using them > > 1) I think it's perfectly reasonable for people to want to build > their > own images. It's part of the Open Source philosophy, we want things > to > be Open and we want to understand how things work, so we can > customise, > extend, and troubleshoot ourselves. If your image building process is > so > convoluted that you have to say "just use our prebuilt ones", then > you > have done something wrong. > > 2) The images don't get updated (they current mitaka ones were built > in > April) > > 3) There is actually nowhere on the RDO website, nor the tripleo > website, that actually references their location. So as a new user, > you > have exactly zero chance of finding these images and using them. > > I'm not sure what the best process is to start improving this, but it > looks like it's complicated enough and involves enough moving pieces > that a spec against tripleo might be the way to go? I am thinking the > goal would be to move towards everyone having one way, one code path, > for building images with TripleO, that could be utilised by all use > cases out there. > > My thinking is the method would take image definitions in a yaml > format > similar to how ansible-role-tripleo-image-build works, and we can > just > ship a bunch of different yaml files for all the different image > scenarios people might want. e.g. > > /usr/share/tripleo-images/centos-7-x86_64-mitaka-cbs.yaml > /usr/share/tripleo-images/centos-7-x86_64-mitaka-trunk.yaml > /usr/share/tripleo-images/centos-7-x86_64-trunk.yaml > > Etc etc. you could then have a symlink called default.yaml which > points > to whatever scenario you wish people to use by default, and the > scenario > could be overwritten by a command line argument. Basically this is > exactly how mock [4] works, and has been proven to be a nice, clean, > easy to use workflow for people to understand. The added bonus is if > people wanted to do their own images, they could copy one of the > existing files as a template to start with. 
> > If people feel this is worthwhile (and I hope they do), I'm > interested > in understanding what the next steps would be to get this to happen. > > Regards, > > Graeme > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1365884 > [2] https://github.com/redhat-openstack/ansible-role-tripleo-image-bu > ild > [3] > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mita > ka/cbs/ > [4] https://fedoraproject.org/wiki/Mock > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Christopher Brown OpenStack Engineer OCF plc From javier.pena at redhat.com Fri Aug 12 14:22:57 2016 From: javier.pena at redhat.com (Javier Pena) Date: Fri, 12 Aug 2016 10:22:57 -0400 (EDT) Subject: [rdo-list] Fedora: running RDO (openstack-mitaka) In-Reply-To: References: Message-ID: <945084531.1997457.1471011777964.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Main goal: installing any RDO over Fedora 24, starting with all in one. > according to: > https://www.rdoproject.org/install/quickstart/ and > https://www.rdoproject.org/documentation/packstack-all-in-one-diy-configuration/ > sudo yum install https://www.rdoproject.org/repos/rdo-release.rpm > sudo yum update -y > sudo yum install -y openstack-packstack > packstack --allinone --gen-answer_file=answer.txt > and afterwords. > setting my interfaces: > CONFIG_NOVA_NETWORK_PUBIF --novanetwork-pubif > CONFIG_NOVA_COMPUTE_PRIVIF --novacompute-privif > CONFIG_NOVA_NETWORK_PRIVIF --novanetwork-privif > setting additional settings: > CONFIG_PROVISION_DEMO --provision-demo n (y for allinone) > CONFIG_SWIFT_INSTALL --os-swift-install (y for allinone) n Set to y if you > would like PackStack to install Object Storage. > CONFIG_NAGIOS_INSTALL --nagios-install n (y for allinone) Set to y if you > would like to install Nagios. Nagios provides additional tools for > monitoring the OpenStack environment. > CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE --provision-all-in-one-ovs-bridge n (y > for allinone) > packstack --answer-file=answer.txt > 1) first issue I?ve encountered is : > Error: Parameter mode failed on File[rabbitmq.config]: The file mode > specification must be a string, not 'Fixnum' at ? > occurs, > I have to verify: > /usr/lib/python2.7/site-packages/packstack/puppet/templates/amqp.pp > /usr/share/openstack-puppet/modules/module-collectd/manifests/plugin/amqp.pp > and modify the following: > https://review.openstack.org/349908 > 2) second issue I?ve encountered is : > https://bugs.launchpad.net/packstack/+bug/1597951 > after modifying SELinux > /usr/sbin/getenforce > Enforcing > setenforce permissive > /usr/sbin/getenforce > Permissive > seems to resolve it: > 3) third issue is the current issue. > 192.168.13.85_amqp.pp: [ DONE ] <-> previously failed. > 192.168.13.85_mariadb.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > 192.168.13.85_mariadb.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > ERROR : Error appeared during Puppet run: 192.168.13.85_mariadb.pp > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-galera-server' > returned 1: Yum command has been deprecated, redirecting to '/usr/bin/dnf -d > 0 -e 0 -y install mariadb-galera-server'. 
> You will find full trace in log > /var/tmp/packstack/20160802-193240-sGLWV3/manifests/192.168.13.85_mariadb.pp.log > Please check log file > /var/tmp/packstack/20160802-193240-sGLWV3/openstack-setup.log for more > information Hi Nir, I've been running some tests, and we have a couple different issues here: - The RabbitMQ and MariaDB issues are due to the repository used. It is prepared for CentOS 7, so it includes a different MariaDB version (which conflicts with the Fedora one), which causes the last issue you saw. - I've run a quick test after switching to a repo built for Fedora 24 -> https://trunk.rdoproject.org/f24/current/ (this is the current Newton branch, although a bit outdated). I've found some other issues there, which seem to be related to Puppet 4 or Hiera differences, but couldn't get too far. TL;DR: we need to fix Fedora, but currently our focus is on making CentOS 7 work as well as possible. You will probably get a much smoother experience if you try the setup with CentOS, but I'll be happy to review patches to fix Fedora installation as well :). Regards, Javier > Nir Levy > SW Engineer > Web: www.asocstech.com | > Nir Levy > SW Engineer > Web: www.asocstech.com | > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2704 bytes Desc: image001.jpg URL: From mail-lists at karan.org Fri Aug 12 20:30:50 2016 From: mail-lists at karan.org (Karanbir Singh) Date: Fri, 12 Aug 2016 21:30:50 +0100 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: References: Message-ID: <781a340f-10a8-c3e4-d471-da215532883a@karan.org> note that a key element to the messaging from our side is that the content is tested the way users use the code, not the way developers deliver the code - so ideally, we want to get down to baremetal deployments and validating distro v/s rdo and rdo v/s distro. we typically see hundreds of bare metal deployments per day for rdo / cloud SIG, and at the very least we should make an effort to sync across the arch's if we can. so, as we work through what needs doing, lets make sure we factor in the user story we want to deliver at the other end. Regards On 11/08/16 09:05, Xiandong Meng wrote: > OK, so the bottleneck is in "INSTALL / TEST (IMPORT IMAGES)" stage > with 2 Triple-O scenario, 3 Packstack and 3 OpenStack-Puppet test > jobs. So we may need up to (4 core, 8 GB memory)*3 to cover > PackStack and OpenStack-Puppet path as each Triple-O scenario will > need about 90 minutes. (So the shortest time is limited to 90 > minutes assuming we maximize the parallelism level. ) > > Now i need some input about the resource requirements for Triple-O > test scenarios. > > > > Regards, > > Alex Meng mengxiandong at gmail.com > > > On Thu, Aug 11, 2016 at 4:11 PM, David Moreau Simard > > wrote: > > Right now the jobs are sequenced as outlined in the pipeline [1]. > > First, the job "rdo-promote-get-hash-master" will run. If it is > successful, it triggers the next "stage" of the pipeline. Then, the > job "tripleo-quickstart-promote-master-delorean-build-images" will > run. If it is successfull, it triggers the next "stage" of the > pipeline. 
After that you have 8 jobs that, if capacity allows > (usually the case), will all run simultaneously. Otherwise, builds > can be queued until a slave can process them. ... and so on. > > We can't realistically run all these jobs in a sequence. > Considering they complete in 45 minutes (or more), we'd be looking > at a pipeline of over 6 hours of builds. > > [1]: > https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/ > > > > David Moreau Simard Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Aug 10, 2016 at 11:52 PM, Xiandong Meng > > wrote: >> OK, i mean on the same time, how many concurrent CI jobs may be > running? For >> example, at peak load, will > weirdo-master-promote-packstack-scenario001 , >> weirdo-master-promote-packstack-scenario002 , >> weirdo-master-promote-packstack-scenario003 run at the same >> time? > Or they >> will run in sequence by design? >> >> >> Regards, >> >> Alex Meng mengxiandong at gmail.com >> >> On Thu, Aug 11, 2016 at 2:50 PM, David Moreau Simard > > wrote: >>> >>> Alex, >>> >>> Can you expand on what you mean by that ? The concurrency (or >>> lack thereof) of the jobs are more about the design of the job >>> itself -- or the environment it is run from as well as the >>> environment it is run on. >>> >>> The jobs part of the promotion pipeline [1] run a couple times > per day. >>> >>> [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/ > >>> >>> David Moreau Simard Senior Software Engineer | Openstack RDO >>> >>> dmsimard = [irc, github, twitter] >>> >>> >>> On Wed, Aug 10, 2016 at 9:45 PM, Xiandong Meng > > >>> wrote: >>>> David, thank you for your response. >>>> >>>> For Packstack and Puppet-OpenStack jobs, I noticed that >>>> usually > each job >>>> takes no more than 45 minutes. How many jobs may run in > parallel and how >>>> often they are triggered? >>>> >>>> Regards, >>>> >>>> Alex Meng mengxiandong at gmail.com >>>> >>>> >>>> On Thu, Aug 11, 2016 at 2:24 AM, David Moreau Simard > > >>>> wrote: >>>>> >>>>> Hi Alex, >>>>> >>>>> I don't know the specifics of resource usage for >>>>> alternative architectures but I can tell about x86_64. >>>>> >>>>> Packstack and Puppet-OpenStack jobs are designed to run >>>>> within > 8GB of >>>>> RAM - either on a single virtual machine or on a single >>>>> bare metal server. I would say 4 cores is the minimum (or >>>>> otherwise job length is severely affected), 8 is best. Disk >>>>> space is not generally a concern, easily fitting within > 50GB of >>>>> space. >>>>> >>>>> I don't have the numbers for TripleO so I'll let someone >>>>> else > chime in >>>>> on that. >>>>> >>>>> David Moreau Simard Senior Software Engineer | Openstack >>>>> RDO >>>>> >>>>> dmsimard = [irc, github, twitter] >>>>> >>>>> >>>>> On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng > > >>>>> wrote: >>>>>> We had discussed it a bit in previous RDO meeting on irc. >>>>>> I > want to >>>>>> write a separate mail for more broad and in-depth >>>>>> discussion here. >>>>>> >>>>>> >>>>>> For a master release (for now it is Newton), i noticed >>>>>> the > promotion >>>>>> pipes fall into three different categories: - Triple-O >>>>>> based CI test - Packstack based test - OpenStack-Puppet >>>>>> based test >>>>>> >>>>>> So what is the base minimal CI requirements to start >>>>>> with > for AltArch >>>>>> support? Since many of the CI test should work even with >>>>>> VMs > instead >>>>>> of physical nodes, can we start with 1-2 physical >>>>>> servers? 
>>>>>> >>>>>> Regards, >>>>>> >>>>>> Alex Meng mengxiandong at gmail.com >>>>>> >>>>>> >>>>>> _______________________________________________ rdo-list >>>>>> mailing list rdo-list at redhat.com >>>>>> >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>>>> >>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>>> >>>> >> >> > > -- Karanbir Singh, Project Lead, The CentOS Project +44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS GnuPG Key : http://www.karan.org/publickey.asc -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc From bderzhavets at hotmail.com Sat Aug 13 05:06:20 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 13 Aug 2016 05:06:20 +0000 Subject: [rdo-list] RDO CI hardware requirements for CentOS altarch In-Reply-To: <781a340f-10a8-c3e4-d471-da215532883a@karan.org> References: , <781a340f-10a8-c3e4-d471-da215532883a@karan.org> Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Karanbir Singh Sent: Friday, August 12, 2016 4:30 PM To: Xiandong Meng; David Moreau Simard Cc: rdo-list Subject: Re: [rdo-list] RDO CI hardware requirements for CentOS altarch note that a key element to the messaging from our side is that the content is tested the way users use the code, not the way developers deliver the code - so ideally, we want to get down to baremetal deployments and validating distro v/s rdo and rdo v/s distro. > we typically see hundreds of bare metal deployments per day for rdo / > cloud SIG, and at the very least we should make an effort to sync > across the arch's if we can. In regards of RDO how much is percentage of TripleO bare metal deployments per Mitaka ( per Liberty ) ? If this info is closed by some reasons , sorry for question been asked. Thanks. Boris >so, as we work through what needs doing, lets make sure we factor in >the user story we want to deliver at the other end. Regards On 11/08/16 09:05, Xiandong Meng wrote: > OK, so the bottleneck is in "INSTALL / TEST (IMPORT IMAGES)" stage > with 2 Triple-O scenario, 3 Packstack and 3 OpenStack-Puppet test > jobs. So we may need up to (4 core, 8 GB memory)*3 to cover > PackStack and OpenStack-Puppet path as each Triple-O scenario will > need about 90 minutes. (So the shortest time is limited to 90 > minutes assuming we maximize the parallelism level. ) > > Now i need some input about the resource requirements for Triple-O > test scenarios. > > > > Regards, > > Alex Meng mengxiandong at gmail.com > > > On Thu, Aug 11, 2016 at 4:11 PM, David Moreau Simard > > wrote: > > Right now the jobs are sequenced as outlined in the pipeline [1]. > > First, the job "rdo-promote-get-hash-master" will run. If it is > successful, it triggers the next "stage" of the pipeline. Then, the > job "tripleo-quickstart-promote-master-delorean-build-images" will > run. If it is successfull, it triggers the next "stage" of the > pipeline. After that you have 8 jobs that, if capacity allows > (usually the case), will all run simultaneously. Otherwise, builds > can be queued until a slave can process them. ... and so on. > > We can't realistically run all these jobs in a sequence. > Considering they complete in 45 minutes (or more), we'd be looking > at a pipeline of over 6 hours of builds. 
> > [1]: > https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/ > > > > David Moreau Simard Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Aug 10, 2016 at 11:52 PM, Xiandong Meng > > wrote: >> OK, i mean on the same time, how many concurrent CI jobs may be > running? For >> example, at peak load, will > weirdo-master-promote-packstack-scenario001 , >> weirdo-master-promote-packstack-scenario002 , >> weirdo-master-promote-packstack-scenario003 run at the same >> time? > Or they >> will run in sequence by design? >> >> >> Regards, >> >> Alex Meng mengxiandong at gmail.com >> >> On Thu, Aug 11, 2016 at 2:50 PM, David Moreau Simard > > wrote: >>> >>> Alex, >>> >>> Can you expand on what you mean by that ? The concurrency (or >>> lack thereof) of the jobs are more about the design of the job >>> itself -- or the environment it is run from as well as the >>> environment it is run on. >>> >>> The jobs part of the promotion pipeline [1] run a couple times > per day. >>> >>> [1]: https://ci.centos.org/view/rdo/view/promotion-pipeline/ > >>> >>> David Moreau Simard Senior Software Engineer | Openstack RDO >>> >>> dmsimard = [irc, github, twitter] >>> >>> >>> On Wed, Aug 10, 2016 at 9:45 PM, Xiandong Meng > > >>> wrote: >>>> David, thank you for your response. >>>> >>>> For Packstack and Puppet-OpenStack jobs, I noticed that >>>> usually > each job >>>> takes no more than 45 minutes. How many jobs may run in > parallel and how >>>> often they are triggered? >>>> >>>> Regards, >>>> >>>> Alex Meng mengxiandong at gmail.com >>>> >>>> >>>> On Thu, Aug 11, 2016 at 2:24 AM, David Moreau Simard > > >>>> wrote: >>>>> >>>>> Hi Alex, >>>>> >>>>> I don't know the specifics of resource usage for >>>>> alternative architectures but I can tell about x86_64. >>>>> >>>>> Packstack and Puppet-OpenStack jobs are designed to run >>>>> within > 8GB of >>>>> RAM - either on a single virtual machine or on a single >>>>> bare metal server. I would say 4 cores is the minimum (or >>>>> otherwise job length is severely affected), 8 is best. Disk >>>>> space is not generally a concern, easily fitting within > 50GB of >>>>> space. >>>>> >>>>> I don't have the numbers for TripleO so I'll let someone >>>>> else > chime in >>>>> on that. >>>>> >>>>> David Moreau Simard Senior Software Engineer | Openstack >>>>> RDO >>>>> >>>>> dmsimard = [irc, github, twitter] >>>>> >>>>> >>>>> On Wed, Aug 10, 2016 at 6:18 AM, Xiandong Meng > > >>>>> wrote: >>>>>> We had discussed it a bit in previous RDO meeting on irc. >>>>>> I > want to >>>>>> write a separate mail for more broad and in-depth >>>>>> discussion here. >>>>>> >>>>>> >>>>>> For a master release (for now it is Newton), i noticed >>>>>> the > promotion >>>>>> pipes fall into three different categories: - Triple-O >>>>>> based CI test - Packstack based test - OpenStack-Puppet >>>>>> based test >>>>>> >>>>>> So what is the base minimal CI requirements to start >>>>>> with > for AltArch >>>>>> support? Since many of the CI test should work even with >>>>>> VMs > instead >>>>>> of physical nodes, can we start with 1-2 physical >>>>>> servers? 
>>>>>> >>>>>> Regards, >>>>>> >>>>>> Alex Meng mengxiandong at gmail.com >>>>>> >>>>>> >>>>>> _______________________________________________ rdo-list >>>>>> mailing list rdo-list at redhat.com >>>>>> >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>>>> >>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>>> >>>> >> >> > > -- Karanbir Singh, Project Lead, The CentOS Project +44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS GnuPG Key : http://www.karan.org/publickey.asc -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From frederic.lepied at redhat.com Sat Aug 13 09:21:37 2016 From: frederic.lepied at redhat.com (=?UTF-8?B?RnLDqWTDqXJpYyBMZXBpZWQ=?=) Date: Sat, 13 Aug 2016 11:21:37 +0200 Subject: [rdo-list] Proposition to get more CI promotions Message-ID: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> Hi, Our CI promotion system is as if we were running downhill: we cannot stop and the only way to fix issues is to move forward by getting new fixed packages. And it usually takes days before we can get the fixes and by the time we get them we can have other issues appearing and so never being able to have a promotion. I would like to propose an improved way of working to get more CI promotions. When we have failures in our tests, we do like we do today: debug the issues and find the commits that are causing the regression and working with upstream or fixing packaging issues to solve the regression. The proposed improvement to get more CI promotions is, while we wait for the fixes to be ready, to get the oldest commit that is currently causing an issue from the current analysis and to try the previous commits in reverse order to promote before the issues appear. With the database of DLRN we have all the information to be able to implement this backward tries and have more chances to promote. I'm in holidays next week but I wanted to bring this idea before been completely off. WDYT? -- Fred - May the Source be with you From nhicher at redhat.com Mon Aug 15 13:45:26 2016 From: nhicher at redhat.com (Nicolas Hicher) Date: Mon, 15 Aug 2016 09:45:26 -0400 Subject: [rdo-list] Scheduled maintenance of review.rdoproject.org: 2016-08-24 13:30 UTC Message-ID: <3795275d-4d49-02dd-68fd-e5ecc6665d5f@redhat.com> Hello folks, We plan to upgrade review.rdoproject.org on 2016-08-24 13:30 UTC (next Wednesday). The downtime will be about 1 hour approximately. 
This is a maintenance upgrade to the last stable version of Software Factory 2.2.3, the changelog is: http://softwarefactory-project.io/r/gitweb?p=software-factory.git;a=blob_plain;f=CHANGELOG.md Regards, Software Factory Team, on behalf of rdo-infra From hguemar at fedoraproject.org Mon Aug 15 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 15 Aug 2016 15:00:03 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160815150003.D5FAC60A4003@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-08-17 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From imaslov at dispersivegroup.com Mon Aug 15 16:08:13 2016 From: imaslov at dispersivegroup.com (Ilja Maslov) Date: Mon, 15 Aug 2016 16:08:13 +0000 Subject: [rdo-list] centos-release-openstack-mitaka Message-ID: Hi, I've been trying to use centos-release-openstack-mitaka repo to deploy undercloud, build overcloud images (with some hacks to use the same repo) and deploy the overcloud. The rationale behind it is to use Centos-provided repos with production deployment in mind. rdo-trunk-mitaka-tested repo has many bugs fixed, but I do not think it can be qualified as production-ready repository. Long story short, centos-release-openstack-mitaka has os-net-config package at version 0.2.2-1.el7, which broke Linux Bond configurations [1]. The issue had been fixed back in April and made it to RHEL OSP (Liberty), but not to centos-release-openstack-mitaka. I'm not sure if rdo-list is the best place to ask this question, but would appreciate if you can point me towards resources describing how Centos stable repos for OpenStack are maintained, tested and packages promoted to newer versions. It feels as if today, one has a choice between a bit outdated Centos repo and too hot rdo-trunk-tested. Is there anything in between that can be used for production deployments? Thanks, Ilja 1. https://bugzilla.redhat.com/show_bug.cgi?id=1323717 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Mon Aug 15 17:42:14 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 15 Aug 2016 13:42:14 -0400 Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> Message-ID: On Sat, Aug 13, 2016 at 5:21 AM, Fr?d?ric Lepied wrote: > The proposed improvement to get more CI promotions is, while we wait > for the fixes to be ready, to get the oldest commit that is currently > causing an issue from the current analysis and to try the previous > commits in reverse order to promote before the issues appear. With the > database of DLRN we have all the information to be able to implement > this backward tries and have more chances to promote. If I understand this correctly, it would amount to promoting what would essentially be an inconsistent repository, right ? The whole point of consistent was to avoid promoting repositories which had out-of-date packages in them. 
If I didn't understand this correctly, please do explain :) David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From cbrown2 at ocf.co.uk Mon Aug 15 19:59:37 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Mon, 15 Aug 2016 20:59:37 +0100 Subject: [rdo-list] centos-release-openstack-mitaka In-Reply-To: References: Message-ID: <1471291177.20756.132.camel@ocf.co.uk> Hi Ilja, On Mon, 2016-08-15 at 17:08 +0100, Ilja Maslov wrote: > Hi, > > I?ve been trying to use centos-release-openstack-mitaka repo to > deploy undercloud, build overcloud images (with some hacks to use the > same repo) and deploy the overcloud. I think you are running into the same issues we are. > The rationale behind it is to use Centos-provided repos with > production deployment in mind. > rdo-trunk-mitaka-tested repo has many bugs fixed, but I do not think > it can be qualified as production-ready repository. Agreed, production deployment at the moment with RDO is not a pretty picture. Sorry to be blunt but this is my experience after around a year of working with it. We have to pin package versions, mash repositories together, run deployments from nightly images and the like. We generally resort to the OSP documentation for a definitive source. > Long story short, centos-release-openstack-mitaka has os-net-config > package at version 0.2.2-1.el7, which broke Linux Bond configurations > [1]. We also have issues with incorrect galera affecting HA and a bug with keystone v3 domain support > The issue had been fixed back in April and made it to RHEL OSP > (Liberty), but not to centos-release-openstack-mitaka. > > I?m not sure if rdo-list is the best place to ask this question, but > would appreciate if you can point me towards resources describing how > Centos stable repos for OpenStack are maintained, tested and packages > promoted to newer versions. I would also really appreciate this so there can be a cohesive effort towards maintaining a stable rdo release. > It feels as if today, one has a choice between a bit outdated Centos > repo and too hot rdo-trunk-tested. Is there anything in between that > can be used for production deployments? I think the idea with the CentOS repo is that it only gets updated with when upstream release an update. But there doesn't seem to be any CI that tests whether this setup actually works. I have added this as a topic for Wednesday's IRC meeting: https://etherpad.openstack.org/p/RDO-Meeting but it is unlikely I will be able to attend this as I will be travelling. > Thanks, > Ilja > > 1. https://bugzilla.redhat.com/show_bug.cgi?id=1323717 > -- Regards, Christopher From mrunge at redhat.com Tue Aug 16 06:35:20 2016 From: mrunge at redhat.com (Matthias Runge) Date: Tue, 16 Aug 2016 08:35:20 +0200 Subject: [rdo-list] centos-release-openstack-mitaka In-Reply-To: <1471291177.20756.132.camel@ocf.co.uk> References: <1471291177.20756.132.camel@ocf.co.uk> Message-ID: <7ca51fe5-518b-b9f1-e2e5-cc1ea6175286@redhat.com> On 15/08/16 21:59, Christopher Brown wrote: > Hi Ilja, > > On Mon, 2016-08-15 at 17:08 +0100, Ilja Maslov wrote: >> Hi, >> > > I think the idea with the CentOS repo is that it only gets updated with > when upstream release an update. But there doesn't seem to be any CI > that tests whether this setup actually works. > > I have added this as a topic for Wednesday's IRC meeting: > > https://etherpad.openstack.org/p/RDO-Meeting > > but it is unlikely I will be able to attend this as I will be > travelling. 
> Thank you for your feedback, it really helps. If you have the feeling, nobody listening, that's wrong. Please share your experience, where you had to pin versions etc. It would be good to avoid others running into the same issues and to stabilize quicker. About CI, other folks might want to comment on the state. Matthias -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From cbrown2 at ocf.co.uk Tue Aug 16 07:58:35 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Tue, 16 Aug 2016 08:58:35 +0100 Subject: [rdo-list] centos-release-openstack-mitaka In-Reply-To: <7ca51fe5-518b-b9f1-e2e5-cc1ea6175286@redhat.com> References: <1471291177.20756.132.camel@ocf.co.uk> <7ca51fe5-518b-b9f1-e2e5-cc1ea6175286@redhat.com> Message-ID: <1471334315.20756.145.camel@ocf.co.uk> Hi Matthias, On Tue, 2016-08-16 at 07:35 +0100, Matthias Runge wrote: > On 15/08/16 21:59, Christopher Brown wrote: > > > > Hi Ilja, > > > > On Mon, 2016-08-15 at 17:08 +0100, Ilja Maslov wrote: > > > > > > Hi, > > >? > > > > I think the idea with the CentOS repo is that it only gets updated > > with > > when upstream release an update. But there doesn't seem to be any > > CI > > that tests whether this setup actually works. > > > > I have added this as a topic for Wednesday's IRC meeting: > > > > https://etherpad.openstack.org/p/RDO-Meeting > > > > but it is unlikely I will be able to attend this as I will be > > travelling. > > > > Thank you for your feedback, it really helps. If you have the > feeling, > nobody listening, that's wrong. Thanks, much appreciated. I am very keen to be part of a fix for this. We do our fair share of OSP and RDO deployments and both have their quirks but the ability to re-use config across both means that the RDO/OSP model works very well for us. > Please share your experience, where you had to pin versions etc. It > would be good to avoid others running into the same issues and to > stabilize quicker. Last time I think trown made fairly obvious suggestion that a bugzilla at least would be good, therefore: https://bugzilla.redhat.com/show_bug.cgi?id=1365884 to which ggillies compiled a good response here: https://www.redhat.com/archives/rdo-list/2016-August/msg00127.html but I have yet to see any feedback (I think it was a holiday in the US yesterday?) This is where we have pinned galera to the earlier working version Graeme identified. The keystone v3 error we have not logged as we have reverted temporarily to v2 to carry out tempest tests. I will file a bug for that once we replicate this again. > About CI, other folks might want to comment on the state. I think really the issue is with building the images here. The stable "openstack overcloud image build" doesn't appear to get tested as this is broken. When I followed the "Install from Mitaka branch" instructions here: http://tripleo.org/basic_deployment/basic_deployment_cli.html#get-image s it pulled in packages from newton-testing, though that could have been an error on my part. I think what would be good would be to know who to contact, file bugs, submit patches etc for the stable branch. 
> Matthias > > -- > Matthias Runge > > Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Michael Cunningham, > Michael O'Neill, Eric Shander > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Christopher Brown From mengxiandong at gmail.com Tue Aug 16 08:03:37 2016 From: mengxiandong at gmail.com (Xiandong Meng) Date: Tue, 16 Aug 2016 20:03:37 +1200 Subject: [rdo-list] source rpm repo for OpenStack Newton Dependencies Message-ID: I can find the binary dependency rpms at http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-newton/ , but where can i find the corresponding source rpm repo? Regards, Alex Meng mengxiandong at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Tue Aug 16 08:57:47 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 16 Aug 2016 10:57:47 +0200 Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> Message-ID: On Sat, Aug 13, 2016 at 11:21 AM, Fr?d?ric Lepied wrote: > Hi, > > Our CI promotion system is as if we were running downhill: we cannot > stop and the only way to fix issues is to move forward by getting new > fixed packages. And it usually takes days before we can get the fixes > and by the time we get them we can have other issues appearing and so > never being able to have a promotion. > > I would like to propose an improved way of working to get more CI > promotions. When we have failures in our tests, we do like we do > today: debug the issues and find the commits that are causing the > regression and working with upstream or fixing packaging issues to > solve the regression. > > The proposed improvement to get more CI promotions is, while we wait > for the fixes to be ready, to get the oldest commit that is currently > causing an issue from the current analysis and to try the previous > commits in reverse order to promote before the issues appear. With the > database of DLRN we have all the information to be able to implement > this backward tries and have more chances to promote. > > I'm in holidays next week but I wanted to bring this idea before been > completely off. WDYT? In the past we already applied this approach a couple of times for upstream issues that took longer that desired to be fixed (these cases lead us to move to the current u-c pinning for libraries). IMO, this has some drawbacks: - Breaks the consistency principle we try to enforce among packages in RDO repos. - Requires manual definition of the versions or commits for each package. - When a package is pinned to a specific commit or version, we stop packaging and testing any new commit, so we are delaying and piling the detection of new issues that may appear in the package. - Currently, DLRN doesn't manage properly moving back to a previous commit whenever a new one has been built. It requires manual error-prone tasks to be performed on the dlrn instance (anyway it'd be probably a good idea to implement this us. IMO, we shouldn't use this as a systematic approach for managing versions in our repos to enforce promotions but only as last resort for exceptional cases. 
Anyway, it's probably a good idea to improve dlrn to implement this use case in a easier and cleaner way. > -- > Fred - May the Source be with you > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Tue Aug 16 10:26:33 2016 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 16 Aug 2016 06:26:33 -0400 (EDT) Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> Message-ID: <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On Sat, Aug 13, 2016 at 11:21 AM, Fr?d?ric Lepied > wrote: > > Hi, > > > > Our CI promotion system is as if we were running downhill: we cannot > > stop and the only way to fix issues is to move forward by getting new > > fixed packages. And it usually takes days before we can get the fixes > > and by the time we get them we can have other issues appearing and so > > never being able to have a promotion. > > > > I would like to propose an improved way of working to get more CI > > promotions. When we have failures in our tests, we do like we do > > today: debug the issues and find the commits that are causing the > > regression and working with upstream or fixing packaging issues to > > solve the regression. > > > > The proposed improvement to get more CI promotions is, while we wait > > for the fixes to be ready, to get the oldest commit that is currently > > causing an issue from the current analysis and to try the previous > > commits in reverse order to promote before the issues appear. With the > > database of DLRN we have all the information to be able to implement > > this backward tries and have more chances to promote. > > > > I'm in holidays next week but I wanted to bring this idea before been > > completely off. WDYT? > > In the past we already applied this approach a couple of times for > upstream issues that took longer that desired to be fixed (these cases > lead us to move to the current u-c pinning for libraries). IMO, this > has some drawbacks: > - Breaks the consistency principle we try to enforce among packages in > RDO repos. > - Requires manual definition of the versions or commits for each package. Well, if I understood Fr?d?ric's proposal correctly, that's not what we'd do, just go back to the list of previous commits and start trying backwards. They don't even have to be for the same package. Using an example, let's say we have the following commits, from newest to oldest: - Commit 2 to openstack-nova (consistent) - Commit 2 to openstack-cinder (consistent) - Commit 1 to openstack-cinder (consistent) - Commit 1 to openstack-keystone (consistent, previous promoted commit) Just by luck, when the promotion job is started, it tests commit 2 to openstack-nova, and fails due to a new issue. If we can go back just one commit, to "commit 2 to openstack-cinder", and it is passes CI, we could still have a consistent repo promoted, just not the very last one at the time of running the CI job. Actually, this makes sense to me. By doing this, we could promote the last known good+consistent repo, while we fix the latest issue. > - When a package is pinned to a specific commit or version, we stop > packaging and testing any new commit, so we are delaying and piling > the detection of new issues that may appear in the package. 
> - Currently, DLRN doesn't manage properly moving back to a previous > commit whenever a new one has been built. It requires manual > error-prone tasks to be performed on the dlrn instance (anyway it'd be > probably a good idea to implement this us. > > IMO, we shouldn't use this as a systematic approach for managing > versions in our repos to enforce promotions but only as last resort > for exceptional cases. Anyway, it's probably a good idea to improve > dlrn to implement this use case in a easier and cleaner way. > This requires some good thinking. Right now, the only way I know is to fiddle with the DLRN database, which is quite scary and can lead to inconsistencies. Maybe we can come up with a better approach. Regards, Javier > > > -- > > Fred - May the Source be with you > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From amoralej at redhat.com Tue Aug 16 10:41:07 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 16 Aug 2016 12:41:07 +0200 Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Aug 16, 2016 at 12:26 PM, Javier Pena wrote: > > > ----- Original Message ----- >> On Sat, Aug 13, 2016 at 11:21 AM, Fr?d?ric Lepied >> wrote: >> > Hi, >> > >> > Our CI promotion system is as if we were running downhill: we cannot >> > stop and the only way to fix issues is to move forward by getting new >> > fixed packages. And it usually takes days before we can get the fixes >> > and by the time we get them we can have other issues appearing and so >> > never being able to have a promotion. >> > >> > I would like to propose an improved way of working to get more CI >> > promotions. When we have failures in our tests, we do like we do >> > today: debug the issues and find the commits that are causing the >> > regression and working with upstream or fixing packaging issues to >> > solve the regression. >> > >> > The proposed improvement to get more CI promotions is, while we wait >> > for the fixes to be ready, to get the oldest commit that is currently >> > causing an issue from the current analysis and to try the previous >> > commits in reverse order to promote before the issues appear. With the >> > database of DLRN we have all the information to be able to implement >> > this backward tries and have more chances to promote. >> > >> > I'm in holidays next week but I wanted to bring this idea before been >> > completely off. WDYT? >> >> In the past we already applied this approach a couple of times for >> upstream issues that took longer that desired to be fixed (these cases >> lead us to move to the current u-c pinning for libraries). IMO, this >> has some drawbacks: >> - Breaks the consistency principle we try to enforce among packages in >> RDO repos. >> - Requires manual definition of the versions or commits for each package. 
> > Well, if I understood Fr?d?ric's proposal correctly, that's not what we'd do, just go back to the list of previous commits and start trying backwards. They don't even have to be for the same package. > > Using an example, let's say we have the following commits, from newest to oldest: > > - Commit 2 to openstack-nova (consistent) > - Commit 2 to openstack-cinder (consistent) > - Commit 1 to openstack-cinder (consistent) > - Commit 1 to openstack-keystone (consistent, previous promoted commit) > > Just by luck, when the promotion job is started, it tests commit 2 to openstack-nova, and fails due to a new issue. If we can go back just one commit, to "commit 2 to openstack-cinder", and it is passes CI, we could still have a consistent repo promoted, just not the very last one at the time of running the CI job. > > Actually, this makes sense to me. By doing this, we could promote the last known good+consistent repo, while we fix the latest issue. > OK, i understood "try the previous commits in reverse order", he was referring to the same package but re-reading the mail, you are probably right about the proposal. This approach is easier and better, as we don't break consistency and don't need any trick in dlrn side. However, it will not work if issues of different packages overlap (as we had last week, in fact we were discussing on irc about setting a specific hash but at that time there was not "sane" repo), but probably may help in some cases. >> - When a package is pinned to a specific commit or version, we stop >> packaging and testing any new commit, so we are delaying and piling >> the detection of new issues that may appear in the package. >> - Currently, DLRN doesn't manage properly moving back to a previous >> commit whenever a new one has been built. It requires manual >> error-prone tasks to be performed on the dlrn instance (anyway it'd be >> probably a good idea to implement this us. >> >> IMO, we shouldn't use this as a systematic approach for managing >> versions in our repos to enforce promotions but only as last resort >> for exceptional cases. Anyway, it's probably a good idea to improve >> dlrn to implement this use case in a easier and cleaner way. >> > > This requires some good thinking. Right now, the only way I know is to fiddle with the DLRN database, which is quite scary and can lead to inconsistencies. Maybe we can come up with a better approach. > Yeap, it's probably not an easy change but i think it's still worthy to do it as I'm pretty sure we'll find scenarios were we'll need it, don't you think so? 
> Regards, > Javier > >> >> > -- >> > Fred - May the Source be with you >> > >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Tue Aug 16 11:32:24 2016 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 16 Aug 2016 07:32:24 -0400 (EDT) Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> Message-ID: <1356320818.3174992.1471347144925.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On Tue, Aug 16, 2016 at 12:26 PM, Javier Pena wrote: > > > > > > ----- Original Message ----- > >> On Sat, Aug 13, 2016 at 11:21 AM, Fr?d?ric Lepied > >> wrote: > >> > Hi, > >> > > >> > Our CI promotion system is as if we were running downhill: we cannot > >> > stop and the only way to fix issues is to move forward by getting new > >> > fixed packages. And it usually takes days before we can get the fixes > >> > and by the time we get them we can have other issues appearing and so > >> > never being able to have a promotion. > >> > > >> > I would like to propose an improved way of working to get more CI > >> > promotions. When we have failures in our tests, we do like we do > >> > today: debug the issues and find the commits that are causing the > >> > regression and working with upstream or fixing packaging issues to > >> > solve the regression. > >> > > >> > The proposed improvement to get more CI promotions is, while we wait > >> > for the fixes to be ready, to get the oldest commit that is currently > >> > causing an issue from the current analysis and to try the previous > >> > commits in reverse order to promote before the issues appear. With the > >> > database of DLRN we have all the information to be able to implement > >> > this backward tries and have more chances to promote. > >> > > >> > I'm in holidays next week but I wanted to bring this idea before been > >> > completely off. WDYT? > >> > >> In the past we already applied this approach a couple of times for > >> upstream issues that took longer that desired to be fixed (these cases > >> lead us to move to the current u-c pinning for libraries). IMO, this > >> has some drawbacks: > >> - Breaks the consistency principle we try to enforce among packages in > >> RDO repos. > >> - Requires manual definition of the versions or commits for each package. > > > > Well, if I understood Fr?d?ric's proposal correctly, that's not what we'd > > do, just go back to the list of previous commits and start trying > > backwards. They don't even have to be for the same package. > > > > Using an example, let's say we have the following commits, from newest to > > oldest: > > > > - Commit 2 to openstack-nova (consistent) > > - Commit 2 to openstack-cinder (consistent) > > - Commit 1 to openstack-cinder (consistent) > > - Commit 1 to openstack-keystone (consistent, previous promoted commit) > > > > Just by luck, when the promotion job is started, it tests commit 2 to > > openstack-nova, and fails due to a new issue. 
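To illustrate the selection rule being discussed (purely a sketch -- none of the helper commands below exist in DLRN or the promotion jobs, they only stand in for the steps): the pipeline would walk consistent repo hashes from newest to oldest and promote the first one that passes CI.

  # Hypothetical promotion loop; all three helpers are made up for this sketch:
  for hash in $(list-consistent-hashes --newest-first); do
      if run-promotion-pipeline "$hash"; then
          promote-link current-passed-ci "$hash"
          break
      fi
  done

The promoted repo stays consistent because it is one of DLRN's own snapshots; it is just not necessarily the newest snapshot at the time the job ran.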
If we can go back just one > > commit, to "commit 2 to openstack-cinder", and it is passes CI, we could > > still have a consistent repo promoted, just not the very last one at the > > time of running the CI job. > > > > Actually, this makes sense to me. By doing this, we could promote the last > > known good+consistent repo, while we fix the latest issue. > > > > OK, i understood "try the previous commits in reverse order", he was > referring to the same package but re-reading the mail, you are > probably right about the proposal. This approach is easier and better, > as we don't break consistency and don't need any trick in dlrn side. > However, it will not work if issues of different packages overlap (as > we had last week, in fact we were discussing on irc about setting a > specific hash but at that time there was not "sane" repo), but > probably may help in some cases. Yes, well, with multiple failures it will go to the last commit before any of the two started happening, which is ok. I just wonder how we could detect if a repo is consistent, but that's a minor implementation detail. > > >> - When a package is pinned to a specific commit or version, we stop > >> packaging and testing any new commit, so we are delaying and piling > >> the detection of new issues that may appear in the package. > >> - Currently, DLRN doesn't manage properly moving back to a previous > >> commit whenever a new one has been built. It requires manual > >> error-prone tasks to be performed on the dlrn instance (anyway it'd be > >> probably a good idea to implement this us. > >> > >> IMO, we shouldn't use this as a systematic approach for managing > >> versions in our repos to enforce promotions but only as last resort > >> for exceptional cases. Anyway, it's probably a good idea to improve > >> dlrn to implement this use case in a easier and cleaner way. > >> > > > > This requires some good thinking. Right now, the only way I know is to > > fiddle with the DLRN database, which is quite scary and can lead to > > inconsistencies. Maybe we can come up with a better approach. > > > > Yeap, it's probably not an easy change but i think it's still worthy > to do it as I'm pretty sure we'll find scenarios were we'll need it, > don't you think so? > With the current approach (using the upper-constraints for master), we should only need to go backwards if there's an upstream revert due to a bug, but yeah, that's a situation we'd like to have covered. 
> > Regards, > > Javier > > > >> > >> > -- > >> > Fred - May the Source be with you > >> > > >> > > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From imaslov at dispersivegroup.com Tue Aug 16 14:26:55 2016 From: imaslov at dispersivegroup.com (Ilja Maslov) Date: Tue, 16 Aug 2016 14:26:55 +0000 Subject: [rdo-list] centos-release-openstack-mitaka In-Reply-To: <1471334315.20756.145.camel@ocf.co.uk> References: <1471291177.20756.132.camel@ocf.co.uk> <7ca51fe5-518b-b9f1-e2e5-cc1ea6175286@redhat.com> <1471334315.20756.145.camel@ocf.co.uk> Message-ID: <2750691c0f564c5093eb7685c6d1b252@svr2-disp-exch.dispersive.local> So, how exactly newer packages get into CentOS repo? I see about 7 newer versions of the os-net-config in various repos, ranging between versions 0.2.2 and 5.0.0. RHEL OSP errata updates to 0.2.3, I believe. Is there a place I can check to find out when os-net-config package will get updated in CentOS repo? Do CentOS repos follow RHEL updates and errata? Image creation is a very important subject, but if there is no stable repo, no matter what process is used to create images, they will be as good as the repo the packages are pulled from :) Thanks, Ilja -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Christopher Brown Sent: Tuesday, August 16, 2016 3:59 AM To: mrunge at redhat.com; rdo-list at redhat.com Subject: Re: [rdo-list] centos-release-openstack-mitaka Hi Matthias, On Tue, 2016-08-16 at 07:35 +0100, Matthias Runge wrote: > On 15/08/16 21:59, Christopher Brown wrote: > > > > Hi Ilja, > > > > On Mon, 2016-08-15 at 17:08 +0100, Ilja Maslov wrote: > > > > > > Hi, > > >? > > > > I think the idea with the CentOS repo is that it only gets updated > > with when upstream release an update. But there doesn't seem to be > > any CI that tests whether this setup actually works. > > > > I have added this as a topic for Wednesday's IRC meeting: > > > > https://etherpad.openstack.org/p/RDO-Meeting > > > > but it is unlikely I will be able to attend this as I will be > > travelling. > > > > Thank you for your feedback, it really helps. If you have the feeling, > nobody listening, that's wrong. Thanks, much appreciated. I am very keen to be part of a fix for this. We do our fair share of OSP and RDO deployments and both have their quirks but the ability to re-use config across both means that the RDO/OSP model works very well for us. > Please share your experience, where you had to pin versions etc. It > would be good to avoid others running into the same issues and to > stabilize quicker. 
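On the narrower "which repo ships which os-net-config" part of the question, the client-side answer is already available from stock yum tooling; assuming yum-utils is installed so that repoquery is present, the commands below show every build the enabled repos expose and where each one comes from. When a newer build actually lands in the Cloud SIG repo is still a CBS/SIG tagging question.

    # Every os-net-config build visible to the enabled repos.
    yum --showduplicates list os-net-config

    # The same data, one line per build, with the repo it comes from.
    repoquery --show-duplicates \
        --queryformat '%{name}-%{version}-%{release}  repo=%{repoid}' \
        os-net-config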
Last time I think trown made fairly obvious suggestion that a bugzilla at least would be good, therefore: https://bugzilla.redhat.com/show_bug.cgi?id=1365884 to which ggillies compiled a good response here: https://www.redhat.com/archives/rdo-list/2016-August/msg00127.html but I have yet to see any feedback (I think it was a holiday in the US yesterday?) This is where we have pinned galera to the earlier working version Graeme identified. The keystone v3 error we have not logged as we have reverted temporarily to v2 to carry out tempest tests. I will file a bug for that once we replicate this again. > About CI, other folks might want to comment on the state. I think really the issue is with building the images here. The stable "openstack overcloud image build" doesn't appear to get tested as this is broken. When I followed the "Install from Mitaka branch" instructions here: http://tripleo.org/basic_deployment/basic_deployment_cli.html#get-image s it pulled in packages from newton-testing, though that could have been an error on my part. I think what would be good would be to know who to contact, file bugs, submit patches etc for the stable branch. > Matthias > > -- > Matthias Runge > > Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, Managing > Directors: Charles Cachera, Michael Cunningham, > Michael O'Neill, Eric Shander > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Christopher Brown _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From whayutin at redhat.com Tue Aug 16 14:32:16 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 16 Aug 2016 10:32:16 -0400 Subject: [rdo-list] rdo-infra scrum 8/16 Message-ID: https://bluejeans.com/s/abLd/ https://review.rdoproject.org/etherpad/p/rdo-infra-scrum Highlights: Demo: rdoproject infra and oooq deployments design by bkero Demo: oooq generated documentation for TripleO by hrybacki Plan: Preparing oooq to meet the requirements to replace instack-virt-setup Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Aug 16 20:21:09 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 16 Aug 2016 16:21:09 -0400 Subject: [rdo-list] FYI.. newton has been promoted again Message-ID: \0/ Last Promotion 8/12/2016 Current Promotion 8/16/2016 https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/633/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Aug 17 02:48:20 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 16 Aug 2016 22:48:20 -0400 Subject: [rdo-list] FYI.. newton has been promoted again In-Reply-To: References: Message-ID: Two in a row for newton master https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/634/ Sagi, I would expect that the tripleo periodic jobs would pass on their next run and update the tripleo pin [1] periodic-tripleo-ci-centos-7-ovb-ha is passing, however periodic-tripleo-ci-centos-7-ovb-nonha is timing out. Please investigate the nonha job tomorrow. Thank you! 
[1] https://dashboards.rdoproject.org/rdo-dev On Tue, Aug 16, 2016 at 4:21 PM, Wesley Hayutin wrote: > \0/ > > Last Promotion 8/12/2016 > Current Promotion 8/16/2016 > > https://ci.centos.org/view/rdo/view/promotion-pipeline/ > job/rdo-delorean-promote-master/633/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From myoung at redhat.com Wed Aug 17 05:55:18 2016 From: myoung at redhat.com (Matt Young) Date: Wed, 17 Aug 2016 01:55:18 -0400 Subject: [rdo-list] Building overcloud images with TripleO and RDO In-Reply-To: <184fbf80-b1fd-1716-92b3-fb1104a3f1ea@redhat.com> References: <184fbf80-b1fd-1716-92b3-fb1104a3f1ea@redhat.com> Message-ID: Hi! First, thanks for digging into the recent issue detailed below. I agree that: --- We can always do more to document what processes and tooling we have, and clearly we need to! Open mechanisms that allow anyone to build images, see how it all works, customize, override, debug, etc are indeed both ideal and IMHO are a requirement. Declarative mechanisms are an ideal base for tools that can adapt, that are a joy to use, can evolve rationally, and are robust enough in the long term to be used broadly. We have a variety of use-cases, scenarios, and users to consider. It?s a big tent! --- Regarding ansible-role-tripleo-image-build (artib) [1], I wanted to chime in briefly with a few thoughts and references to help advance the conversation. Earlier this year this was all new to me, so I thought others might be in the same place. --- tripleo-quickstart (oooq) [2] has the primary goal of providing an ?easy button? for new users/contributors. It is designed to deploy current stable bits [3] to libvirt quickly and reliably, using declarative configuration for topology and inputs [4]. It?s does so using the documented steps to deploy tripleo, and encourages learning and onboarding. It can run the the generated scripts (on the undercloud) or the user can. It manages to do so in a way that that is a boon to CI and iterative development workflows. A set of building blocks [5] exist and is growing, and is being used in our CI today. artib packages images for use with oooq, but does not attempt to define a new build mechanism. It uses a shared upstream library (tripleo-common [6]) which invokes DIB [7] to create the images. The input to tripleo-common is declarative YAML, and there?s a CLI interface provided as well. Regarding the pre-built images, we?re using them not because it?s ?too hard? but rather ?saves time in CI.? Using the same tools anyone can create their own images, tweak them, put custom test or debug tools in them, etc. One can also simply (optionally) use one to quickly verify a bug, or experiment, or ____. We are also building images today to enable stuff like the oooq-usbkey [8] (cool right ?!?!) --- +1 to talking more about this, and submitting blueprints on how we can advance tooling around image building. I?m an interested party that?s new to OpenStack, and look forward to collaborating with (all of) you. Let?s brainstorm how we might improve both the discoverability and utility of the various image building tools, or what they might look like moving forward. What are we missing? What can we do/make better? In my view this begins with listening - to understand the requirements, constraints, and expectations that folks have for these tools. Let?s recognize up front that while converging/consolidating the toolchains might be a possible outcome, this is OpenStack. 
Carrots > Sticks, and it might just happen naturally. I think these tools have immense potential to help the OpenStack development process. Given the recent advances in both containers and composable roles, as well as the plethora of tools we have already, I look forward to seeing how we can improve our collective union-set ?utility belt? [9] Matt [1] https://github.com/redhat-openstack/ansible-role-tripleo-image-build [2] https://github.com/openstack/tripleo-quickstart [3] http://artifacts.ci.centos.org/rdo/images/master/delorean/stable/ [4] https://github.com/openstack/tripleo-quickstart/blob/master/doc/configuring.md [5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role-tripleo [6] https://github.com/openstack/tripleo-common/tree/master/tripleo_common [7] http://docs.openstack.org/developer/diskimage-builder/ [8] https://www.rdoproject.org/tripleo/oooq-usbkey/ [9] yes...batman. He is/was self-created! On 08/11/2016 11:36 PM, Graeme Gillies wrote: > Hi, > > I spent the last day or two trying to get to the bottom of the issue > described at [1], which turned out to be because the version of galera > that is in EPEL is higher than what we have in RDO mitaka stable, and > when it attempts to get used, mariadb-galera-server fails to start. > > In order to understand why epel was being pulled in, how to stop it, and > how this seemed to have slipped through CI/testing, I've been trying to > look through and understand the whole state of the image building > process across TripleO, RDO, and our CI. > > Unfortunately what I've discovered hasn't been great. It looks like > there is at least 3 different paths being used to build images. > Apologies if anything below is incorrect, it's incredibly convoluted and > difficult to follow for someone who isn't intimately familiar with it > all (like myself). > > 1) Using "openstack overcloud image build --all", which is I assume the > method end users are supposed to be using, or at least it's the method > documented in the docs. This uses diskimagebuilder under the hood, but > the logic behind it is in python (under python-tripleoclient), with a > lot of stuff hardcoded in > > 2) Using tripleo.sh, which, while it looks like calls "openstack > overcloud image build", also has some of it's own logic and messes with > things like the ~/.cache/image-create/source-repositories file, which I > believe is how the issue at [1] passed CI in the first place > > 3) Using the ansible role ansible-role-tripleo-image-build [2] which > looks like it also uses diskimagebuilder, but through a slightly > different approach, by using an ansible library that can take an image > definition via yaml (neat!) and then all diskimagebuilder using > python-tripleo-common as an intermediary. Which is a different code path > (though the code itself looks similar) to python-tripleoclient > > I feel this issue is hugely important as I believe it is one of the > biggest barriers to having more people adopt RDO/TripleO. Too often > people encounter issues with deploys that are hard to nail down because > we have no real understanding exactly how they built the images, nor as > an Operator I don't feel like I have a clear understanding of what I get > when I use different options. The bug at [1] is a classic example of > something I should never have hit. 
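On the galera/EPEL breakage behind that bug, the two usual client-side stop-gaps while a packaging fix lands are to mask the package in the EPEL repo definition or to lock the known-good version in place. Package names below are illustrative; check the bugzilla for the exact builds involved.

    # Option 1: never let EPEL offer its newer galera build.
    yum install -y yum-utils
    yum-config-manager --save --setopt='epel.exclude=galera*'

    # Option 2: lock whatever galera version is currently installed.
    yum install -y yum-plugin-versionlock
    yum versionlock add galera
    yum versionlock list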
> > We do have stable images available at [3] (built using method 3) however > there are a number of problems with just using them > > 1) I think it's perfectly reasonable for people to want to build their > own images. It's part of the Open Source philosophy, we want things to > be Open and we want to understand how things work, so we can customise, > extend, and troubleshoot ourselves. If your image building process is so > convoluted that you have to say "just use our prebuilt ones", then you > have done something wrong. > > 2) The images don't get updated (they current mitaka ones were built in > April) > > 3) There is actually nowhere on the RDO website, nor the tripleo > website, that actually references their location. So as a new user, you > have exactly zero chance of finding these images and using them. > > I'm not sure what the best process is to start improving this, but it > looks like it's complicated enough and involves enough moving pieces > that a spec against tripleo might be the way to go? I am thinking the > goal would be to move towards everyone having one way, one code path, > for building images with TripleO, that could be utilised by all use > cases out there. > > My thinking is the method would take image definitions in a yaml format > similar to how ansible-role-tripleo-image-build works, and we can just > ship a bunch of different yaml files for all the different image > scenarios people might want. e.g. > > /usr/share/tripleo-images/centos-7-x86_64-mitaka-cbs.yaml > /usr/share/tripleo-images/centos-7-x86_64-mitaka-trunk.yaml > /usr/share/tripleo-images/centos-7-x86_64-trunk.yaml > > Etc etc. you could then have a symlink called default.yaml which points > to whatever scenario you wish people to use by default, and the scenario > could be overwritten by a command line argument. Basically this is > exactly how mock [4] works, and has been proven to be a nice, clean, > easy to use workflow for people to understand. The added bonus is if > people wanted to do their own images, they could copy one of the > existing files as a template to start with. > > If people feel this is worthwhile (and I hope they do), I'm interested > in understanding what the next steps would be to get this to happen. > > Regards, > > Graeme > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1365884 > [2] https://github.com/redhat-openstack/ansible-role-tripleo-image-build > [3] > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ > [4] https://fedoraproject.org/wiki/Mock From nirl at asocsnetworks.com Wed Aug 17 08:15:02 2016 From: nirl at asocsnetworks.com (Nir Levy) Date: Wed, 17 Aug 2016 08:15:02 +0000 Subject: [rdo-list] Fedora: running RDO (openstack-mitaka) Message-ID: Hi everybody, Any scheduled items regarding this issue? regards, Nir. From: Nir Levy Sent: Tuesday, August 9, 2016 2:50 PM To: rdo-list ; Javier Pena Subject: Fedora: running RDO (openstack-mitaka) Main goal: installing any RDO over Fedora 24, starting with all in one. according to: https://www.rdoproject.org/install/quickstart/ and https://www.rdoproject.org/documentation/packstack-all-in-one-diy-configuration/ sudo yum install https://www.rdoproject.org/repos/rdo-release.rpm sudo yum update -y sudo yum install -y openstack-packstack packstack --allinone --gen-answer_file=answer.txt and afterwords. 
setting my interfaces: CONFIG_NOVA_NETWORK_PUBIF --novanetwork-pubif CONFIG_NOVA_COMPUTE_PRIVIF --novacompute-privif CONFIG_NOVA_NETWORK_PRIVIF --novanetwork-privif setting additional settings: CONFIG_PROVISION_DEMO --provision-demo n (y for allinone) CONFIG_SWIFT_INSTALL --os-swift-install (y for allinone) n Set to y if you would like PackStack to install Object Storage. CONFIG_NAGIOS_INSTALL --nagios-install n (y for allinone) Set to y if you would like to install Nagios. Nagios provides additional tools for monitoring the OpenStack environment. CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE --provision-all-in-one-ovs-bridge n (y for allinone) packstack --answer-file=answer.txt 1) first issue I've encountered is : Error: Parameter mode failed on File[rabbitmq.config]: The file mode specification must be a string, not 'Fixnum' at ... occurs, I have to verify: /usr/lib/python2.7/site-packages/packstack/puppet/templates/amqp.pp /usr/share/openstack-puppet/modules/module-collectd/manifests/plugin/amqp.pp and modify the following: https://review.openstack.org/349908 2) second issue I've encountered is : https://bugs.launchpad.net/packstack/+bug/1597951 after modifying SELinux /usr/sbin/getenforce Enforcing setenforce permissive /usr/sbin/getenforce Permissive seems to resolve it: 3) third issue is the current issue. 192.168.13.85_amqp.pp: [ DONE ] <-> previously failed. 192.168.13.85_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] 192.168.13.85_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.13.85_mariadb.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-galera-server' returned 1: Yum command has been deprecated, redirecting to '/usr/bin/dnf -d 0 -e 0 -y install mariadb-galera-server'. You will find full trace in log /var/tmp/packstack/20160802-193240-sGLWV3/manifests/192.168.13.85_mariadb.pp.log Please check log file /var/tmp/packstack/20160802-193240-sGLWV3/openstack-setup.log for more information Nir Levy SW Engineer Web: www.asocstech.com | [cid:image001.jpg at 01D1B599.5A2C9530] Nir Levy SW Engineer Web: www.asocstech.com | [cid:image001.jpg at 01D1B599.5A2C9530] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2704 bytes Desc: image001.jpg URL: From hguemar at fedoraproject.org Wed Aug 17 09:30:02 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 17 Aug 2016 11:30:02 +0200 Subject: [rdo-list] source rpm repo for OpenStack Newton Dependencies In-Reply-To: References: Message-ID: 2016-08-16 10:03 GMT+02:00 Xiandong Meng : > I can find the binary dependency rpms at > http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-newton/ , but > where can i find the corresponding source rpm repo? > > Regards, > > Alex Meng > mengxiandong at gmail.com > They're hosted on the vault: http://vault.centos.org/7.2.1511/cloud/Source/openstack-mitaka/ Regards, H. 
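If only a handful of those source packages are needed, plain HTTP against the vault tree is enough, assuming the usual directory index is enabled; the file name in the second command is only an example of the naming pattern, so check the listing for the real one.

    BASE=http://vault.centos.org/7.2.1511/cloud/Source/openstack-mitaka
    # See which source packages the tree actually carries...
    curl -s "$BASE/" | grep -o 'href="[^"]*\.src\.rpm"' | sort | head
    # ...then fetch the one you need (illustrative file name only).
    curl -O "$BASE/openstack-nova-13.0.0-1.el7.src.rpm"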
> _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From bderzhavets at hotmail.com Wed Aug 17 10:23:51 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 17 Aug 2016 10:23:51 +0000 Subject: [rdo-list] Fedora: running RDO (openstack-mitaka) In-Reply-To: References: Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Nir Levy Sent: Wednesday, August 17, 2016 4:15 AM To: rdo-list; Javier Pena Cc: Yan Fridland Subject: Re: [rdo-list] Fedora: running RDO (openstack-mitaka) Hi everybody, Any scheduled items regarding this issue? regards, Nir. From: Nir Levy Sent: Tuesday, August 9, 2016 2:50 PM To: rdo-list ; Javier Pena Subject: Fedora: running RDO (openstack-mitaka) Main goal: installing any RDO over Fedora 24, starting with all in one. according to: https://www.rdoproject.org/install/quickstart/ and https://www.rdoproject.org/documentation/packstack-all-in-one-diy-configuration/ Packstack quickstart - RDO www.rdoproject.org Packstack quickstart: Proof of concept for single node. Packstack is an installation utility that lets you spin up a proof of concept cloud on one node. sudo yum install https://www.rdoproject.org/repos/rdo-release.rpm sudo yum update -y sudo yum install -y openstack-packstack You are installing stuff supposed to provide packstack and all other openstack packages for RHEL 7.X Schedule rebuild all of them for F24 at least. I am not talking about CI implementation for F24. RDO IS NOT SUPPORTED on Fedora, unless all patching and builds would be done by Community members. It was stated officially . Boris. packstack --allinone --gen-answer_file=answer.txt and afterwords. setting my interfaces: CONFIG_NOVA_NETWORK_PUBIF --novanetwork-pubif CONFIG_NOVA_COMPUTE_PRIVIF --novacompute-privif CONFIG_NOVA_NETWORK_PRIVIF --novanetwork-privif setting additional settings: CONFIG_PROVISION_DEMO --provision-demo n (y for allinone) CONFIG_SWIFT_INSTALL --os-swift-install (y for allinone) n Set to y if you would like PackStack to install Object Storage. CONFIG_NAGIOS_INSTALL --nagios-install n (y for allinone) Set to y if you would like to install Nagios. Nagios provides additional tools for monitoring the OpenStack environment. CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE --provision-all-in-one-ovs-bridge n (y for allinone) packstack --answer-file=answer.txt 1) first issue I've encountered is : Error: Parameter mode failed on File[rabbitmq.config]: The file mode specification must be a string, not 'Fixnum' at ... occurs, I have to verify: /usr/lib/python2.7/site-packages/packstack/puppet/templates/amqp.pp /usr/share/openstack-puppet/modules/module-collectd/manifests/plugin/amqp.pp and modify the following: https://review.openstack.org/349908 2) second issue I've encountered is : https://bugs.launchpad.net/packstack/+bug/1597951 after modifying SELinux /usr/sbin/getenforce Enforcing setenforce permissive /usr/sbin/getenforce Permissive seems to resolve it: 3) third issue is the current issue. 192.168.13.85_amqp.pp: [ DONE ] <-> previously failed. 
192.168.13.85_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] 192.168.13.85_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.13.85_mariadb.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-galera-server' returned 1: Yum command has been deprecated, redirecting to '/usr/bin/dnf -d 0 -e 0 -y install mariadb-galera-server'. You will find full trace in log /var/tmp/packstack/20160802-193240-sGLWV3/manifests/192.168.13.85_mariadb.pp.log Please check log file /var/tmp/packstack/20160802-193240-sGLWV3/openstack-setup.log for more information Nir Levy SW Engineer Web: www.asocstech.com | [cid:image001.jpg at 01D1B599.5A2C9530] Nir Levy SW Engineer Web: www.asocstech.com | [cid:image001.jpg at 01D1B599.5A2C9530] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2704 bytes Desc: image001.jpg URL: From mengxiandong at gmail.com Wed Aug 17 11:32:29 2016 From: mengxiandong at gmail.com (Xiandong Meng) Date: Wed, 17 Aug 2016 23:32:29 +1200 Subject: [rdo-list] source rpm repo for OpenStack Newton Dependencies In-Reply-To: References: Message-ID: OK, but i am looking for the OpenStack-Newton dependency source rpms. I could not find that on vault. Regards, Alex Meng mengxiandong at gmail.com On Wed, Aug 17, 2016 at 9:30 PM, Ha?kel wrote: > 2016-08-16 10:03 GMT+02:00 Xiandong Meng : > > I can find the binary dependency rpms at > > http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-newton/ , > but > > where can i find the corresponding source rpm repo? > > > > Regards, > > > > Alex Meng > > mengxiandong at gmail.com > > > > They're hosted on the vault: > http://vault.centos.org/7.2.1511/cloud/Source/openstack-mitaka/ > > Regards, > H. > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Wed Aug 17 13:54:04 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 17 Aug 2016 15:54:04 +0200 Subject: [rdo-list] source rpm repo for OpenStack Newton Dependencies In-Reply-To: References: Message-ID: 2016-08-17 13:32 GMT+02:00 Xiandong Meng : > OK, but i am looking for the OpenStack-Newton dependency source rpms. I > could not find that on vault. > > Regards, > > Alex Meng > mengxiandong at gmail.com > We haven't imported Newton on CBS. We only have DLRN builds (source rpms are in the same directory): https://trunk.rdoproject.org/centos7/current/ Regards, H. > On Wed, Aug 17, 2016 at 9:30 PM, Ha?kel wrote: >> >> 2016-08-16 10:03 GMT+02:00 Xiandong Meng : >> > I can find the binary dependency rpms at >> > http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-newton/ , >> > but >> > where can i find the corresponding source rpm repo? >> > >> > Regards, >> > >> > Alex Meng >> > mengxiandong at gmail.com >> > >> >> They're hosted on the vault: >> http://vault.centos.org/7.2.1511/cloud/Source/openstack-mitaka/ >> >> Regards, >> H. 
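For the DLRN side, the usual pattern is to drop the generated repo file into yum and to look for sources in the same per-commit directories; e.g. on a CentOS 7 box (the second command just inspects the web listing behind the 'current' symlink, assuming directory indexes are enabled):

    # Enable the current trunk (DLRN) repo.
    sudo curl -o /etc/yum.repos.d/delorean.repo \
        https://trunk.rdoproject.org/centos7/current/delorean.repo

    # Each per-commit directory keeps the .src.rpm next to the binary rpms;
    # 'current' points at the most recent build.
    curl -s https://trunk.rdoproject.org/centos7/current/ | grep -o 'href="[^"]*\.src\.rpm"'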
>> >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From jslagle at redhat.com Wed Aug 17 16:11:54 2016 From: jslagle at redhat.com (James Slagle) Date: Wed, 17 Aug 2016 12:11:54 -0400 Subject: [rdo-list] FYI.. newton has been promoted again In-Reply-To: References: Message-ID: <20160817161154.GU15056@localhost.localdomain> On Tue, Aug 16, 2016 at 10:48:20PM -0400, Wesley Hayutin wrote: > Two in a row for newton master > https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/634/ > > Sagi, > I would expect that the tripleo periodic jobs would pass on their next run > and update the tripleo pin [1] > periodic-tripleo-ci-centos-7-ovb-ha is passing, however > periodic-tripleo-ci-centos-7-ovb-nonha is timing out. > Please investigate the nonha job tomorrow. It's the same issue that it's been for quite a while. Our nonha test uses undercloud ssl, and Ironic had a regression in switching to use the public endpoint during instance deployment instead of the internal endpoint. Since the public endpoint uses SSL, IPA can not verify the server certificate. TripleO Bug: https://bugs.launchpad.net/tripleo/+bug/1613088 Ironic Bug (shows root cause): https://bugs.launchpad.net/ironic/+bug/1613331 The Ironic fix we've been trying to get merged: https://review.openstack.org/#/c/355537/ (their CI has been done for a few days due to unrelated reasons). I'd recommend that RDO CI switch to also configure SSL on the undercloud, for at least some jobs. I don't think it makes a lot of sense to promote a repo that doesn't work with SSL for TripleO's case, but perhaps it does for RDO. -- -- James Slagle -- From javier.pena at redhat.com Wed Aug 17 17:03:34 2016 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 17 Aug 2016 13:03:34 -0400 (EDT) Subject: [rdo-list] [Meeting] RDO meeting (2016-08-17) Minutes Message-ID: <1133136662.3537528.1471453414310.JavaMail.zimbra@redhat.com> ============================== #rdo: RDO meeting - 2016-08-17 ============================== Meeting started by jpena at 15:00:37 UTC. The full logs are available at https://meetbot.fedoraproject.org/rdo/2016-08-17/rdo_meeting_-_2016-08-17.2016-08-17-15.00.log.html . 
Meeting summary --------------- * roll call (jpena, 15:00:46) * RDO Stable Mitaka is broken, please can it be given some love (jpena, 15:04:36) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1365884 (jpena, 15:05:19) * LINK: http://cbs.centos.org/koji/buildinfo?buildID=10246 (number80, 15:07:11) * TripleO image is broken, not Mitaka RDO (jruzicka, 15:20:08) * ACTION: adarazs will investigate 1365884 (number80, 15:22:42) * DLRN, symlinks and CDN (jpena, 15:22:59) * ACTION: dmsimard to explore a solution to expose custom symlinks on the public trunk.rdoproject.org instance (dmsimard, 15:38:24) * Tempest packaging (jpena, 15:39:02) * ACTION: dmellado to restart the Tempest packaging thread (jpena, 15:54:31) * reviews needing eyes (jpena, 15:54:44) * LINK: https://review.rdoproject.org/r/#/c/1847/ (jpena, 15:55:06) * LINK: https://review.rdoproject.org/r/#/c/1275/ (jpena, 15:55:16) * CentOS Cloud SIG meetings (Thursday, 15:00 UTC) (jpena, 15:57:07) * LINK: https://etherpad.openstack.org/p/centos-cloud-sig (rbowen, 15:57:10) * OpenStack Summit (jpena, 16:00:20) * LINK: https://etherpad.openstack.org/p/rdo-barcelona-summit-booth (rbowen, 16:00:22) * ACTION: everyone interested sign up at https://etherpad.openstack.org/p/rdo-barcelona-summit-booth (jpena, 16:01:56) * Chair for next meeting (jpena, 16:02:25) * ACTION: number80 chair next week (number80, 16:02:55) Meeting ended at 16:03:35 UTC. Action Items ------------ * adarazs will investigate 1365884 * dmsimard to explore a solution to expose custom symlinks on the public trunk.rdoproject.org instance * dmellado to restart the Tempest packaging thread * everyone interested sign up at https://etherpad.openstack.org/p/rdo-barcelona-summit-booth * number80 chair next week Action Items, by person ----------------------- * adarazs * adarazs will investigate 1365884 * dmellado * dmellado to restart the Tempest packaging thread * dmsimard * dmsimard to explore a solution to expose custom symlinks on the public trunk.rdoproject.org instance * number80 * number80 chair next week * **UNASSIGNED** * everyone interested sign up at https://etherpad.openstack.org/p/rdo-barcelona-summit-booth People Present (lines said) --------------------------- * jpena (62) * dmsimard (54) * number80 (44) * dmellado (42) * rbowen (25) * jruzicka (20) * tosky (15) * zodbot (13) * dhellmann (11) * weshay (9) * amoralej (4) * pabelanger (4) * myoung (3) * imcsk8 (3) * jschlueter (2) * hewbrocca (2) * adarazs (2) * hrybacki (1) * jjoyce (1) * coolsvap (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From rbowen at redhat.com Wed Aug 17 21:07:49 2016 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 17 Aug 2016 17:07:49 -0400 Subject: [rdo-list] Unanswered 'rdo' questions on ask.openstack.org Message-ID: Thanks again to everyone that helps answer questions on ask.openstack.org. Some of the ones that have been hanging around here for a while are due to be closed due to inactivity and lack of response, so the list should be shorter next time. I hope. 44 unanswered questions: Installing Openstack Mitaka RDO On CentOS 7 server with a public through SSH only! 
https://ask.openstack.org/en/question/95712/installing-openstack-mitaka-rdo-on-centos-7-server-with-a-public-through-ssh-only/ Tags: rdo, mitaka, allinone, install-openstack, openstack-packstack keystone api "The request you have made requires authentication" https://ask.openstack.org/en/question/95672/keystone-api-the-request-you-have-made-requires-authentication/ Tags: keystone, api, error error during installation of openstack rdo on centos 7 https://ask.openstack.org/en/question/95657/error-during-installation-of-openstack-rdo-on-centos-7/ Tags: rdo, devstack#mitaka multi nodes provider network ovs config https://ask.openstack.org/en/question/95423/multi-nodes-provider-network-ovs-config/ Tags: rdo, liberty-neutron adding rdo-packages https://ask.openstack.org/en/question/95380/adding-rdo-packages/ Tags: rdo RDO TripleO Mitaka HA Overcloud Failing https://ask.openstack.org/en/question/95249/rdo-tripleo-mitaka-ha-overcloud-failing/ Tags: mitaka, tripleo, overcloud, centos7 RDO - is there any fedora package newer than puppet-4.2.1-3.fc24.noarch.rpm https://ask.openstack.org/en/question/94969/rdo-is-there-any-fedora-package-newer-than-puppet-421-3fc24noarchrpm/ Tags: rdo, puppet, install-openstack OpenStack RDO mysqld 100% cpu https://ask.openstack.org/en/question/94961/openstack-rdo-mysqld-100-cpu/ Tags: openstack, mysqld, cpu Failed to set RDO repo on host-packstact-centOS-7 https://ask.openstack.org/en/question/94828/failed-to-set-rdo-repo-on-host-packstact-centos-7/ Tags: openstack-packstack, centos7, rdo how to deploy haskell-distributed in RDO? https://ask.openstack.org/en/question/94785/how-to-deploy-haskell-distributed-in-rdo/ Tags: rdo How to set quota for domain and have it shared with all the projects/tenants in domain https://ask.openstack.org/en/question/94105/how-to-set-quota-for-domain-and-have-it-shared-with-all-the-projectstenants-in-domain/ Tags: domainquotadriver rdo tripleO liberty undercloud install failing https://ask.openstack.org/en/question/94023/rdo-tripleo-liberty-undercloud-install-failing/ Tags: rdo, rdo-manager, liberty, undercloud, instack Add new compute node for TripleO deployment in virtual environment https://ask.openstack.org/en/question/93703/add-new-compute-node-for-tripleo-deployment-in-virtual-environment/ Tags: compute, tripleo, liberty, virtual, baremetal Unable to start Ceilometer services https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/ Tags: ceilometer, ceilometer-api Adding hard drive space to RDO installation https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ Tags: cinder, openstack, space, add AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ Tags: openstack, networking, aws ceilometer: I've installed openstack mitaka. 
but swift stops working when i configured the pipeline and ceilometer filter https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ Tags: ceilometer, openstack-swift, mitaka Fail on installing the controller on Cent OS 7 https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ Tags: installation, centos7, controller the error of service entity and API endpoints https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ Tags: service, entity, and, api, endpoints Running delorean fails: Git won't fetch sources https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ Tags: delorean, rdo Keystone authentication: Failed to contact the endpoint. https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ Tags: keystone, authenticate, endpoint, murano Liberty RDO: stack resource topology icons are pink https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ Tags: stack, resource, topology, dashboard Build of instance aborted: Block Device Mapping is Invalid. https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ Tags: cinder, lvm, centos7 No handlers could be found for logger "oslo_config.cfg" while syncing the glance database https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ Tags: liberty, glance, install-openstack how to use chef auto manage openstack in RDO? https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ Tags: chef, rdo Separate Cinder storage traffic from management https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ Tags: cinder, separate, nic, iscsi Openstack installation fails using packstack, failure is in installation of openstack-nova-compute. Error: Dependency Package[nova-compute] has failures https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ Tags: novacompute, rdo, packstack, dependency, failure CentOS OpenStack - compute node can't talk https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ Tags: rdo How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ? https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ Tags: rdo, liberty, swift, ha VM and container can't download anything from internet https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ Tags: rdo, neutron, network, connectivity -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From ggillies at redhat.com Thu Aug 18 03:47:32 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Thu, 18 Aug 2016 13:47:32 +1000 Subject: [rdo-list] Updated mitaka stable images for tripleo Message-ID: Hi, In a follow up to my previous post about image building in tripleo/RDO, do we currently have a timeline, process, or anything else regarding when rebuilt tripleo images are rebuilt and uploaded to buildlogs.centos.org? E.g. 
if you look at the mitaka images http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ They were built back in April. Can we get something in place to have them updated more frequently? Regards, Graeme -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From hbrock at redhat.com Thu Aug 18 04:06:10 2016 From: hbrock at redhat.com (Hugh Brock) Date: Thu, 18 Aug 2016 06:06:10 +0200 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: Message-ID: On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: > > Hi, > > In a follow up to my previous post about image building in tripleo/RDO, > do we currently have a timeline, process, or anything else regarding > when rebuilt tripleo images are rebuilt and uploaded to > buildlogs.centos.org? E.g. if you look at the mitaka images > > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ > > They were built back in April. Can we get something in place to have > them updated more frequently? > > Regards, > > Graeme > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com Graeme thanks for raising this. RDO folks, should we not be building images every time we promote? If we are doing that, should we not be linking them more obviously? -Hugh -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Thu Aug 18 07:14:55 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 18 Aug 2016 09:14:55 +0200 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: Message-ID: 2016-08-18 6:06 GMT+02:00 Hugh Brock : > On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: >> >> Hi, >> >> In a follow up to my previous post about image building in tripleo/RDO, >> do we currently have a timeline, process, or anything else regarding >> when rebuilt tripleo images are rebuilt and uploaded to >> buildlogs.centos.org? E.g. if you look at the mitaka images >> >> >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ >> >> They were built back in April. Can we get something in place to have >> them updated more frequently? >> https://ci.centos.org/artifacts/rdo/images/ >> Regards, >> >> Graeme >> >> -- >> Graeme Gillies >> Principal Systems Administrator >> Openstack Infrastructure >> Red Hat Australia >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > Graeme thanks for raising this. > > RDO folks, should we not be building images every time we promote? If we are > doing that, should we not be linking them more obviously? > > -Hugh > Seems like sync is broken. @KB; maybe, I'm missing some context but since John is in PTO, can you look at it? https://bugs.centos.org/view.php?id=10697 Regards, H. 
> > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ggillies at redhat.com Thu Aug 18 07:42:06 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Thu, 18 Aug 2016 17:42:06 +1000 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: Message-ID: <3f9ed6c8-03cf-8bf2-4c2c-d8a8fc7cb707@redhat.com> On 18/08/16 17:14, Ha?kel wrote: > 2016-08-18 6:06 GMT+02:00 Hugh Brock : >> On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: >>> >>> Hi, >>> >>> In a follow up to my previous post about image building in tripleo/RDO, >>> do we currently have a timeline, process, or anything else regarding >>> when rebuilt tripleo images are rebuilt and uploaded to >>> buildlogs.centos.org? E.g. if you look at the mitaka images >>> >>> >>> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ >>> >>> They were built back in April. Can we get something in place to have >>> them updated more frequently? >>> > > https://ci.centos.org/artifacts/rdo/images/ If you look under the mitaka directory at this link, you will notice the only directory there is delorean. I'm not talking about delorean, I'm not interested in that. I'm interested in images built using the package manifests from centos-release-openstack-mitaka and its dependencies (Ceph, qemu-ev, etc). The images at the link I gave originally are images that were built with those CBS, non-delorean, repos Regards, Graeme > >>> Regards, >>> >>> Graeme >>> >>> -- >>> Graeme Gillies >>> Principal Systems Administrator >>> Openstack Infrastructure >>> Red Hat Australia >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> Graeme thanks for raising this. >> >> RDO folks, should we not be building images every time we promote? If we are >> doing that, should we not be linking them more obviously? >> >> -Hugh >> > > Seems like sync is broken. > > @KB; maybe, I'm missing some context but since John is in PTO, can you > look at it? > https://bugs.centos.org/view.php?id=10697 > > Regards, > H. > >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From bderzhavets at hotmail.com Thu Aug 18 07:59:10 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 18 Aug 2016 07:59:10 +0000 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: , Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Ha?kel Sent: Thursday, August 18, 2016 3:14 AM To: Hugh Brock Cc: Karanbir Singh; rdo-list Subject: Re: [rdo-list] Updated mitaka stable images for tripleo 2016-08-18 6:06 GMT+02:00 Hugh Brock : > On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: >> >> Hi, >> >> In a follow up to my previous post about image building in tripleo/RDO, >> do we currently have a timeline, process, or anything else regarding >> when rebuilt tripleo images are rebuilt and uploaded to >> buildlogs.centos.org? E.g. 
if you look at the mitaka images >> >> >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ >> >> They were built back in April. Can we get something in place to have >> them updated more frequently? >> https://ci.centos.org/artifacts/rdo/images/ I believe that here undercloud.qcow2 has been built in way different from Standard TripleO. TripleO QS never/nowhere does # openstack overcloud image build --all Explained in details here https://bluejeans.com/s/a5ua/ Blue Jeans Network | Video Collaboration in the Cloud bluejeans.com Blue Jeans Network - Interoperable, Cloud-based, Affordable Video Conferencing Service Just in case snapshot is attached Thanks Boris >> Regards, >> >> Graeme >> >> -- >> Graeme Gillies >> Principal Systems Administrator >> Openstack Infrastructure >> Red Hat Australia >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > Graeme thanks for raising this. > > RDO folks, should we not be building images every time we promote? If we are > doing that, should we not be linking them more obviously? > > -Hugh > Seems like sync is broken. @KB; maybe, I'm missing some context but since John is in PTO, can you look at it? https://bugs.centos.org/view.php?id=10697 Regards, H. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2016-08-18 10-35-24.png Type: image/png Size: 432649 bytes Desc: Screenshot from 2016-08-18 10-35-24.png URL: From hguemar at fedoraproject.org Thu Aug 18 08:45:43 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 18 Aug 2016 10:45:43 +0200 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: <3f9ed6c8-03cf-8bf2-4c2c-d8a8fc7cb707@redhat.com> References: <3f9ed6c8-03cf-8bf2-4c2c-d8a8fc7cb707@redhat.com> Message-ID: 2016-08-18 9:42 GMT+02:00 Graeme Gillies : > On 18/08/16 17:14, Ha?kel wrote: >> 2016-08-18 6:06 GMT+02:00 Hugh Brock : >>> On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: >>>> >>>> Hi, >>>> >>>> In a follow up to my previous post about image building in tripleo/RDO, >>>> do we currently have a timeline, process, or anything else regarding >>>> when rebuilt tripleo images are rebuilt and uploaded to >>>> buildlogs.centos.org? E.g. if you look at the mitaka images >>>> >>>> >>>> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ >>>> >>>> They were built back in April. Can we get something in place to have >>>> them updated more frequently? >>>> >> >> https://ci.centos.org/artifacts/rdo/images/ > > If you look under the mitaka directory at this link, you will notice the > only directory there is delorean. > > I'm not talking about delorean, I'm not interested in that. I'm > interested in images built using the package manifests from > centos-release-openstack-mitaka and its dependencies (Ceph, qemu-ev, etc). 
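Pending proper rebuild automation, one low-effort way to keep a downloaded image current is to update it in place with libguestfs tools; a rough sketch, where the image name matches the usual overcloud artifact and should be adjusted to whatever was actually pulled from buildlogs:

    # Refresh a downloaded overcloud image against the repos already
    # configured inside it (the CBS mitaka repos it was built from).
    yum install -y libguestfs-tools
    virt-customize -a overcloud-full.qcow2 --update --selinux-relabel

This only papers over staleness for package updates; it does not help when the image contents or elements themselves need to change, which is why periodic rebuilds remain the real fix.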
> > The images at the link I gave originally are images that were built with > those CBS, non-delorean, repos > > Regards, > > Graeme > The thing is there's no automation to periodically rebuild and test those images. Regards, H. >> >>>> Regards, >>>> >>>> Graeme >>>> >>>> -- >>>> Graeme Gillies >>>> Principal Systems Administrator >>>> Openstack Infrastructure >>>> Red Hat Australia >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> Graeme thanks for raising this. >>> >>> RDO folks, should we not be building images every time we promote? If we are >>> doing that, should we not be linking them more obviously? >>> >>> -Hugh >>> >> >> Seems like sync is broken. >> >> @KB; maybe, I'm missing some context but since John is in PTO, can you >> look at it? >> https://bugs.centos.org/view.php?id=10697 >> >> Regards, >> H. >> >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia From pgsousa at gmail.com Thu Aug 18 08:55:03 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 18 Aug 2016 09:55:03 +0100 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: Message-ID: Hi Graeme what I usually do is to pickup those images then run "yum update" on them (I use virt-customize) to get the last updates from sig repos. Regards On Thu, Aug 18, 2016 at 4:47 AM, Graeme Gillies wrote: > Hi, > > In a follow up to my previous post about image building in tripleo/RDO, > do we currently have a timeline, process, or anything else regarding > when rebuilt tripleo images are rebuilt and uploaded to > buildlogs.centos.org? E.g. if you look at the mitaka images > > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_ > images/mitaka/cbs/ > > They were built back in April. Can we get something in place to have > them updated more frequently? > > Regards, > > Graeme > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mengxiandong at gmail.com Thu Aug 18 08:57:32 2016 From: mengxiandong at gmail.com (Xiandong Meng) Date: Thu, 18 Aug 2016 20:57:32 +1200 Subject: [rdo-list] source rpm repo for OpenStack Newton Dependencies In-Reply-To: References: Message-ID: Thank you. This is really the OpenStack packages, and I am looking for the source rpm of dependency packages, such as MariaDB, MongoDB etc for OpenStack-Newton release. Regards, Alex Meng mengxiandong at gmail.com On Thu, Aug 18, 2016 at 1:54 AM, Ha?kel wrote: > 2016-08-17 13:32 GMT+02:00 Xiandong Meng : > > OK, but i am looking for the OpenStack-Newton dependency source rpms. I > > could not find that on vault. > > > > Regards, > > > > Alex Meng > > mengxiandong at gmail.com > > > > We haven't imported Newton on CBS. 
> We only have DLRN builds (source rpms are in the same directory): > https://trunk.rdoproject.org/centos7/current/ > > Regards, > H. > > > On Wed, Aug 17, 2016 at 9:30 PM, Ha?kel > wrote: > >> > >> 2016-08-16 10:03 GMT+02:00 Xiandong Meng : > >> > I can find the binary dependency rpms at > >> > http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-newton/ , > >> > but > >> > where can i find the corresponding source rpm repo? > >> > > >> > Regards, > >> > > >> > Alex Meng > >> > mengxiandong at gmail.com > >> > > >> > >> They're hosted on the vault: > >> http://vault.centos.org/7.2.1511/cloud/Source/openstack-mitaka/ > >> > >> Regards, > >> H. > >> > >> > _______________________________________________ > >> > rdo-list mailing list > >> > rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Thu Aug 18 09:05:11 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Thu, 18 Aug 2016 19:05:11 +1000 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: <3f9ed6c8-03cf-8bf2-4c2c-d8a8fc7cb707@redhat.com> Message-ID: On 18/08/16 18:45, Ha?kel wrote: > 2016-08-18 9:42 GMT+02:00 Graeme Gillies : >> On 18/08/16 17:14, Ha?kel wrote: >>> 2016-08-18 6:06 GMT+02:00 Hugh Brock : >>>> On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: >>>>> >>>>> Hi, >>>>> >>>>> In a follow up to my previous post about image building in tripleo/RDO, >>>>> do we currently have a timeline, process, or anything else regarding >>>>> when rebuilt tripleo images are rebuilt and uploaded to >>>>> buildlogs.centos.org? E.g. if you look at the mitaka images >>>>> >>>>> >>>>> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ >>>>> >>>>> They were built back in April. Can we get something in place to have >>>>> them updated more frequently? >>>>> >>> >>> https://ci.centos.org/artifacts/rdo/images/ >> >> If you look under the mitaka directory at this link, you will notice the >> only directory there is delorean. >> >> I'm not talking about delorean, I'm not interested in that. I'm >> interested in images built using the package manifests from >> centos-release-openstack-mitaka and its dependencies (Ceph, qemu-ev, etc). >> >> The images at the link I gave originally are images that were built with >> those CBS, non-delorean, repos >> >> Regards, >> >> Graeme >> > > The thing is there's no automation to periodically rebuild and test > those images. > > Regards, > H. Ok no problem, now we have identified that, how can we go about getting some automation to do this? It would ideally happen everytime we ship a new package in those official channels, but failing, that, could we do this periodically? Regards, Graeme > >>> >>>>> Regards, >>>>> >>>>> Graeme >>>>> >>>>> -- >>>>> Graeme Gillies >>>>> Principal Systems Administrator >>>>> Openstack Infrastructure >>>>> Red Hat Australia >>>>> >>>>> _______________________________________________ >>>>> rdo-list mailing list >>>>> rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>>> Graeme thanks for raising this. >>>> >>>> RDO folks, should we not be building images every time we promote? If we are >>>> doing that, should we not be linking them more obviously? >>>> >>>> -Hugh >>>> >>> >>> Seems like sync is broken. 
>>> >>> @KB; maybe, I'm missing some context but since John is in PTO, can you >>> look at it? >>> https://bugs.centos.org/view.php?id=10697 >>> >>> Regards, >>> H. >>> >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> -- >> Graeme Gillies >> Principal Systems Administrator >> Openstack Infrastructure >> Red Hat Australia -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From amoralej at redhat.com Thu Aug 18 10:22:21 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 18 Aug 2016 12:22:21 +0200 Subject: [rdo-list] source rpm repo for OpenStack Newton Dependencies In-Reply-To: References: Message-ID: Hi Alex, I'm not sure if srpms are published in a yum repo before publish as a stable repo in mirror.centos.org but you can access the srpms directly in cbs website, as in http://cbs.centos.org/koji/buildinfo?buildID=10246 or download it with cbs CLI: cbs download-build -a src mariadb-10.1.12-4.el7.src.rpm I hope this is helpful. Best regards, Alfredo On Thu, Aug 18, 2016 at 10:57 AM, Xiandong Meng wrote: > Thank you. This is really the OpenStack packages, and I am looking for the > source rpm of dependency packages, such as MariaDB, MongoDB etc for > OpenStack-Newton release. > > > Regards, > > Alex Meng > mengxiandong at gmail.com > > On Thu, Aug 18, 2016 at 1:54 AM, Ha?kel wrote: >> >> 2016-08-17 13:32 GMT+02:00 Xiandong Meng : >> > OK, but i am looking for the OpenStack-Newton dependency source rpms. I >> > could not find that on vault. >> > >> > Regards, >> > >> > Alex Meng >> > mengxiandong at gmail.com >> > >> >> We haven't imported Newton on CBS. >> We only have DLRN builds (source rpms are in the same directory): >> https://trunk.rdoproject.org/centos7/current/ >> >> Regards, >> H. >> >> > On Wed, Aug 17, 2016 at 9:30 PM, Ha?kel >> > wrote: >> >> >> >> 2016-08-16 10:03 GMT+02:00 Xiandong Meng : >> >> > I can find the binary dependency rpms at >> >> > http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-newton/ , >> >> > but >> >> > where can i find the corresponding source rpm repo? >> >> > >> >> > Regards, >> >> > >> >> > Alex Meng >> >> > mengxiandong at gmail.com >> >> > >> >> >> >> They're hosted on the vault: >> >> http://vault.centos.org/7.2.1511/cloud/Source/openstack-mitaka/ >> >> >> >> Regards, >> >> H. 
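The cbs approach above generalizes to the rest of the Cloud SIG dependency builds, assuming the 'cbs' koji client from centos-packager is installed and configured; the tag name below is an assumption based on the SIG's cloud7-openstack-<release>-<phase> naming and may need adjusting.

    # Latest builds tagged for the Mitaka release tag, filtered to the
    # dependencies of interest (tag name is an assumption).
    cbs list-tagged cloud7-openstack-mitaka-release --latest | grep -Ei 'mariadb|mongo'

    # Fetch only the source RPM for a given build NVR.
    cbs download-build -a src mariadb-10.1.12-4.el7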
>> >> >> >> > _______________________________________________ >> >> > rdo-list mailing list >> >> > rdo-list at redhat.com >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From bderzhavets at hotmail.com Thu Aug 18 10:53:12 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 18 Aug 2016 10:53:12 +0000 Subject: [rdo-list] source rpm repo for OpenStack Newton Dependencies In-Reply-To: References: , Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Alfredo Moralejo Alonso Sent: Thursday, August 18, 2016 6:22 AM To: Xiandong Meng Cc: rdo-list Subject: Re: [rdo-list] source rpm repo for OpenStack Newton Dependencies Hi Alex, I'm not sure if srpms are published in a yum repo before publish as a stable repo in mirror.centos.org but you can access the srpms directly in cbs website, as in http://cbs.centos.org/koji/buildinfo?buildID=10246 or download it with cbs CLI: >cbs download-build -a src mariadb-10.1.12-4.el7.src.rpm Status in TripleO QuickStart [root at overcloud-controller-0 ~]# sudo mysql -V mysql Ver 15.1 Distrib 10.1.12-MariaDB, for Linux (x86_64) using EditLine wrapper Boris. >I hope this is helpful. Best regards, Alfredo On Thu, Aug 18, 2016 at 10:57 AM, Xiandong Meng wrote: > Thank you. This is really the OpenStack packages, and I am looking for the > source rpm of dependency packages, such as MariaDB, MongoDB etc for > OpenStack-Newton release. > > > Regards, > > Alex Meng > mengxiandong at gmail.com > > On Thu, Aug 18, 2016 at 1:54 AM, Ha?kel wrote: >> >> 2016-08-17 13:32 GMT+02:00 Xiandong Meng : >> > OK, but i am looking for the OpenStack-Newton dependency source rpms. I >> > could not find that on vault. >> > >> > Regards, >> > >> > Alex Meng >> > mengxiandong at gmail.com >> > >> >> We haven't imported Newton on CBS. >> We only have DLRN builds (source rpms are in the same directory): >> https://trunk.rdoproject.org/centos7/current/ >> >> Regards, >> H. >> >> > On Wed, Aug 17, 2016 at 9:30 PM, Ha?kel >> > wrote: >> >> >> >> 2016-08-16 10:03 GMT+02:00 Xiandong Meng : >> >> > I can find the binary dependency rpms at >> >> > http://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-newton/ , >> >> > but >> >> > where can i find the corresponding source rpm repo? >> >> > >> >> > Regards, >> >> > >> >> > Alex Meng >> >> > mengxiandong at gmail.com >> >> > >> >> >> >> They're hosted on the vault: >> >> http://vault.centos.org/7.2.1511/cloud/Source/openstack-mitaka/ >> >> >> >> Regards, >> >> H. >> >> >> >> > _______________________________________________ >> >> > rdo-list mailing list >> >> > rdo-list at redhat.com >> >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> > >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rbowen at redhat.com Thu Aug 18 17:01:34 2016 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 18 Aug 2016 13:01:34 -0400 Subject: [rdo-list] CentOS Cloud SIG Message-ID: <5802b7bc-61f7-e67c-3138-e1b01b9da315@redhat.com> To quote from yesterday's RDO IRC meeting: We have basically not had a CentOS Cloud SIG meeting for 2 months. Part of this is because of travel/summer/whatever. I also have a standing conflict with that meeting, which I'm trying to fix. But more than that, I think it's because the Cloud SIG is just a rehash of this meeting, because only RDO is participating. On the one hand, we don't really accomplish much in that meeting when we do have it. On the other hand, it's a way to get RDO in front of another audience. So, I guess I'm asking if people still think it's valuable, and will try to carve out a little time for this each week. Or as was suggested in the meeting, should we attempt to merge the two meetings in some meaningful way, until such time as other Cloud Software communities see a benefit in participating? -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From whayutin at redhat.com Thu Aug 18 20:00:43 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 18 Aug 2016 16:00:43 -0400 Subject: [rdo-list] tripleo-quickstart, undercloud/overcloud-post roles removal and quickstart-role-requirements deprecation Message-ID: Greetings, In order to replace instack-virt-setup [1] the undercloud-post and overcloud roles native to quickstart will be removed. As to not impact current users of TripleO-Quickstart these roles will be moved w/ git history into ansible-role-tripleo-overcloud and ansible-role-tripleo-undercloud-post. This was covered in the latest RDO-Infra meeting [3]. Moving the native quickstart roles may impact the current users of these roles. To prevent anything from breaking we will be pinning the undercloud-post and overcloud roles to the latest known good commit [2]. We'll need [2] merged immediately to begin this work. I will keep the list updated, let me know if anyone has a problem. Thanks! [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart [2] https://review.openstack.org/#/c/356669/ [3] https://review.rdoproject.org/etherpad/p/rdo-infra-scrum lines 58 - 69 -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Aug 19 23:52:00 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 19 Aug 2016 19:52:00 -0400 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: <3f9ed6c8-03cf-8bf2-4c2c-d8a8fc7cb707@redhat.com> Message-ID: We would need to setup yet another pipeline. I can go after that if there is a need and it sounds like there is. I'll reduce the cadence on some of the other jobs to compensate. We are already chewing up quite a bit of the ci.centos.org infra, KB we may need to sync up regarding storage and network bandwidth. I'll wait for KB's ack before proceeding. 
Thanks Sent from my mobile On Aug 18, 2016 05:05, "Graeme Gillies" wrote: On 18/08/16 18:45, Ha?kel wrote: > 2016-08-18 9:42 GMT+02:00 Graeme Gillies : >> On 18/08/16 17:14, Ha?kel wrote: >>> 2016-08-18 6:06 GMT+02:00 Hugh Brock : >>>> On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: >>>>> >>>>> Hi, >>>>> >>>>> In a follow up to my previous post about image building in tripleo/RDO, >>>>> do we currently have a timeline, process, or anything else regarding >>>>> when rebuilt tripleo images are rebuilt and uploaded to >>>>> buildlogs.centos.org? E.g. if you look at the mitaka images >>>>> >>>>> >>>>> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_ images/mitaka/cbs/ >>>>> >>>>> They were built back in April. Can we get something in place to have >>>>> them updated more frequently? >>>>> >>> >>> https://ci.centos.org/artifacts/rdo/images/ >> >> If you look under the mitaka directory at this link, you will notice the >> only directory there is delorean. >> >> I'm not talking about delorean, I'm not interested in that. I'm >> interested in images built using the package manifests from >> centos-release-openstack-mitaka and its dependencies (Ceph, qemu-ev, etc). >> >> The images at the link I gave originally are images that were built with >> those CBS, non-delorean, repos >> >> Regards, >> >> Graeme >> > > The thing is there's no automation to periodically rebuild and test > those images. > > Regards, > H. Ok no problem, now we have identified that, how can we go about getting some automation to do this? It would ideally happen everytime we ship a new package in those official channels, but failing, that, could we do this periodically? Regards, Graeme > >>> >>>>> Regards, >>>>> >>>>> Graeme >>>>> >>>>> -- >>>>> Graeme Gillies >>>>> Principal Systems Administrator >>>>> Openstack Infrastructure >>>>> Red Hat Australia >>>>> >>>>> _______________________________________________ >>>>> rdo-list mailing list >>>>> rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>>> Graeme thanks for raising this. >>>> >>>> RDO folks, should we not be building images every time we promote? If we are >>>> doing that, should we not be linking them more obviously? >>>> >>>> -Hugh >>>> >>> >>> Seems like sync is broken. >>> >>> @KB; maybe, I'm missing some context but since John is in PTO, can you >>> look at it? >>> https://bugs.centos.org/view.php?id=10697 >>> >>> Regards, >>> H. >>> >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> -- >> Graeme Gillies >> Principal Systems Administrator >> Openstack Infrastructure >> Red Hat Australia -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Fri Aug 19 23:55:59 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 19 Aug 2016 19:55:59 -0400 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: Message-ID: Sent from my mobile On Aug 18, 2016 03:59, "Boris Derzhavets" wrote: > > > > > ________________________________ > From: rdo-list-bounces at redhat.com on behalf of Ha?kel > Sent: Thursday, August 18, 2016 3:14 AM > To: Hugh Brock > Cc: Karanbir Singh; rdo-list > Subject: Re: [rdo-list] Updated mitaka stable images for tripleo > > 2016-08-18 6:06 GMT+02:00 Hugh Brock : > > On Aug 18, 2016 5:48 AM, "Graeme Gillies" wrote: > >> > >> Hi, > >> > >> In a follow up to my previous post about image building in tripleo/RDO, > >> do we currently have a timeline, process, or anything else regarding > >> when rebuilt tripleo images are rebuilt and uploaded to > >> buildlogs.centos.org? E.g. if you look at the mitaka images > >> > >> > >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ > >> > >> They were built back in April. Can we get something in place to have > >> them updated more frequently? > >> > > https://ci.centos.org/artifacts/rdo/images/ > > I believe that here undercloud.qcow2 has been built in way different > from Standard TripleO. TripleO QS never/nowhere does > # openstack overcloud image build --all > > Explained in details here > https://bluejeans.com/s/a5ua/ > Blue Jeans Network | Video Collaboration in the Cloud > bluejeans.com > Blue Jeans Network - Interoperable, Cloud-based, Affordable Video Conferencing Service > > Just in case snapshot is attached Fyi Unfortunately that is already out of date. Tripleo-ci no longer creates an undercloud image. I'll send more details later. > > Thanks > Boris > > > > >> Regards, > >> > >> Graeme > >> > >> -- > >> Graeme Gillies > >> Principal Systems Administrator > >> Openstack Infrastructure > >> Red Hat Australia > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > Graeme thanks for raising this. > > > > RDO folks, should we not be building images every time we promote? If we are > > doing that, should we not be linking them more obviously? > > > > -Hugh > > > > Seems like sync is broken. > > @KB; maybe, I'm missing some context but since John is in PTO, can you > look at it? > https://bugs.centos.org/view.php?id=10697 > > Regards, > H. > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bderzhavets at hotmail.com Sat Aug 20 06:29:44 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 20 Aug 2016 06:29:44 +0000 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: , Message-ID: ________________________________ From: Wesley Hayutin Sent: Friday, August 19, 2016 7:55 PM To: Boris Derzhavets Cc: Karanbir Singh; Ha?kel; rdo-list; Hugh Brock Subject: Re: [rdo-list] Updated mitaka stable images for tripleo Sent from my mobile On Aug 18, 2016 03:59, "Boris Derzhavets" > wrote: > > > > > ________________________________ > From: rdo-list-bounces at redhat.com > on behalf of Ha?kel > > Sent: Thursday, August 18, 2016 3:14 AM > To: Hugh Brock > Cc: Karanbir Singh; rdo-list > Subject: Re: [rdo-list] Updated mitaka stable images for tripleo > > 2016-08-18 6:06 GMT+02:00 Hugh Brock >: > > On Aug 18, 2016 5:48 AM, "Graeme Gillies" > wrote: > >> > >> Hi, > >> > >> In a follow up to my previous post about image building in tripleo/RDO, > >> do we currently have a timeline, process, or anything else regarding > >> when rebuilt tripleo images are rebuilt and uploaded to > >> buildlogs.centos.org? E.g. if you look at the mitaka images > >> > >> > >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ > >> > >> They were built back in April. Can we get something in place to have > >> them updated more frequently? > >> > > https://ci.centos.org/artifacts/rdo/images/ > > I believe that here undercloud.qcow2 has been built in way different > from Standard TripleO. TripleO QS never/nowhere does > # openstack overcloud image build --all > > Explained in details here > https://bluejeans.com/s/a5ua/ > Blue Jeans Network | Video Collaboration in the Cloud > bluejeans.com > Blue Jeans Network - Interoperable, Cloud-based, Affordable Video Conferencing Service > > Just in case snapshot is attached Fyi Unfortunately that is already out of date. Tripleo-ci no longer creates an undercloud image. I'll send more details later. > Regarding pushing TripleO QuickStart to bare metal :- Does it prevent from setting up undercloud as VM on bare metal node, so that QuickStart CI via delorean trunks could be utilized in the same way as in virtual environment ? Thank you very much for responding. Boris. > > > Thanks > Boris > > > > >> Regards, > >> > >> Graeme > >> > >> -- > >> Graeme Gillies > >> Principal Systems Administrator > >> Openstack Infrastructure > >> Red Hat Australia > >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > Graeme thanks for raising this. > > > > RDO folks, should we not be building images every time we promote? If we are > > doing that, should we not be linking them more obviously? > > > > -Hugh > > > > Seems like sync is broken. > > @KB; maybe, I'm missing some context but since John is in PTO, can you > look at it? > https://bugs.centos.org/view.php?id=10697 > > Regards, > H. 
> > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Mon Aug 22 00:30:37 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Mon, 22 Aug 2016 10:30:37 +1000 Subject: [rdo-list] Updated mitaka stable images for tripleo In-Reply-To: References: <3f9ed6c8-03cf-8bf2-4c2c-d8a8fc7cb707@redhat.com> Message-ID: <305921d9-b178-955d-c1c1-af62fa38a2ea@redhat.com> On 20/08/16 09:52, Wesley Hayutin wrote: > We would need to setup yet another pipeline. I can go after that if > there is a need and it sounds like there is. I'll reduce the cadence on > some of the other jobs to compensate. Yes I think it would be much appreciated if such a pipeline could be setup. Even just to rebuild once a month would be great. > > We are already chewing up quite a bit of the ci.centos.org > infra, KB we may need to sync up regarding > storage and network bandwidth. Understood, hopefully as we get RDO Cloud online that will help open up the resourcing bottleneck. Regards, Graeme > > I'll wait for KB's ack before proceeding. > > Thanks > > Sent from my mobile > > > On Aug 18, 2016 05:05, "Graeme Gillies" > wrote: > > On 18/08/16 18:45, Ha?kel wrote: > > 2016-08-18 9:42 GMT+02:00 Graeme Gillies >: > >> On 18/08/16 17:14, Ha?kel wrote: > >>> 2016-08-18 6:06 GMT+02:00 Hugh Brock >: > >>>> On Aug 18, 2016 5:48 AM, "Graeme Gillies" > wrote: > >>>>> > >>>>> Hi, > >>>>> > >>>>> In a follow up to my previous post about image building in > tripleo/RDO, > >>>>> do we currently have a timeline, process, or anything else > regarding > >>>>> when rebuilt tripleo images are rebuilt and uploaded to > >>>>> buildlogs.centos.org ? E.g. if > you look at the mitaka images > >>>>> > >>>>> > >>>>> > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/ > > >>>>> > >>>>> They were built back in April. Can we get something in place > to have > >>>>> them updated more frequently? > >>>>> > >>> > >>> https://ci.centos.org/artifacts/rdo/images/ > > >> > >> If you look under the mitaka directory at this link, you will > notice the > >> only directory there is delorean. > >> > >> I'm not talking about delorean, I'm not interested in that. I'm > >> interested in images built using the package manifests from > >> centos-release-openstack-mitaka and its dependencies (Ceph, > qemu-ev, etc). > >> > >> The images at the link I gave originally are images that were > built with > >> those CBS, non-delorean, repos > >> > >> Regards, > >> > >> Graeme > >> > > > > The thing is there's no automation to periodically rebuild and test > > those images. > > > > Regards, > > H. > > Ok no problem, now we have identified that, how can we go about getting > some automation to do this? It would ideally happen everytime we ship a > new package in those official channels, but failing, that, could we do > this periodically? 
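For what it's worth, a periodic rebuild job of the kind being asked for here would boil down to something like the following on a CentOS 7 host (a rough sketch against the stable mitaka repos rather than delorean; the upload step assumes the images are destined for an undercloud):

    sudo yum -y install centos-release-openstack-mitaka
    sudo yum -y install python-tripleoclient
    # build overcloud-full and the agent images from the CBS packages
    openstack overcloud image build --all
    # push the resulting images into glance on the undercloud
    openstack overcloud image upload --image-path .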
> > Regards, > > Graeme > > > > >>> > >>>>> Regards, > >>>>> > >>>>> Graeme > >>>>> > >>>>> -- > >>>>> Graeme Gillies > >>>>> Principal Systems Administrator > >>>>> Openstack Infrastructure > >>>>> Red Hat Australia > >>>>> > >>>>> _______________________________________________ > >>>>> rdo-list mailing list > >>>>> rdo-list at redhat.com > >>>>> https://www.redhat.com/mailman/listinfo/rdo-list > > >>>>> > >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > > >>>> > >>>> Graeme thanks for raising this. > >>>> > >>>> RDO folks, should we not be building images every time we > promote? If we are > >>>> doing that, should we not be linking them more obviously? > >>>> > >>>> -Hugh > >>>> > >>> > >>> Seems like sync is broken. > >>> > >>> @KB; maybe, I'm missing some context but since John is in PTO, > can you > >>> look at it? > >>> https://bugs.centos.org/view.php?id=10697 > > >>> > >>> Regards, > >>> H. > >>> > >>>> > >>>> _______________________________________________ > >>>> rdo-list mailing list > >>>> rdo-list at redhat.com > >>>> https://www.redhat.com/mailman/listinfo/rdo-list > > >>>> > >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > > >> > >> > >> -- > >> Graeme Gillies > >> Principal Systems Administrator > >> Openstack Infrastructure > >> Red Hat Australia > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From rasca at redhat.com Mon Aug 22 07:51:36 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Mon, 22 Aug 2016 09:51:36 +0200 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: Message-ID: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> Hi everybody, sorry for the late response but I was on PTO. I don't understand the meaning of the cleanup commands, but maybe it's just because I'm not getting the whole picture. I guess we're hitting a version problem here: if you deploy the actual master (i.e. with quickstart) you'll get the environment with the constraints limited to the core services because of [1] and [2] (so none of the mentioned services exists in the cluster configuration). Hope this helps, [1] https://review.openstack.org/#/c/314208/ [2] https://review.openstack.org/#/c/342650/ -- Raoul Scarazzini rasca at redhat.com On 08/08/2016 14:43, Wesley Hayutin wrote: > Attila, Raoul > Can you please investigate this issue. > > Thanks! 
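A quick way to see which of the two behaviours a given deployment has is to look at what pacemaker is actually managing (a sketch; run on any controller):

    # resources currently managed by the cluster
    sudo pcs status resources
    # ordering/colocation constraints left after the changes referenced above
    sudo pcs constraint show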
> > On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets > > wrote: > > TripleO HA Controller been installed via instack-virt-setup has PCS > CLI like :- > > pcs resource cleanup neutron-server-clone > pcs resource cleanup openstack-nova-api-clone > pcs resource cleanup openstack-nova-consoleauth-clone > pcs resource cleanup openstack-heat-engine-clone > pcs resource cleanup openstack-cinder-api-clone > pcs resource cleanup openstack-glance-registry-clone > pcs resource cleanup httpd-clone > > been working as expected on bare metal > > > Same cluster been setup via QuickStart (Virtual ENV) after bouncing > one of controllers > > included in cluster ignores PCS CLI at least via my experience ( > which is obviously limited > > either format of particular commands is wrong for QuickStart ) > > I believe that dropping (complete replacing ) instack-virt-setup is > not a good idea in general. Personally, I believe that like in case > with packstack it is always good > > to have VENV configuration been tested before going to bare metal > deployment. > > My major concern is maintenance and disaster recovery tests , rather > then deployment itself . What good is for me TripleO Quickstart > running on bare metal if I cannot replace > > crashed VM Controller just been limited to Services HA ( all 3 > Cluster VMs running on single > > bare metal node ) > > > Thanks > > Boris. > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From bderzhavets at hotmail.com Mon Aug 22 11:29:38 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 22 Aug 2016 11:29:38 +0000 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> References: , <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> Message-ID: ________________________________ From: Raoul Scarazzini Sent: Monday, August 22, 2016 3:51 AM To: Wesley Hayutin; Boris Derzhavets; Attila Darazs Cc: David Moreau Simard; rdo-list Subject: Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI Hi everybody, sorry for the late response but I was on PTO. I don't understand the meaning of the cleanup commands, but maybe it's just because I'm not getting the whole picture. > I have to confirm that fault was mine PCS CLI is working on TripeO QuickStart but requires pcs cluster restart on particular node which went down via ` nova stop controller-X` and was brought up via `nova start controller-X` Details here :- http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html VENV been set up with instack-virt-setup doesn't require ( on bounced Controller node ) # pcs cluster stop # pcs cluster start Before issuing start.sh #!/bash -x pcs resource cleanup rabbitmq-clone ; sleep 10 pcs resource cleanup neutron-server-clone ; sleep 10 pcs resource cleanup openstack-nova-api-clone ; sleep 10 pcs resource cleanup openstack-nova-consoleauth-clone ; sleep 10 pcs resource cleanup openstack-heat-engine-clone ; sleep 10 pcs resource cleanup openstack-cinder-api-clone ; sleep 10 pcs resource cleanup openstack-glance-registry-clone ; sleep 10 pcs resource cleanup httpd-clone ; # . 
./start.sh In worse case scenario I have to issue start.sh twice from different Controllers pcs resource cleanup openstack-nova-api-clone attempts to start corresponding service , which is down at the moment. In fact two cleanups above start all Nova Services && one neutron cleanup starts all neutron agents as well. I was also kept track of Galera DB via `clustercheck` Thanks. Boris > I guess we're hitting a version problem here: if you deploy the actual master (i.e. with quickstart) you'll get the environment with the constraints limited to the core services because of [1] and [2] (so none of the mentioned services exists in the cluster configuration). Hope this helps, [1] https://review.openstack.org/#/c/314208/ [2] https://review.openstack.org/#/c/342650/ -- Raoul Scarazzini rasca at redhat.com On 08/08/2016 14:43, Wesley Hayutin wrote: > Attila, Raoul > Can you please investigate this issue. > > Thanks! > > On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets > > wrote: > > TripleO HA Controller been installed via instack-virt-setup has PCS > CLI like :- > > pcs resource cleanup neutron-server-clone > pcs resource cleanup openstack-nova-api-clone > pcs resource cleanup openstack-nova-consoleauth-clone > pcs resource cleanup openstack-heat-engine-clone > pcs resource cleanup openstack-cinder-api-clone > pcs resource cleanup openstack-glance-registry-clone > pcs resource cleanup httpd-clone > > been working as expected on bare metal > > > Same cluster been setup via QuickStart (Virtual ENV) after bouncing > one of controllers > > included in cluster ignores PCS CLI at least via my experience ( > which is obviously limited > > either format of particular commands is wrong for QuickStart ) > > I believe that dropping (complete replacing ) instack-virt-setup is > not a good idea in general. Personally, I believe that like in case > with packstack it is always good > > to have VENV configuration been tested before going to bare metal > deployment. > > My major concern is maintenance and disaster recovery tests , rather > then deployment itself . What good is for me TripleO Quickstart > running on bare metal if I cannot replace > > crashed VM Controller just been limited to Services HA ( all 3 > Cluster VMs running on single > > bare metal node ) > > > Thanks > > Boris. > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Mon Aug 22 11:49:42 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 22 Aug 2016 11:49:42 +0000 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: , <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com>, Message-ID: Sorry , for my English I was also keeping (not kept ) track on Galera DB via `clustercheck` either I just kept. 
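The Galera check referred to here is the usual clustercheck probe from the HA deployment; roughly (a sketch, assuming the standard clustercheck script and local mysql credentials on the controller):

    # returns HTTP 200 when the local node is synced with the cluster
    clustercheck
    # or query the wsrep state directly
    mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"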
Boris ________________________________ From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets Sent: Monday, August 22, 2016 7:29 AM To: Raoul Scarazzini; Wesley Hayutin; Attila Darazs Cc: rdo-list Subject: Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI ________________________________ From: Raoul Scarazzini Sent: Monday, August 22, 2016 3:51 AM To: Wesley Hayutin; Boris Derzhavets; Attila Darazs Cc: David Moreau Simard; rdo-list Subject: Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI Hi everybody, sorry for the late response but I was on PTO. I don't understand the meaning of the cleanup commands, but maybe it's just because I'm not getting the whole picture. > I have to confirm that fault was mine PCS CLI is working on TripeO QuickStart but requires pcs cluster restart on particular node which went down via ` nova stop controller-X` and was brought up via `nova start controller-X` Details here :- http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html VENV been set up with instack-virt-setup doesn't require ( on bounced Controller node ) # pcs cluster stop # pcs cluster start Before issuing start.sh #!/bash -x pcs resource cleanup rabbitmq-clone ; sleep 10 pcs resource cleanup neutron-server-clone ; sleep 10 pcs resource cleanup openstack-nova-api-clone ; sleep 10 pcs resource cleanup openstack-nova-consoleauth-clone ; sleep 10 pcs resource cleanup openstack-heat-engine-clone ; sleep 10 pcs resource cleanup openstack-cinder-api-clone ; sleep 10 pcs resource cleanup openstack-glance-registry-clone ; sleep 10 pcs resource cleanup httpd-clone ; # . ./start.sh In worse case scenario I have to issue start.sh twice from different Controllers pcs resource cleanup openstack-nova-api-clone attempts to start corresponding service , which is down at the moment. In fact two cleanups above start all Nova Services && one neutron cleanup starts all neutron agents as well. I was also kept track of Galera DB via `clustercheck` Thanks. Boris > I guess we're hitting a version problem here: if you deploy the actual master (i.e. with quickstart) you'll get the environment with the constraints limited to the core services because of [1] and [2] (so none of the mentioned services exists in the cluster configuration). Hope this helps, [1] https://review.openstack.org/#/c/314208/ [2] https://review.openstack.org/#/c/342650/ -- Raoul Scarazzini rasca at redhat.com On 08/08/2016 14:43, Wesley Hayutin wrote: > Attila, Raoul > Can you please investigate this issue. > > Thanks! 
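Putting the steps from this thread together, the recovery sequence amounts to something like the following (a sketch only; the shebang in the quoted script is presumably meant to be /bin/bash, and the clone names are the ones listed above for the Mitaka-era HA profile):

    #!/bin/bash -x
    # on the controller that was bounced via `nova stop` / `nova start`
    pcs cluster stop
    pcs cluster start

    # then clear the failed actions so the clones are restarted
    for res in rabbitmq neutron-server openstack-nova-api \
               openstack-nova-consoleauth openstack-heat-engine \
               openstack-cinder-api openstack-glance-registry httpd; do
        pcs resource cleanup ${res}-clone
        sleep 10
    done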
> > On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets > > wrote: > > TripleO HA Controller been installed via instack-virt-setup has PCS > CLI like :- > > pcs resource cleanup neutron-server-clone > pcs resource cleanup openstack-nova-api-clone > pcs resource cleanup openstack-nova-consoleauth-clone > pcs resource cleanup openstack-heat-engine-clone > pcs resource cleanup openstack-cinder-api-clone > pcs resource cleanup openstack-glance-registry-clone > pcs resource cleanup httpd-clone > > been working as expected on bare metal > > > Same cluster been setup via QuickStart (Virtual ENV) after bouncing > one of controllers > > included in cluster ignores PCS CLI at least via my experience ( > which is obviously limited > > either format of particular commands is wrong for QuickStart ) > > I believe that dropping (complete replacing ) instack-virt-setup is > not a good idea in general. Personally, I believe that like in case > with packstack it is always good > > to have VENV configuration been tested before going to bare metal > deployment. > > My major concern is maintenance and disaster recovery tests , rather > then deployment itself . What good is for me TripleO Quickstart > running on bare metal if I cannot replace > > crashed VM Controller just been limited to Services HA ( all 3 > Cluster VMs running on single > > bare metal node ) > > > Thanks > > Boris. > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Aug 22 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 22 Aug 2016 15:00:03 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160822150003.0DC7F60A4003@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-08-24 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Aug 22 15:36:08 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 22 Aug 2016 11:36:08 -0400 Subject: [rdo-list] Unanswered RDO questions, ask.openstack.org Message-ID: We've got the unanswered list down to the lowest level yet, so it's a little less overwhelming now. Thanks, everyone! 
25 unanswered questions: "Parameter outiface failed on Firewall" during installation of openstack rdo on centos 7 https://ask.openstack.org/en/question/95657/parameter-outiface-failed-on-firewall-during-installation-of-openstack-rdo-on-centos-7/ Tags: rdo, devstack#mitaka multi nodes provider network ovs config https://ask.openstack.org/en/question/95423/multi-nodes-provider-network-ovs-config/ Tags: rdo, liberty-neutron Adding additional packages to an RDO installation https://ask.openstack.org/en/question/95380/adding-additional-packages-to-an-rdo-installation/ Tags: rdo, mistral RDO TripleO Mitaka HA Overcloud Failing https://ask.openstack.org/en/question/95249/rdo-tripleo-mitaka-ha-overcloud-failing/ Tags: mitaka, tripleo, overcloud, centos7 RDO - is there any fedora package newer than puppet-4.2.1-3.fc24.noarch.rpm https://ask.openstack.org/en/question/94969/rdo-is-there-any-fedora-package-newer-than-puppet-421-3fc24noarchrpm/ Tags: rdo, puppet, install-openstack OpenStack RDO mysqld 100% cpu https://ask.openstack.org/en/question/94961/openstack-rdo-mysqld-100-cpu/ Tags: openstack, mysqld, cpu Failed to set RDO repo on host-packstack-centOS-7 https://ask.openstack.org/en/question/94828/failed-to-set-rdo-repo-on-host-packstack-centos-7/ Tags: openstack-packstack, centos7, rdo how to deploy haskell-distributed in RDO? https://ask.openstack.org/en/question/94785/how-to-deploy-haskell-distributed-in-rdo/ Tags: rdo rdo tripleO liberty undercloud install failing https://ask.openstack.org/en/question/94023/rdo-tripleo-liberty-undercloud-install-failing/ Tags: rdo, rdo-manager, liberty, undercloud, instack AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ Tags: openstack, networking, aws ceilometer: I've installed openstack mitaka. but swift stops working when i configured the pipeline and ceilometer filter https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ Tags: ceilometer, openstack-swift, mitaka Keystone authentication: Failed to contact the endpoint. https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ Tags: keystone, authenticate, endpoint, murano Liberty RDO: stack resource topology icons are pink https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ Tags: resource, topology, dashboard, horizon, pink No handlers could be found for logger "oslo_config.cfg" while syncing the glance database https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ Tags: liberty, glance, install-openstack CentOS OpenStack - compute node can't talk https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ Tags: rdo How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ? 
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ Tags: rdo, liberty, swift, ha VM and container can't download anything from internet https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ Tags: rdo, neutron, network, connectivity Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ Tags: keyboard, map, keymap, vncproxy, novnc OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing openstack baremetal introspection internal server error https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ Tags: rdo, ironic-inspector, tripleo Installing openstack using packstack (rdo) failed https://ask.openstack.org/en/question/82473/installing-openstack-using-packstack-rdo-failed/ Tags: rdo, packstack, installation-error, keystone VMware Host Backend causes No valid host was found. Bug ??? https://ask.openstack.org/en/question/79738/vmware-host-backend-causes-no-valid-host-was-found-bug/ Tags: vmware, rdo Mutlinode Devstack with two interfaces https://ask.openstack.org/en/question/78615/mutlinode-devstack-with-two-interfaces/ Tags: devstack, vlan, openstack Overcloud deployment on VM fails as IP address from DHCP is not assigned https://ask.openstack.org/en/question/66272/overcloud-deployment-on-vm-fails-as-ip-address-from-dhcp-is-not-assigned/ Tags: overcloud_in_vm -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From lmadsen at redhat.com Mon Aug 22 20:35:45 2016 From: lmadsen at redhat.com (Leif Madsen) Date: Mon, 22 Aug 2016 16:35:45 -0400 Subject: [rdo-list] CentOS Cloud SIG In-Reply-To: <5802b7bc-61f7-e67c-3138-e1b01b9da315@redhat.com> References: <5802b7bc-61f7-e67c-3138-e1b01b9da315@redhat.com> Message-ID: On Thu, Aug 18, 2016 at 1:01 PM, Rich Bowen wrote: > To quote from yesterday's RDO IRC meeting: > > > > So, I guess I'm asking if people still think it's valuable, and will try > to carve out a little time for this each week. Or as was suggested in > the meeting, should we attempt to merge the two meetings in some > meaningful way, until such time as other Cloud Software communities see > a benefit in participating? > I'm never a fan of adding meetings for the sake of adding them. There are already so many. While I don't have much skin in the game, I'd vote for a merger of the two meetings in the current RDO Meeting timeslot. Thanks! Leif. -- Leif Madsen | Partner Engineer - NFV & CI NFV Partner Engineering Red Hat GPG: (D670F846) BEE0 336E 5406 42BA 6194 6831 B38A 291E D670 F846 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Mon Aug 22 20:44:25 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 22 Aug 2016 16:44:25 -0400 Subject: [rdo-list] Rebuild a newer version of python-crypto ? Message-ID: Hi, I'm noticing an issue in the review.rdo gate. 
We pip install zuul in the nodepool images (presumably for zuul-cloner) which pulls in Paramiko which pulls pycrypto: Collecting pycrypto!=2.4,<3.0,>=2.1 (from paramiko<2.0.0,>=1.8.0->zuul) It looks like if we install crypto from both pip and packages, we're bound to be running into issues like that: ==== Error: /Stage[main]/Glance/Package[openstack-glance]/ensure: change from absent to present failed: Execution of '/bin/yum -d 0 -e 0 -y install openstack-glance' returned 1: Error downloading packages: 1:openstack-glance-13.0.0-0.20160817133141.bfe4d91.el7.centos.noarch: [Errno 256] No more mirrors to try. Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-barbican-api' returned 1: Error unpacking rpm package python-crypto-2.6.1-1.el7.x86_64 error: unpacking of archive failed on file /usr/lib64/python2.7/site-packages/pycrypto-2.6.1-py2.7.egg-info: cpio: rename failed - Is a directory ==== Reproduced here [1] on a fresh CentOS install with: 1 yum -y install "@Development Tools" python-devel openssl-devel libffi-devel libxml2-devel python-setuptools python-crypto 2 yum -y remove python-crypto 3 easy_install pip 4 pip install pycrypto 5 yum -y install python-crypto I am unable to reproduce the issue on Fedora. CentOS extras (and RDO) has python-crypto-2.6.1-1.el7.centos.x86_64 (2014) while Fedora has python-crypto-2.6.1-10.fc24 (2016). Can we update the version we have in RDO ? Or is there some other root cause I'm missing ? [1]: http://pastebin.centos.org/52006/raw/ David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From dms at redhat.com Mon Aug 22 20:58:43 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 22 Aug 2016 16:58:43 -0400 Subject: [rdo-list] Scheduled maintenance of review.rdoproject.org: 2016-08-24 13:30 UTC In-Reply-To: <3795275d-4d49-02dd-68fd-e5ecc6665d5f@redhat.com> References: <3795275d-4d49-02dd-68fd-e5ecc6665d5f@redhat.com> Message-ID: As discussed off-thread, please postpone this maintenance for the time being. The update includes a forced account e-mail address synchronization with the e-mail address the user has in Github. This can disrupt users' workflow if their Gerrit and Github emails differ. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Mon, Aug 15, 2016 at 9:45 AM, Nicolas Hicher wrote: > Hello folks, > > We plan to upgrade review.rdoproject.org on 2016-08-24 13:30 UTC (next > Wednesday). The downtime will be about 1 hour approximately. > > This is a maintenance upgrade to the last stable version of Software > Factory 2.2.3, the changelog is: > > http://softwarefactory-project.io/r/gitweb?p=software-factory.git;a=blob_plain;f=CHANGELOG.md > > Regards, > Software Factory Team, on behalf of rdo-infra > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Mon Aug 22 21:18:59 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 22 Aug 2016 23:18:59 +0200 Subject: [rdo-list] Rebuild a newer version of python-crypto ? In-Reply-To: References: Message-ID: On 22/08/16 22:44, David Moreau Simard wrote: > Hi, > > I'm noticing an issue in the review.rdo gate. 
> We pip install zuul in the nodepool images (presumably for > zuul-cloner) which pulls in Paramiko which pulls pycrypto: > Collecting pycrypto!=2.4,<3.0,>=2.1 (from paramiko<2.0.0,>=1.8.0->zuul) > > It looks like if we install crypto from both pip and packages, we're > bound to be running into issues like that: > ==== > Error: /Stage[main]/Glance/Package[openstack-glance]/ensure: change > from absent to present failed: Execution of '/bin/yum -d 0 -e 0 -y > install openstack-glance' returned 1: Error downloading packages: > 1:openstack-glance-13.0.0-0.20160817133141.bfe4d91.el7.centos.noarch: > [Errno 256] No more mirrors to try. > Error: Execution of '/bin/yum -d 0 -e 0 -y install > openstack-barbican-api' returned 1: Error unpacking rpm package > python-crypto-2.6.1-1.el7.x86_64 ====>>>> > error: unpacking of archive failed on file > /usr/lib64/python2.7/site-packages/pycrypto-2.6.1-py2.7.egg-info: > cpio: rename failed - Is a directory > ==== > <<<<==== This is a known RPM/CPIO issue, and in this case, not a packaging bug. > Reproduced here [1] on a fresh CentOS install with: > 1 yum -y install "@Development Tools" python-devel openssl-devel > libffi-devel libxml2-devel python-setuptools python-crypto > 2 yum -y remove python-crypto > 3 easy_install pip > 4 pip install pycrypto > 5 yum -y install python-crypto > *fresh* as in no Cloud SIG repository? In that case, this is totally expected behaviour because of setuptools versions difference. Not mentionning, that mixing packages and pip-installed modules in system site-packages is not supported. Please check setuptools versions, note that we ship a more recent setuptools in Cloud SIG repositories. > I am unable to reproduce the issue on Fedora. > CentOS extras (and RDO) has python-crypto-2.6.1-1.el7.centos.x86_64 > (2014) while Fedora has python-crypto-2.6.1-10.fc24 (2016). > $ rpm -qi python2-crypto Name : python2-crypto Version : 2.6.1 Release : 10.fc24 Architecture: x86_64 Install Date: Wed 09 Mar 2016 13:16:13 CET $ sudo pip install pycrypto Requirement already satisfied (use --upgrade to upgrade): pycrypto in /usr/lib64/python2.7/site-packages pip is able to detect that pycrypto is already installed from package. > Can we update the version we have in RDO ? Or is there some other root > cause I'm missing ? > For the record, I also diffed 2.6.1-1 and 2.6.1-10 and the main changes are: * python3 port (irrelevant for us) * unbundling libtomcrypt which may be a good reason for rebuilding it in RDO but irrelevant to your issue. But again, it won't solve that particular issue. Better fix would be using virtualenv for zuul-cloner. Regards, H. > [1]: http://pastebin.centos.org/52006/raw/ > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > From apevec at redhat.com Mon Aug 22 21:29:59 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 22 Aug 2016 23:29:59 +0200 Subject: [rdo-list] Rebuild a newer version of python-crypto ? In-Reply-To: References: Message-ID: > But again, it won't solve that particular issue. Better fix would be > using virtualenv for zuul-cloner. 
or RPM package zuul-cloner From hguemar at fedoraproject.org Mon Aug 22 21:36:06 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 22 Aug 2016 23:36:06 +0200 Subject: [rdo-list] CentOS Cloud SIG In-Reply-To: <5802b7bc-61f7-e67c-3138-e1b01b9da315@redhat.com> References: <5802b7bc-61f7-e67c-3138-e1b01b9da315@redhat.com> Message-ID: 2016-08-18 19:01 GMT+02:00 Rich Bowen : > To quote from yesterday's RDO IRC meeting: > > We have basically not had a CentOS Cloud SIG meeting for 2 months. > Part of this is because of travel/summer/whatever. I also have a > standing conflict with that meeting, which I'm trying to fix. > But more than that, I think it's because the Cloud SIG is just a rehash > of this meeting, because only RDO is participating. > > On the one hand, we don't really accomplish much in that meeting when we > do have it. On the other hand, it's a way to get RDO in front of another > audience. > > So, I guess I'm asking if people still think it's valuable, and will try > to carve out a little time for this each week. Or as was suggested in > the meeting, should we attempt to merge the two meetings in some > meaningful way, until such time as other Cloud Software communities see > a benefit in participating? > Currently, Cloud SIG is de-facto maintained by RDO folks, CentOS Core team + Kushal. I agree that re-hashing meetings is wasting valuable time for many people, so I agree with your proposal. But we should continue looking at collaborating with other Cloud communities (like NFV) One twist: we should copy centos devel list in our meeting minutes, if we merge the meetings. Regards, H. > -- > Rich Bowen - rbowen at redhat.com > RDO Community Liaison > http://rdoproject.org > @RDOCommunity > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dms at redhat.com Mon Aug 22 21:41:34 2016 From: dms at redhat.com (David Moreau Simard) Date: Mon, 22 Aug 2016 17:41:34 -0400 Subject: [rdo-list] Rebuild a newer version of python-crypto ? In-Reply-To: References: Message-ID: It looks like installing python-crypto before doing the "pip install zuul" satisfies it's dependency: Requirement already satisfied (use --upgrade to upgrade): pycrypto!=2.4,<3.0,>=2.1 in /usr/lib64/python2.7/site-packages (from paramiko<2.0.0,>=1.8.0->zuul) I submitted a fix to software-factory [1] and backported to review.rdo [2]. > or RPM package zuul-cloner Or package Zuul itself !? :) [1]: http://softwarefactory-project.io/r/#/c/4626/ [2]: https://review.rdoproject.org/r/#/c/1904/ David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Mon, Aug 22, 2016 at 5:18 PM, Ha?kel wrote: > On 22/08/16 22:44, David Moreau Simard wrote: >> Hi, >> >> I'm noticing an issue in the review.rdo gate. 
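Concretely, the two fixes discussed in this thread look roughly like this when preparing a nodepool image (a sketch; paths are illustrative):

    # workaround used in the gate fix: install the rpm first so pip treats
    # pycrypto as already satisfied instead of unpacking over the rpm copy
    yum -y install python-crypto
    pip install zuul

    # alternative suggested above: keep zuul out of system site-packages
    virtualenv /opt/zuul
    /opt/zuul/bin/pip install zuul
    ln -s /opt/zuul/bin/zuul-cloner /usr/local/bin/zuul-cloner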
>> We pip install zuul in the nodepool images (presumably for >> zuul-cloner) which pulls in Paramiko which pulls pycrypto: >> Collecting pycrypto!=2.4,<3.0,>=2.1 (from paramiko<2.0.0,>=1.8.0->zuul) >> >> It looks like if we install crypto from both pip and packages, we're >> bound to be running into issues like that: >> ==== >> Error: /Stage[main]/Glance/Package[openstack-glance]/ensure: change >> from absent to present failed: Execution of '/bin/yum -d 0 -e 0 -y >> install openstack-glance' returned 1: Error downloading packages: >> 1:openstack-glance-13.0.0-0.20160817133141.bfe4d91.el7.centos.noarch: >> [Errno 256] No more mirrors to try. >> Error: Execution of '/bin/yum -d 0 -e 0 -y install >> openstack-barbican-api' returned 1: Error unpacking rpm package >> python-crypto-2.6.1-1.el7.x86_64 > > > ====>>>> > >> error: unpacking of archive failed on file >> /usr/lib64/python2.7/site-packages/pycrypto-2.6.1-py2.7.egg-info: >> cpio: rename failed - Is a directory >> ==== >> > > <<<<==== This is a known RPM/CPIO issue, and in this case, not a packaging bug. > > > >> Reproduced here [1] on a fresh CentOS install with: >> 1 yum -y install "@Development Tools" python-devel openssl-devel >> libffi-devel libxml2-devel python-setuptools python-crypto >> 2 yum -y remove python-crypto >> 3 easy_install pip >> 4 pip install pycrypto >> 5 yum -y install python-crypto >> > > *fresh* as in no Cloud SIG repository? > In that case, this is totally expected behaviour because of setuptools > versions difference. > Not mentionning, that mixing packages and pip-installed modules in > system site-packages is not supported. > > Please check setuptools versions, note that we ship a more recent > setuptools in Cloud SIG repositories. > >> I am unable to reproduce the issue on Fedora. >> CentOS extras (and RDO) has python-crypto-2.6.1-1.el7.centos.x86_64 >> (2014) while Fedora has python-crypto-2.6.1-10.fc24 (2016). >> > > > $ rpm -qi python2-crypto > Name : python2-crypto > Version : 2.6.1 > Release : 10.fc24 > Architecture: x86_64 > Install Date: Wed 09 Mar 2016 13:16:13 CET > $ sudo pip install pycrypto > Requirement already satisfied (use --upgrade to upgrade): pycrypto in > /usr/lib64/python2.7/site-packages > > pip is able to detect that pycrypto is already installed from package. > > >> Can we update the version we have in RDO ? Or is there some other root >> cause I'm missing ? >> > > For the record, I also diffed 2.6.1-1 and 2.6.1-10 and the main changes are: > * python3 port (irrelevant for us) > * unbundling libtomcrypt which may be a good reason for rebuilding it > in RDO but irrelevant to your issue. > > But again, it won't solve that particular issue. Better fix would be > using virtualenv for zuul-cloner. > > Regards, > H. > >> [1]: http://pastebin.centos.org/52006/raw/ >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> From kdreyer at redhat.com Mon Aug 22 22:12:44 2016 From: kdreyer at redhat.com (Ken Dreyer) Date: Mon, 22 Aug 2016 16:12:44 -0600 Subject: [rdo-list] cherry-picking with rdopkg Message-ID: Hi Jakub, In the Ceph project we often have to cherry-pick GitHub Pull Requests downstream into RH's Ceph Storage product. Now that rdopkg supports pulling RHBZ numbers from the commits, we can store the BZ data in the -patches branches, and rdopkg does the rest. 
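For context, the kind of cherry-pick being described, a GitHub pull request landed on a -patches branch with the bug recorded in the commit message, looks roughly like this by hand (a sketch for a single-commit PR; the branch name, PR reference and bug number below are placeholders):

    # fetch the pull request head and land it on the -patches branch
    git checkout ceph-2-rhel-patches
    git fetch https://github.com/ceph/ceph pull/111/head
    git cherry-pick -x FETCH_HEAD
    # record the downstream bug so rdopkg can pick it up from the log
    git commit --amend -m "$(git log -1 --pretty=%B)" -m "Resolves: rhbz#12345"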
To cherry-pick GitHub PRs for particular BZs, I've written the following app: https://github.com/ktdreyer/downstream-cherry-picker This ensures that the cherry-picks are done correctly each time, enforces the RHBZ numbers, etc. It would be cool to build this as a feature into rdopkg. It could be something like: rdopkg cherry-pick https://github.com/ceph/ceph/pull/111 12345 And it would cherry-pick GitHub PR #111 into the -patches branch, adding "Resolves: rhbz#12345" to each cherry-pick's log. Do you guys in RDO often cherry-pick large branches (like 8+ commits) downstream at a time? If so, we could make it auto-detect GitHub vs Gerrit (although I'm not sure what URL you'd use for a branch in Gerrit)? What do you think? - Ken From sbaker at redhat.com Mon Aug 22 23:17:31 2016 From: sbaker at redhat.com (Steve Baker) Date: Tue, 23 Aug 2016 11:17:31 +1200 Subject: [rdo-list] python-heat-agent subpackage from openstack-heat-templates Message-ID: Hi All I would like to get some feedback on this packaging change: https://review.rdoproject.org/r/#/c/1909 This change creates a new subpackage python-heat-agent out of openstack-heat-templates. Currently image building or boot configuration has to do a number of non-obvious steps to end up with a server ready to perform heat-driven software deployments. The package python-heat-agent installs all dependencies and files required to do this, resulting in a boot config on a pristine image being as simple as: yum -y install https://www.rdoproject.org/repos/rdo-release.rpm yum -y install python-heat-agent systemctl enable os-collect-config systemctl start --no-block os-collect-config python-heat-agent installs one hook which allows configuration via heat templates. The aim is to create another subpackage per configuration tool hook in heat-templates. So python-heat-agent-puppet will install the puppet hook and depend on python-heat-agent and puppet packages. This depends on some upstream heat-templates changes: https://review.openstack.org/#/q/topic:centos-rdo-boot-config As far as TripleO goes, this packaging approach has the potential to eliminate the need for diskimage-builder invoking heat-templates elements, and further reducing the use of tripleo-image-elements - I'd like to have a discussion on openstack-dev around whether there should be a push to remove tripleo-image-elements entirely. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Tue Aug 23 01:29:05 2016 From: dprince at redhat.com (Dan Prince) Date: Mon, 22 Aug 2016 21:29:05 -0400 Subject: [rdo-list] python-heat-agent subpackage from openstack-heat-templates In-Reply-To: References: Message-ID: <1471915745.30384.4.camel@redhat.com> On Tue, 2016-08-23 at 11:17 +1200, Steve Baker wrote: > Hi All > > I would like to get some feedback on this packaging change: > https://review.rdoproject.org/r/#/c/1909 > > This change creates a new subpackage python-heat-agent out of > openstack-heat-templates. > > Currently image building or boot configuration has to do a number of > non-obvious steps to end up with a server ready to perform heat- > driven > software deployments. > > The package python-heat-agent installs all dependencies and files > required to do this, resulting in a boot config on a pristine image > being as simple as: > > ??????? yum -y install https://www.rdoproject.org/repos/rdo-release.r > pm > ??????? yum -y install python-heat-agent > ??????? systemctl enable os-collect-config > ??????? 
systemctl start --no-block os-collect-config > > python-heat-agent installs one hook which allows configuration via > heat templates. The aim is to create another subpackage per > configuration tool hook in heat-templates. So python-heat-agent- > puppet > will install the puppet hook and depend on python-heat-agent and > puppet packages. +1. Moving away from elements for this would be very nice. FWIW, what you've done here would dovetail into the all-in-one Heat undercloud installer effort too in that we could pretty much eliminate the use of instack to bootstrap os-collect-config altogether and just use these subpackages directly. Faster, better, leaner I think. > > This depends on some upstream heat-templates changes: > https://review.openstack.org/#/q/topic:centos-rdo-boot-config > > As far as TripleO goes, this packaging approach has the potential to > eliminate the need for diskimage-builder invoking heat-templates > elements, and further reducing the use of tripleo-image-elements - > I'd like to have a discussion on openstack-dev around whether there > should be a push to remove tripleo-image-elements entirely. +1 to all of this. We've still got some refactoring to do to fully eliminate more of the os-*-config dependencies (os-apply-config, and os-net-config) but I think we are closing in on it. Dan From sbaker at redhat.com Tue Aug 23 04:09:42 2016 From: sbaker at redhat.com (Steve Baker) Date: Tue, 23 Aug 2016 16:09:42 +1200 Subject: [rdo-list] python-heat-agent subpackage from openstack-heat-templates In-Reply-To: <1471915745.30384.4.camel@redhat.com> References: <1471915745.30384.4.camel@redhat.com> Message-ID: On 23/08/16 13:29, Dan Prince wrote: > On Tue, 2016-08-23 at 11:17 +1200, Steve Baker wrote: >> Hi All >> >> I would like to get some feedback on this packaging change: >> https://review.rdoproject.org/r/#/c/1909 >> >> This change creates a new subpackage python-heat-agent out of >> openstack-heat-templates. >> >> Currently image building or boot configuration has to do a number of >> non-obvious steps to end up with a server ready to perform heat- >> driven >> software deployments. >> >> The package python-heat-agent installs all dependencies and files >> required to do this, resulting in a boot config on a pristine image >> being as simple as: >> >> yum -y install https://www.rdoproject.org/repos/rdo-release.r >> pm >> yum -y install python-heat-agent >> systemctl enable os-collect-config >> systemctl start --no-block os-collect-config >> >> python-heat-agent installs one hook which allows configuration via >> heat templates. The aim is to create another subpackage per >> configuration tool hook in heat-templates. So python-heat-agent- >> puppet >> will install the puppet hook and depend on python-heat-agent and >> puppet packages. > +1. Moving away from elements for this would be very nice. > > FWIW, what you've done here would dovetail into the all-in-one Heat > undercloud installer effort too in that we could pretty much eliminate > the use of instack to bootstrap os-collect-config altogether and just > use these subpackages directly. Faster, better, leaner I think. The above review is a series now, there is now a python-heat-agent-puppet and python-heat-agent-ansible. 
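As a usage sketch (package names as proposed in the series; the exact dependency chain is an assumption until the reviews merge), a puppet-driven node on a pristine image would then need only:

  yum -y install https://www.rdoproject.org/repos/rdo-release.rpm
  yum -y install python-heat-agent-puppet    # expected to pull in python-heat-agent and puppet
  systemctl enable os-collect-config
  systemctl start --no-block os-collect-config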
>> This depends on some upstream heat-templates changes: >> https://review.openstack.org/#/q/topic:centos-rdo-boot-config >> >> As far as TripleO goes, this packaging approach has the potential to >> eliminate the need for diskimage-builder invoking heat-templates >> elements, and further reducing the use of tripleo-image-elements - >> I'd like to have a discussion on openstack-dev around whether there >> should be a push to remove tripleo-image-elements entirely. > +1 to all of this. We've still got some refactoring to do to fully > eliminate more of the os-*-config dependencies (os-apply-config, and > os-net-config) but I think we are closing in on it. > I'll post something to openstack-dev once I have something concrete to recommend. From javier.pena at redhat.com Tue Aug 23 09:34:29 2016 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 23 Aug 2016 05:34:29 -0400 (EDT) Subject: [rdo-list] CentOS Cloud SIG In-Reply-To: References: <5802b7bc-61f7-e67c-3138-e1b01b9da315@redhat.com> Message-ID: <1197340603.4774735.1471944869456.JavaMail.zimbra@redhat.com> ----- Original Message ----- > 2016-08-18 19:01 GMT+02:00 Rich Bowen : > > To quote from yesterday's RDO IRC meeting: > > > > We have basically not had a CentOS Cloud SIG meeting for 2 months. > > Part of this is because of travel/summer/whatever. I also have a > > standing conflict with that meeting, which I'm trying to fix. > > But more than that, I think it's because the Cloud SIG is just a rehash > > of this meeting, because only RDO is participating. > > > > On the one hand, we don't really accomplish much in that meeting when we > > do have it. On the other hand, it's a way to get RDO in front of another > > audience. > > > > So, I guess I'm asking if people still think it's valuable, and will try > > to carve out a little time for this each week. Or as was suggested in > > the meeting, should we attempt to merge the two meetings in some > > meaningful way, until such time as other Cloud Software communities see > > a benefit in participating? > > > > Currently, Cloud SIG is de-facto maintained by RDO folks, CentOS Core > team + Kushal. > I agree that re-hashing meetings is wasting valuable time for many > people, so I agree with your proposal. > But we should continue looking at collaborating with other Cloud > communities (like NFV) > > One twist: we should copy centos devel list in our meeting minutes, if > we merge the meetings. I'm also in favor of merging meetings, given the attendants and topics are mostly the same. Javier > > Regards, > H. 
> > > > > -- > > Rich Bowen - rbowen at redhat.com > > RDO Community Liaison > > http://rdoproject.org > > @RDOCommunity > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From hguemar at fedoraproject.org Tue Aug 23 13:47:28 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 23 Aug 2016 15:47:28 +0200 Subject: [rdo-list] python-heat-agent subpackage from openstack-heat-templates In-Reply-To: References: Message-ID: 2016-08-23 1:17 GMT+02:00 Steve Baker : > Hi All > > I would like to get some feedback on this packaging change: > https://review.rdoproject.org/r/#/c/1909 > > This change creates a new subpackage python-heat-agent out of > openstack-heat-templates. > > Currently image building or boot configuration has to do a number of > non-obvious steps to end up with a server ready to perform heat-driven > software deployments. > > The package python-heat-agent installs all dependencies and files > required to do this, resulting in a boot config on a pristine image > being as simple as: > > yum -y install https://www.rdoproject.org/repos/rdo-release.rpm > yum -y install python-heat-agent > systemctl enable os-collect-config > systemctl start --no-block os-collect-config > > python-heat-agent installs one hook which allows configuration via > heat templates. The aim is to create another subpackage per > configuration tool hook in heat-templates. So python-heat-agent-puppet > will install the puppet hook and depend on python-heat-agent and > puppet packages. > > This depends on some upstream heat-templates changes: > https://review.openstack.org/#/q/topic:centos-rdo-boot-config > > As far as TripleO goes, this packaging approach has the potential to > eliminate the need for diskimage-builder invoking heat-templates elements, > and further reducing the use of tripleo-image-elements - I'd like to have a > discussion on openstack-dev around whether there should be a push to remove > tripleo-image-elements entirely. > Interesting changes, I think that python-heat-agent does not create issues with existing stuff as it's mostly new stuff. Adding it to the meeting agenda. H. From nhicher at redhat.com Tue Aug 23 14:59:34 2016 From: nhicher at redhat.com (Nicolas Hicher) Date: Tue, 23 Aug 2016 10:59:34 -0400 Subject: [rdo-list] Scheduled maintenance of review.rdoproject.org: 2016-08-24 13:30 UTC In-Reply-To: References: <3795275d-4d49-02dd-68fd-e5ecc6665d5f@redhat.com> Message-ID: Hello, We will postpone the maintenance. Thanks. Nico On 08/22/2016 04:58 PM, David Moreau Simard wrote: > As discussed off-thread, please postpone this maintenance for the time being. > > The update includes a forced account e-mail address synchronization > with the e-mail address the user has in Github. > This can disrupt users' workflow if their Gerrit and Github emails differ. > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Mon, Aug 15, 2016 at 9:45 AM, Nicolas Hicher wrote: >> Hello folks, >> >> We plan to upgrade review.rdoproject.org on 2016-08-24 13:30 UTC (next >> Wednesday). The downtime will be about 1 hour approximately. 
>> >> This is a maintenance upgrade to the last stable version of Software >> Factory 2.2.3, the changelog is: >> >> http://softwarefactory-project.io/r/gitweb?p=software-factory.git;a=blob_plain;f=CHANGELOG.md >> >> Regards, >> Software Factory Team, on behalf of rdo-infra >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From rasca at redhat.com Tue Aug 23 15:29:01 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Tue, 23 Aug 2016 17:29:01 +0200 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> Message-ID: Hi Boris, so, for what I see the pcs commands that stops and starts the cluster on the rebooted node should not be used. It can happen that a service fails to start but we need to investigate why from the logs. Remember that cleaning up resources can be useful if we know what happened, but using it repeatedly makes no sense. In addition remember that you can use just "pcs resource cleanup" to cleanup the entire cluster status and in some way "start from the beginning". Now, about this specific problem we need to understand what is happening here. Correct me if I'm wrong: 1) We have a clean env in which we reboot a node; 2) The nodes comes up, but some resources fails; 3) After some cleanups the env becomes clean again; Is this the sequence of operations you are using? Is the problem systematic and can we reproduce it? Can we grab sosreports from the machine involved? Most important question: which OpenStack version are you testing? -- Raoul Scarazzini rasca at redhat.com On 22/08/2016 13:49, Boris Derzhavets wrote: > > Sorry , for my English > > I was also keeping (not kept ) track on Galera DB via `clustercheck` > > either I just kept. > > > Boris > > > ------------------------------------------------------------------------ > *From:* rdo-list-bounces at redhat.com on > behalf of Boris Derzhavets > *Sent:* Monday, August 22, 2016 7:29 AM > *To:* Raoul Scarazzini; Wesley Hayutin; Attila Darazs > *Cc:* rdo-list > *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in > regards of managing HA PCS/Corosync cluster via pcs CLI > > > > > > ------------------------------------------------------------------------ > *From:* Raoul Scarazzini > *Sent:* Monday, August 22, 2016 3:51 AM > *To:* Wesley Hayutin; Boris Derzhavets; Attila Darazs > *Cc:* David Moreau Simard; rdo-list > *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in > regards of managing HA PCS/Corosync cluster via pcs CLI > > Hi everybody, > sorry for the late response but I was on PTO. I don't understand the > meaning of the cleanup commands, but maybe it's just because I'm not > getting the whole picture. 
> >> > I have to confirm that fault was mine PCS CLI is working on TripeO > QuickStart > but requires pcs cluster restart on particular node which went down > via ` nova stop controller-X` and was brought up via `nova start > controller-X` > Details here :- > > http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html > > VENV been set up with instack-virt-setup doesn't require ( on bounced > Controller node ) > > # pcs cluster stop > # pcs cluster start > > Before issuing start.sh > > #!/bash -x > pcs resource cleanup rabbitmq-clone ; > sleep 10 > pcs resource cleanup neutron-server-clone ; > sleep 10 > pcs resource cleanup openstack-nova-api-clone ; > sleep 10 > pcs resource cleanup openstack-nova-consoleauth-clone ; > sleep 10 > pcs resource cleanup openstack-heat-engine-clone ; > sleep 10 > pcs resource cleanup openstack-cinder-api-clone ; > sleep 10 > pcs resource cleanup openstack-glance-registry-clone ; > sleep 10 > pcs resource cleanup httpd-clone ; > > # . ./start.sh > > In worse case scenario I have to issue start.sh twice from different > Controllers > pcs resource cleanup openstack-nova-api-clone attempts to start > corresponding > service , which is down at the moment. In fact two cleanups above start all > Nova Services && one neutron cleanup starts all neutron agents as well. > I was also kept track of Galera DB via `clustercheck` > > Thanks. > Boris >> > > > I guess we're hitting a version problem here: if you deploy the actual > master (i.e. with quickstart) you'll get the environment with the > constraints limited to the core services because of [1] and [2] (so none > of the mentioned services exists in the cluster configuration). > > Hope this helps, > > [1] https://review.openstack.org/#/c/314208/ > [2] https://review.openstack.org/#/c/342650/ > > -- > Raoul Scarazzini > rasca at redhat.com > > On 08/08/2016 14:43, Wesley Hayutin wrote: >> Attila, Raoul >> Can you please investigate this issue. >> >> Thanks! >> >> On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets >> > wrote: >> >> TripleO HA Controller been installed via instack-virt-setup has PCS >> CLI like :- >> >> pcs resource cleanup neutron-server-clone >> pcs resource cleanup openstack-nova-api-clone >> pcs resource cleanup openstack-nova-consoleauth-clone >> pcs resource cleanup openstack-heat-engine-clone >> pcs resource cleanup openstack-cinder-api-clone >> pcs resource cleanup openstack-glance-registry-clone >> pcs resource cleanup httpd-clone >> >> been working as expected on bare metal >> >> >> Same cluster been setup via QuickStart (Virtual ENV) after bouncing >> one of controllers >> >> included in cluster ignores PCS CLI at least via my experience ( >> which is obviously limited >> >> either format of particular commands is wrong for QuickStart ) >> >> I believe that dropping (complete replacing ) instack-virt-setup is >> not a good idea in general. Personally, I believe that like in case >> with packstack it is always good >> >> to have VENV configuration been tested before going to bare metal >> deployment. >> >> My major concern is maintenance and disaster recovery tests , rather >> then deployment itself . What good is for me TripleO Quickstart >> running on bare metal if I cannot replace >> >> crashed VM Controller just been limited to Services HA ( all 3 >> Cluster VMs running on single >> >> bare metal node ) >> >> >> Thanks >> >> Boris. 
>> >> >> >> >> >> ------------------------------------------------------------------------ >> >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> From bderzhavets at hotmail.com Tue Aug 23 17:42:32 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 23 Aug 2016 17:42:32 +0000 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> , Message-ID: ________________________________ From: Raoul Scarazzini Sent: Tuesday, August 23, 2016 11:29 AM To: Boris Derzhavets; Wesley Hayutin; Attila Darazs Cc: rdo-list Subject: Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI Hi Boris, so, for what I see the pcs commands that stops and starts the cluster on the rebooted node should not be used. It can happen that a service fails to start but we need to investigate why from the logs. Remember that cleaning up resources can be useful if we know what happened, but using it repeatedly makes no sense. In addition remember that you can use just "pcs resource cleanup" to cleanup the entire cluster status and in some way "start from the beginning". Now, about this specific problem we need to understand what is happening here. Correct me if I'm wrong: 1) We have a clean env in which we reboot a node; That is correct 2) The nodes comes up, but some resources fails; All resources fail 3) After some cleanups the env becomes clean again; a) If VENV is setup by instack-virt-setup ( official guide ) Mentioned script start.sh works right a way . It comes as well from official guide. b) if VENV is setup by Tripleo QuickStart ( where undecloud.qcow2 gets uploaded to libvirt pool already having overcloud images integrated per Jon's Video explanation QuickStart CI vs Tripleo CI ) then ( via my experience ) before attempting start.sh I MUST restart PCS Cluster on bounced Controller-X , then invoke `. ./start.sh` ( not simply ./start.sh ) Pretty often second run start.sh is required from another controller-Y. Some times I cannot fix it in script mode and have manually run commands giving delay more the 10 sec. So finally ( about 25 tests passed) I get `pcs status` OK. In other words all service are up and running on every controller-X,Y,Z Details :- http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html [https://3.bp.blogspot.com/-xtnWVVrV2cs/V7nQssWM9pI/AAAAAAAAHBw/DrYHJeCNEO4nTCigqZpgt4P7iwgmKekhQCLcB/w1200-h630-p-nu/Screenshot%2Bfrom%2B2016-08-21%2B19-00-24.png] Xen Virtualization on Linux and Solaris: Emulation Triple0 QuickStart HA Controller's Cluster failover bderzhavets.blogspot.ru Is this the sequence of operations you are using? Is the problem systematic and can we reproduce it? > YES > Can we grab sosreports from the machine involved? > Instruct me how to do this ? > Most important question: which OpenStack version are you testing? 
> Mitaka stable :- [tripleo-quickstart at stack] $ bash quickstart --config ./ha.yml $VIRTHOST By default , no --release specified Mitaka Delorean trunks get selected Just check /etc/yum.repos.d/ for delorean.repos quickstart places on undercloud when it exits asking to to connect to undercloud > Boris -- Raoul Scarazzini rasca at redhat.com On 22/08/2016 13:49, Boris Derzhavets wrote: > > Sorry , for my English > > I was also keeping (not kept ) track on Galera DB via `clustercheck` > > either I just kept. > > > Boris > > > ------------------------------------------------------------------------ > *From:* rdo-list-bounces at redhat.com on > behalf of Boris Derzhavets > *Sent:* Monday, August 22, 2016 7:29 AM > *To:* Raoul Scarazzini; Wesley Hayutin; Attila Darazs > *Cc:* rdo-list > *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in > regards of managing HA PCS/Corosync cluster via pcs CLI > > > > > > ------------------------------------------------------------------------ > *From:* Raoul Scarazzini > *Sent:* Monday, August 22, 2016 3:51 AM > *To:* Wesley Hayutin; Boris Derzhavets; Attila Darazs > *Cc:* David Moreau Simard; rdo-list > *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in > regards of managing HA PCS/Corosync cluster via pcs CLI > > Hi everybody, > sorry for the late response but I was on PTO. I don't understand the > meaning of the cleanup commands, but maybe it's just because I'm not > getting the whole picture. > >> > I have to confirm that fault was mine PCS CLI is working on TripeO > QuickStart > but requires pcs cluster restart on particular node which went down > via ` nova stop controller-X` and was brought up via `nova start > controller-X` > Details here :- > > http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html [https://3.bp.blogspot.com/-xtnWVVrV2cs/V7nQssWM9pI/AAAAAAAAHBw/DrYHJeCNEO4nTCigqZpgt4P7iwgmKekhQCLcB/w1200-h630-p-nu/Screenshot%2Bfrom%2B2016-08-21%2B19-00-24.png] Xen Virtualization on Linux and Solaris: Emulation Triple0 QuickStart HA Controller's Cluster failover bderzhavets.blogspot.ru > > VENV been set up with instack-virt-setup doesn't require ( on bounced > Controller node ) > > # pcs cluster stop > # pcs cluster start > > Before issuing start.sh > > #!/bash -x > pcs resource cleanup rabbitmq-clone ; > sleep 10 > pcs resource cleanup neutron-server-clone ; > sleep 10 > pcs resource cleanup openstack-nova-api-clone ; > sleep 10 > pcs resource cleanup openstack-nova-consoleauth-clone ; > sleep 10 > pcs resource cleanup openstack-heat-engine-clone ; > sleep 10 > pcs resource cleanup openstack-cinder-api-clone ; > sleep 10 > pcs resource cleanup openstack-glance-registry-clone ; > sleep 10 > pcs resource cleanup httpd-clone ; > > # . ./start.sh > > In worse case scenario I have to issue start.sh twice from different > Controllers > pcs resource cleanup openstack-nova-api-clone attempts to start > corresponding > service , which is down at the moment. In fact two cleanups above start all > Nova Services && one neutron cleanup starts all neutron agents as well. > I was also kept track of Galera DB via `clustercheck` > > Thanks. > Boris >> > > > I guess we're hitting a version problem here: if you deploy the actual > master (i.e. with quickstart) you'll get the environment with the > constraints limited to the core services because of [1] and [2] (so none > of the mentioned services exists in the cluster configuration). 
> > Hope this helps, > > [1] https://review.openstack.org/#/c/314208/ > [2] https://review.openstack.org/#/c/342650/ > > -- > Raoul Scarazzini > rasca at redhat.com > > On 08/08/2016 14:43, Wesley Hayutin wrote: >> Attila, Raoul >> Can you please investigate this issue. >> >> Thanks! >> >> On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets >> > wrote: >> >> TripleO HA Controller been installed via instack-virt-setup has PCS >> CLI like :- >> >> pcs resource cleanup neutron-server-clone >> pcs resource cleanup openstack-nova-api-clone >> pcs resource cleanup openstack-nova-consoleauth-clone >> pcs resource cleanup openstack-heat-engine-clone >> pcs resource cleanup openstack-cinder-api-clone >> pcs resource cleanup openstack-glance-registry-clone >> pcs resource cleanup httpd-clone >> >> been working as expected on bare metal >> >> >> Same cluster been setup via QuickStart (Virtual ENV) after bouncing >> one of controllers >> >> included in cluster ignores PCS CLI at least via my experience ( >> which is obviously limited >> >> either format of particular commands is wrong for QuickStart ) >> >> I believe that dropping (complete replacing ) instack-virt-setup is >> not a good idea in general. Personally, I believe that like in case >> with packstack it is always good >> >> to have VENV configuration been tested before going to bare metal >> deployment. >> >> My major concern is maintenance and disaster recovery tests , rather >> then deployment itself . What good is for me TripleO Quickstart >> running on bare metal if I cannot replace >> >> crashed VM Controller just been limited to Services HA ( all 3 >> Cluster VMs running on single >> >> bare metal node ) >> >> >> Thanks >> >> Boris. >> >> >> >> >> >> ------------------------------------------------------------------------ >> >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Wed Aug 24 06:56:01 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Wed, 24 Aug 2016 08:56:01 +0200 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> Message-ID: What I can say for sure at the moment is that it's really hard to follow your quoting :) Still some info are missing: - Which version are you using? The latest? So are you installing from master? - Are the two versions the same while using quickstart and instack-virt-setup? - Do we have some logs to look at? Again, I think here we're hitting a version issure. Thanks, -- Raoul Scarazzini rasca at redhat.com On 23/08/2016 19:42, Boris Derzhavets wrote: > > > > ------------------------------------------------------------------------ > *From:* Raoul Scarazzini > *Sent:* Tuesday, August 23, 2016 11:29 AM > *To:* Boris Derzhavets; Wesley Hayutin; Attila Darazs > *Cc:* rdo-list > *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in > regards of managing HA PCS/Corosync cluster via pcs CLI > > Hi Boris, > so, for what I see the pcs commands that stops and starts the cluster on > the rebooted node should not be used. It can happen that a service fails > to start but we need to investigate why from the logs. 
> > Remember that cleaning up resources can be useful if we know what > happened, but using it repeatedly makes no sense. In addition remember > that you can use just "pcs resource cleanup" to cleanup the entire > cluster status and in some way "start from the beginning". > > Now, about this specific problem we need to understand what is happening > here. Correct me if I'm wrong: > > 1) We have a clean env in which we reboot a node; > That is correct > > 2) The nodes comes up, but some resources fails; > All resources fail > > 3) After some cleanups the env becomes clean again; > > a) If VENV is setup by instack-virt-setup ( official guide ) > Mentioned script start.sh works right a way . It comes as well > from official guide. > > b) if VENV is setup by Tripleo QuickStart ( where undecloud.qcow2 > gets uploaded > to libvirt pool already having overcloud images integrated per > Jon's Video explanation > QuickStart CI vs Tripleo CI ) > then ( via my experience ) before attempting start.sh I > MUST restart PCS Cluster > on bounced Controller-X , then invoke `. ./start.sh` ( not > simply ./start.sh ) > Pretty often second run start.sh is required from another > controller-Y. > Some times I cannot fix it in script mode and have manually > run commands giving > delay more the 10 sec. So finally ( about 25 tests passed) I > get `pcs status` OK. > In other words all service are up and running on every > controller-X,Y,Z > > Details :- > http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html > > > > Xen Virtualization on Linux and Solaris: Emulation Triple0 QuickStart HA > Controller's Cluster failover > > bderzhavets.blogspot.ru > > Is this the sequence of operations you are using? Is the problem > systematic and can we reproduce it? >> > YES >> > Can we grab sosreports from the > machine involved? >> > Instruct me how to do this ? >> > Most important question: which OpenStack version are > you testing? >> > Mitaka stable :- > > [tripleo-quickstart at stack] $ bash quickstart --config ./ha.yml $VIRTHOST > By default , no --release specified Mitaka Delorean trunks get selected > Just check|/etc/yum.repos.d/| for delorean.repos > quickstart places on undercloud when it exits asking to to connect to > undercloud >> > Boris > -- > Raoul Scarazzini > rasca at redhat.com > > > On 22/08/2016 13:49, Boris Derzhavets wrote: >> >> Sorry , for my English >> >> I was also keeping (not kept ) track on Galera DB via `clustercheck` >> >> either I just kept. >> >> >> Boris >> >> >> ------------------------------------------------------------------------ >> *From:* rdo-list-bounces at redhat.com on >> behalf of Boris Derzhavets >> *Sent:* Monday, August 22, 2016 7:29 AM >> *To:* Raoul Scarazzini; Wesley Hayutin; Attila Darazs >> *Cc:* rdo-list >> *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in >> regards of managing HA PCS/Corosync cluster via pcs CLI >> >> >> >> >> >> ------------------------------------------------------------------------ >> *From:* Raoul Scarazzini >> *Sent:* Monday, August 22, 2016 3:51 AM >> *To:* Wesley Hayutin; Boris Derzhavets; Attila Darazs >> *Cc:* David Moreau Simard; rdo-list >> *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in >> regards of managing HA PCS/Corosync cluster via pcs CLI >> >> Hi everybody, >> sorry for the late response but I was on PTO. I don't understand the >> meaning of the cleanup commands, but maybe it's just because I'm not >> getting the whole picture. 
>> >>> >> I have to confirm that fault was mine PCS CLI is working on TripeO >> QuickStart >> but requires pcs cluster restart on particular node which went down >> via ` nova stop controller-X` and was brought up via `nova start >> controller-X` >> Details here :- >> >> http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html > > > > Xen Virtualization on Linux and Solaris: Emulation Triple0 QuickStart HA > Controller's Cluster failover > > bderzhavets.blogspot.ru > > >> >> VENV been set up with instack-virt-setup doesn't require ( on bounced >> Controller node ) >> >> # pcs cluster stop >> # pcs cluster start >> >> Before issuing start.sh >> >> #!/bash -x >> pcs resource cleanup rabbitmq-clone ; >> sleep 10 >> pcs resource cleanup neutron-server-clone ; >> sleep 10 >> pcs resource cleanup openstack-nova-api-clone ; >> sleep 10 >> pcs resource cleanup openstack-nova-consoleauth-clone ; >> sleep 10 >> pcs resource cleanup openstack-heat-engine-clone ; >> sleep 10 >> pcs resource cleanup openstack-cinder-api-clone ; >> sleep 10 >> pcs resource cleanup openstack-glance-registry-clone ; >> sleep 10 >> pcs resource cleanup httpd-clone ; >> >> # . ./start.sh >> >> In worse case scenario I have to issue start.sh twice from different >> Controllers >> pcs resource cleanup openstack-nova-api-clone attempts to start >> corresponding >> service , which is down at the moment. In fact two cleanups above start all >> Nova Services && one neutron cleanup starts all neutron agents as well. >> I was also kept track of Galera DB via `clustercheck` >> >> Thanks. >> Boris >>> >> >> >> I guess we're hitting a version problem here: if you deploy the actual >> master (i.e. with quickstart) you'll get the environment with the >> constraints limited to the core services because of [1] and [2] (so none >> of the mentioned services exists in the cluster configuration). >> >> Hope this helps, >> >> [1] https://review.openstack.org/#/c/314208/ >> [2] https://review.openstack.org/#/c/342650/ >> >> -- >> Raoul Scarazzini >> rasca at redhat.com >> >> On 08/08/2016 14:43, Wesley Hayutin wrote: >>> Attila, Raoul >>> Can you please investigate this issue. >>> >>> Thanks! >>> >>> On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets >>> > wrote: >>> >>> TripleO HA Controller been installed via instack-virt-setup has PCS >>> CLI like :- >>> >>> pcs resource cleanup neutron-server-clone >>> pcs resource cleanup openstack-nova-api-clone >>> pcs resource cleanup openstack-nova-consoleauth-clone >>> pcs resource cleanup openstack-heat-engine-clone >>> pcs resource cleanup openstack-cinder-api-clone >>> pcs resource cleanup openstack-glance-registry-clone >>> pcs resource cleanup httpd-clone >>> >>> been working as expected on bare metal >>> >>> >>> Same cluster been setup via QuickStart (Virtual ENV) after bouncing >>> one of controllers >>> >>> included in cluster ignores PCS CLI at least via my experience ( >>> which is obviously limited >>> >>> either format of particular commands is wrong for QuickStart ) >>> >>> I believe that dropping (complete replacing ) instack-virt-setup is >>> not a good idea in general. Personally, I believe that like in case >>> with packstack it is always good >>> >>> to have VENV configuration been tested before going to bare metal >>> deployment. >>> >>> My major concern is maintenance and disaster recovery tests , rather >>> then deployment itself . 
What good is for me TripleO Quickstart >>> running on bare metal if I cannot replace >>> >>> crashed VM Controller just been limited to Services HA ( all 3 >>> Cluster VMs running on single >>> >>> bare metal node ) >>> >>> >>> Thanks >>> >>> Boris. >>> >>> >>> >>> >>> >>> ------------------------------------------------------------------------ >>> >>> >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> >>> From bderzhavets at hotmail.com Wed Aug 24 07:54:51 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 24 Aug 2016 07:54:51 +0000 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> , Message-ID: ________________________________ From: Raoul Scarazzini Sent: Wednesday, August 24, 2016 2:56 AM To: Boris Derzhavets; Wesley Hayutin; Attila Darazs Cc: rdo-list Subject: Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI What I can say for sure at the moment is that it's really hard to follow your quoting :) Still some info are missing: - Which version are you using? The latest? So are you installing from master? > No idea. My actions :- $ rm -fr .ansible .quickstart tripleo-quickstart $ git clone https://github.com/openstack/tripleo-quickstart $ cd tripleo* $ ssh root@$VIRTHOST uname -a $ vi /config/general_config/ha.yml ==> to tune && save $ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST When I get prompt to login to undercloud /etc/yum.repos.d/ contains delorean.repo ( on fresh undercloud QS build ) files pointing to Mitaka Delorean trunks. Just check for yourself. > - Are the two versions the same while using quickstart and instack-virt-setup? > No the versions are different ( points in delorean.repos differ ) Details here :- http://bderzhavets.blogspot.ru/2016/07/stable-mitaka-ha-instack-virt-setup.html [https://1.bp.blogspot.com/-0sbpr2HVRsQ/V523UacdA9I/AAAAAAAAG_E/5FJ3IMWNl9ImdAYnYlHXGzWGMHps-Q-5wCEw/w1200-h630-p-nu/Screenshot%2Bfrom%2B2016-07-31%2B10-38-16.png] Xen Virtualization on Linux and Solaris: Stable Mitaka HA instack-virt-setup on CentOS 7.2 VIRTHOST bderzhavets.blogspot.ru - Do we have some logs to look at? > Sorry, it was not my concern. My primary target was identify sequence of steps which brings PCS Cluster to proper state with 100% warranty. Again, I think here we're hitting a version issure. > I tested QuickStart ( obtained just at the same time on different box) delorean.repo files in instack-virt-setup build of VENV It doesn't change much final overcloud cluster behavior. I am aware of this link is outdated ( due to Wesley Hayutin respond ) https://bluejeans.com/s/a5ua/ However , my current understanding of QuickStart CI ( not TripleO CI ) is based on this video I believe the core reason is the way how Tripleo QS assembles undercloud for VENV build. It makes HA Controllers cluster pretty stable and always recoverable (VENV case) It is different from official way at least in meantime. I've already wrote TripleO QS nowhere issues :- $ openstack overcloud images build --all ===> that is supposed to be removed as far as I understand TripleO Core team intend. > Boris. 
Thanks, -- Raoul Scarazzini rasca at redhat.com On 23/08/2016 19:42, Boris Derzhavets wrote: > > > > ------------------------------------------------------------------------ > *From:* Raoul Scarazzini > *Sent:* Tuesday, August 23, 2016 11:29 AM > *To:* Boris Derzhavets; Wesley Hayutin; Attila Darazs > *Cc:* rdo-list > *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in > regards of managing HA PCS/Corosync cluster via pcs CLI > > Hi Boris, > so, for what I see the pcs commands that stops and starts the cluster on > the rebooted node should not be used. It can happen that a service fails > to start but we need to investigate why from the logs. > > Remember that cleaning up resources can be useful if we know what > happened, but using it repeatedly makes no sense. In addition remember > that you can use just "pcs resource cleanup" to cleanup the entire > cluster status and in some way "start from the beginning". > > Now, about this specific problem we need to understand what is happening > here. Correct me if I'm wrong: > > 1) We have a clean env in which we reboot a node; > That is correct > > 2) The nodes comes up, but some resources fails; > All resources fail > > 3) After some cleanups the env becomes clean again; > > a) If VENV is setup by instack-virt-setup ( official guide ) > Mentioned script start.sh works right a way . It comes as well > from official guide. > > b) if VENV is setup by Tripleo QuickStart ( where undecloud.qcow2 > gets uploaded > to libvirt pool already having overcloud images integrated per > Jon's Video explanation > QuickStart CI vs Tripleo CI ) > then ( via my experience ) before attempting start.sh I > MUST restart PCS Cluster > on bounced Controller-X , then invoke `. ./start.sh` ( not > simply ./start.sh ) > Pretty often second run start.sh is required from another > controller-Y. > Some times I cannot fix it in script mode and have manually > run commands giving > delay more the 10 sec. So finally ( about 25 tests passed) I > get `pcs status` OK. > In other words all service are up and running on every > controller-X,Y,Z > > Details :- > http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html [https://3.bp.blogspot.com/-xtnWVVrV2cs/V7nQssWM9pI/AAAAAAAAHBw/DrYHJeCNEO4nTCigqZpgt4P7iwgmKekhQCLcB/w1200-h630-p-nu/Screenshot%2Bfrom%2B2016-08-21%2B19-00-24.png] Xen Virtualization on Linux and Solaris: Emulation Triple0 QuickStart HA Controller's Cluster failover bderzhavets.blogspot.ru > > > > Xen Virtualization on Linux and Solaris: Emulation Triple0 QuickStart HA > Controller's Cluster failover > > bderzhavets.blogspot.ru > > Is this the sequence of operations you are using? Is the problem > systematic and can we reproduce it? >> > YES >> > Can we grab sosreports from the > machine involved? >> > Instruct me how to do this ? >> > Most important question: which OpenStack version are > you testing? >> > Mitaka stable :- > > [tripleo-quickstart at stack] $ bash quickstart --config ./ha.yml $VIRTHOST > By default , no --release specified Mitaka Delorean trunks get selected > Just check|/etc/yum.repos.d/| for delorean.repos > quickstart places on undercloud when it exits asking to to connect to > undercloud >> > Boris > -- > Raoul Scarazzini > rasca at redhat.com > > > On 22/08/2016 13:49, Boris Derzhavets wrote: >> >> Sorry , for my English >> >> I was also keeping (not kept ) track on Galera DB via `clustercheck` >> >> either I just kept. 
>> >> >> Boris >> >> >> ------------------------------------------------------------------------ >> *From:* rdo-list-bounces at redhat.com on >> behalf of Boris Derzhavets >> *Sent:* Monday, August 22, 2016 7:29 AM >> *To:* Raoul Scarazzini; Wesley Hayutin; Attila Darazs >> *Cc:* rdo-list >> *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in >> regards of managing HA PCS/Corosync cluster via pcs CLI >> >> >> >> >> >> ------------------------------------------------------------------------ >> *From:* Raoul Scarazzini >> *Sent:* Monday, August 22, 2016 3:51 AM >> *To:* Wesley Hayutin; Boris Derzhavets; Attila Darazs >> *Cc:* David Moreau Simard; rdo-list >> *Subject:* Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in >> regards of managing HA PCS/Corosync cluster via pcs CLI >> >> Hi everybody, >> sorry for the late response but I was on PTO. I don't understand the >> meaning of the cleanup commands, but maybe it's just because I'm not >> getting the whole picture. >> >>> >> I have to confirm that fault was mine PCS CLI is working on TripeO >> QuickStart >> but requires pcs cluster restart on particular node which went down >> via ` nova stop controller-X` and was brought up via `nova start >> controller-X` >> Details here :- >> >> http://bderzhavets.blogspot.ru/2016/08/emulation-rdo-triple0-quickstart-ha.html > > > > Xen Virtualization on Linux and Solaris: Emulation Triple0 QuickStart HA > Controller's Cluster failover > > bderzhavets.blogspot.ru > > >> >> VENV been set up with instack-virt-setup doesn't require ( on bounced >> Controller node ) >> >> # pcs cluster stop >> # pcs cluster start >> >> Before issuing start.sh >> >> #!/bash -x >> pcs resource cleanup rabbitmq-clone ; >> sleep 10 >> pcs resource cleanup neutron-server-clone ; >> sleep 10 >> pcs resource cleanup openstack-nova-api-clone ; >> sleep 10 >> pcs resource cleanup openstack-nova-consoleauth-clone ; >> sleep 10 >> pcs resource cleanup openstack-heat-engine-clone ; >> sleep 10 >> pcs resource cleanup openstack-cinder-api-clone ; >> sleep 10 >> pcs resource cleanup openstack-glance-registry-clone ; >> sleep 10 >> pcs resource cleanup httpd-clone ; >> >> # . ./start.sh >> >> In worse case scenario I have to issue start.sh twice from different >> Controllers >> pcs resource cleanup openstack-nova-api-clone attempts to start >> corresponding >> service , which is down at the moment. In fact two cleanups above start all >> Nova Services && one neutron cleanup starts all neutron agents as well. >> I was also kept track of Galera DB via `clustercheck` >> >> Thanks. >> Boris >>> >> >> >> I guess we're hitting a version problem here: if you deploy the actual >> master (i.e. with quickstart) you'll get the environment with the >> constraints limited to the core services because of [1] and [2] (so none >> of the mentioned services exists in the cluster configuration). >> >> Hope this helps, >> >> [1] https://review.openstack.org/#/c/314208/ >> [2] https://review.openstack.org/#/c/342650/ >> >> -- >> Raoul Scarazzini >> rasca at redhat.com >> >> On 08/08/2016 14:43, Wesley Hayutin wrote: >>> Attila, Raoul >>> Can you please investigate this issue. >>> >>> Thanks! 
>>> >>> On Sun, Aug 7, 2016 at 3:52 AM, Boris Derzhavets >>> > wrote: >>> >>> TripleO HA Controller been installed via instack-virt-setup has PCS >>> CLI like :- >>> >>> pcs resource cleanup neutron-server-clone >>> pcs resource cleanup openstack-nova-api-clone >>> pcs resource cleanup openstack-nova-consoleauth-clone >>> pcs resource cleanup openstack-heat-engine-clone >>> pcs resource cleanup openstack-cinder-api-clone >>> pcs resource cleanup openstack-glance-registry-clone >>> pcs resource cleanup httpd-clone >>> >>> been working as expected on bare metal >>> >>> >>> Same cluster been setup via QuickStart (Virtual ENV) after bouncing >>> one of controllers >>> >>> included in cluster ignores PCS CLI at least via my experience ( >>> which is obviously limited >>> >>> either format of particular commands is wrong for QuickStart ) >>> >>> I believe that dropping (complete replacing ) instack-virt-setup is >>> not a good idea in general. Personally, I believe that like in case >>> with packstack it is always good >>> >>> to have VENV configuration been tested before going to bare metal >>> deployment. >>> >>> My major concern is maintenance and disaster recovery tests , rather >>> then deployment itself . What good is for me TripleO Quickstart >>> running on bare metal if I cannot replace >>> >>> crashed VM Controller just been limited to Services HA ( all 3 >>> Cluster VMs running on single >>> >>> bare metal node ) >>> >>> >>> Thanks >>> >>> Boris. >>> >>> >>> >>> >>> >>> ------------------------------------------------------------------------ >>> >>> >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Wed Aug 24 08:34:58 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Wed, 24 Aug 2016 10:34:58 +0200 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> Message-ID: On 24/08/2016 09:54, Boris Derzhavets wrote: [...] > - Which version are you using? The latest? So are you installing from > master? >> > No idea. My actions :- > > $ rm -fr .ansible .quickstart tripleo-quickstart > $ git clone https://github.com/openstack/tripleo-quickstart > $ cd tripleo* > $ ssh root@$VIRTHOST uname -a > $ vi /config/general_config/ha.yml ==> to tune && save > $ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST OK, it can be useful to see what's inside ha.yml, but the command you mentioned does not specify a branc, so it uses "master", so Newton, so you're using a totally different setup from mitaka, especially from a cluster perspective. > When I get prompt to login to undercloud > /etc/yum.repos.d/ contains delorean.repo ( on fresh undercloud QS build ) > files pointing to Mitaka Delorean trunks. > Just check for yourself. Where can I check? On the link you posted I cannot find these files from the quickstart installation... > - Are the two versions the same while using quickstart and > instack-virt-setup? > No the versions are different ( points in delorean.repos differ ) This could have a great impact while comparing results. > - Do we have some logs to look at? > Sorry, it was not my concern. 
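If the intent is to compare like with like against the Mitaka instack-virt-setup environment, pinning the branch explicitly removes that variable, e.g. (a sketch; --release is the flag mentioned earlier in the thread, and the config path is the one from the commands above):

  bash quickstart.sh --release mitaka --config ./config/general_config/ha.yml $VIRTHOST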
> My primary target was identify sequence of steps > which brings PCS Cluster to proper state with 100% warranty. This is strictly related on all the stuff I've mentioned above. Things must work without workarounds like cleanup or similar things. If something breaks and we're using correct setup steps then we're in front of a bug. But to analyze a bug we need logs. > Again, I think here we're hitting a version issure. > I tested QuickStart ( obtained just at the same time on different box) > delorean.repo files in instack-virt-setup build of VENV > It doesn't change much final overcloud cluster behavior. > I am aware of this link is outdated ( due to Wesley Hayutin respond ) > https://bluejeans.com/s/a5ua/ > However , my current understanding of QuickStart CI ( not TripleO CI ) > is based on this video > I believe the core reason is the way how Tripleo QS assembles undercloud > for VENV build. > It makes HA Controllers cluster pretty stable and always recoverable > (VENV case) The VENV should not matter at all in terms of cluster configuration on the overcloud. > It is different from official way at least in meantime. > I've already wrote TripleO QS nowhere issues :- > $ openstack overcloud images build --all ===> that is supposed to be > removed > as far as I understand TripleO Core team intend. > Boris. As I said there's a lot of mix inside the repo and thins makes the debug really difficult. -- Raoul Scarazzini rasca at redhat.com From frederic.lepied at redhat.com Wed Aug 24 10:24:10 2016 From: frederic.lepied at redhat.com (=?UTF-8?B?RnLDqWTDqXJpYyBMZXBpZWQ=?=) Date: Wed, 24 Aug 2016 12:24:10 +0200 Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> Message-ID: <65134185-7fab-def1-6bab-2ca94ead284f@redhat.com> On 08/16/2016 12:26 PM, Javier Pena wrote: > > ----- Original Message ----- >> On Sat, Aug 13, 2016 at 11:21 AM, Fr?d?ric Lepied >> wrote: >>> Hi, >>> >>> Our CI promotion system is as if we were running downhill: we cannot >>> stop and the only way to fix issues is to move forward by getting new >>> fixed packages. And it usually takes days before we can get the fixes >>> and by the time we get them we can have other issues appearing and so >>> never being able to have a promotion. >>> >>> I would like to propose an improved way of working to get more CI >>> promotions. When we have failures in our tests, we do like we do >>> today: debug the issues and find the commits that are causing the >>> regression and working with upstream or fixing packaging issues to >>> solve the regression. >>> >>> The proposed improvement to get more CI promotions is, while we wait >>> for the fixes to be ready, to get the oldest commit that is currently >>> causing an issue from the current analysis and to try the previous >>> commits in reverse order to promote before the issues appear. With the >>> database of DLRN we have all the information to be able to implement >>> this backward tries and have more chances to promote. >>> >>> I'm in holidays next week but I wanted to bring this idea before been >>> completely off. WDYT? >> In the past we already applied this approach a couple of times for >> upstream issues that took longer that desired to be fixed (these cases >> lead us to move to the current u-c pinning for libraries). 
IMO, this >> has some drawbacks: >> - Breaks the consistency principle we try to enforce among packages in >> RDO repos. >> - Requires manual definition of the versions or commits for each package. > Well, if I understood Fr?d?ric's proposal correctly, that's not what we'd do, just go back to the list of previous commits and start trying backwards. They don't even have to be for the same package. > > Using an example, let's say we have the following commits, from newest to oldest: > > - Commit 2 to openstack-nova (consistent) > - Commit 2 to openstack-cinder (consistent) > - Commit 1 to openstack-cinder (consistent) > - Commit 1 to openstack-keystone (consistent, previous promoted commit) > > Just by luck, when the promotion job is started, it tests commit 2 to openstack-nova, and fails due to a new issue. If we can go back just one commit, to "commit 2 to openstack-cinder", and it is passes CI, we could still have a consistent repo promoted, just not the very last one at the time of running the CI job. > > Actually, this makes sense to me. By doing this, we could promote the last known good+consistent repo, while we fix the latest issue. > Yes that was the intent. Let's see how we could implement this. -- Fred - May the Source be with you From apevec at redhat.com Wed Aug 24 10:45:42 2016 From: apevec at redhat.com (Alan Pevec) Date: Wed, 24 Aug 2016 12:45:42 +0200 Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: <65134185-7fab-def1-6bab-2ca94ead284f@redhat.com> References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> <65134185-7fab-def1-6bab-2ca94ead284f@redhat.com> Message-ID: >> If we can go back just one commit, to "commit 2 to openstack-cinder", and it is passes CI, we could still have a consistent repo promoted, just not the very last one at the time of running the CI job. >> >> Actually, this makes sense to me. By doing this, we could promote the last known good+consistent repo, while we fix the latest issue. > > Yes that was the intent. Let's see how we could implement this. Hold on, haven't we previously discussed to switch to staged promotion i.e. puppet CI promotes RDO Trunk consistent first, then tripleo CI tries to promote that and finally RDO CI uses what tripleo CI promoted? Now we have all three starting from different random latest "consistent" (at the time corresponding promotion runs) and diverging. But in any case, first step is to have a database of "consistent" hashes recorded somewhere, it would be just input to the puppet CI promotion proposal script, right? Cheers, Alan From frederic.lepied at redhat.com Wed Aug 24 11:38:57 2016 From: frederic.lepied at redhat.com (=?UTF-8?B?RnLDqWTDqXJpYyBMZXBpZWQ=?=) Date: Wed, 24 Aug 2016 13:38:57 +0200 Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> <65134185-7fab-def1-6bab-2ca94ead284f@redhat.com> Message-ID: On 08/24/2016 12:45 PM, Alan Pevec wrote: >>> If we can go back just one commit, to "commit 2 to openstack-cinder", and it is passes CI, we could still have a consistent repo promoted, just not the very last one at the time of running the CI job. >>> >>> Actually, this makes sense to me. By doing this, we could promote the last known good+consistent repo, while we fix the latest issue. >> Yes that was the intent. Let's see how we could implement this. 
> Hold on, haven't we previously discussed to switch to staged promotion > i.e. puppet CI promotes RDO Trunk consistent first, then tripleo CI > tries to promote that and finally RDO CI uses what tripleo CI > promoted? Now we have all three starting from different random latest > "consistent" (at the time corresponding promotion runs) and diverging. > But in any case, first step is to have a database of "consistent" > hashes recorded somewhere, it would be just input to the puppet CI > promotion proposal script, right? The staged promotion is a different story that we need to continue. For me that's a parallel effort we need to pursue. But even with a staged promotion, we could still want to try on a failure a previous consistent repo that we could promote. Fred From whayutin at redhat.com Wed Aug 24 12:13:43 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 24 Aug 2016 08:13:43 -0400 Subject: [rdo-list] Proposition to get more CI promotions In-Reply-To: References: <27a52908-174c-7f5f-4bad-d8fba43d4019@redhat.com> <1497708497.3164050.1471343193511.JavaMail.zimbra@redhat.com> <65134185-7fab-def1-6bab-2ca94ead284f@redhat.com> Message-ID: On Wed, Aug 24, 2016 at 7:38 AM, Fr?d?ric Lepied wrote: > On 08/24/2016 12:45 PM, Alan Pevec wrote: > >>> If we can go back just one commit, to "commit 2 to openstack-cinder", > and it is passes CI, we could still have a consistent repo promoted, just > not the very last one at the time of running the CI job. > >>> > >>> Actually, this makes sense to me. By doing this, we could promote the > last known good+consistent repo, while we fix the latest issue. > >> Yes that was the intent. Let's see how we could implement this. > > Hold on, haven't we previously discussed to switch to staged promotion > > i.e. puppet CI promotes RDO Trunk consistent first, then tripleo CI > > tries to promote that and finally RDO CI uses what tripleo CI > > promoted? Now we have all three starting from different random latest > > "consistent" (at the time corresponding promotion runs) and diverging. > > But in any case, first step is to have a database of "consistent" > > hashes recorded somewhere, it would be just input to the puppet CI > > promotion proposal script, right? > > The staged promotion is a different story that we need to continue. For > me that's a parallel effort we need to pursue. But even with a staged > promotion, we could still want to try on a failure a previous consistent > repo that we could promote. > > Fred > You guys are discussing puppet CI tests as a step in the rdo promotion. Shouldn't this test be pushed out of RDO and into upstream openstack as a prerequisite for the tripleo-pin? Why duplicate this test at the RDO level? The upstream periodic and RDO CI tests are hitting the same delorean hash. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bderzhavets at hotmail.com Wed Aug 24 12:21:01 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 24 Aug 2016 12:21:01 +0000 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> , Message-ID: ________________________________ From: Raoul Scarazzini Sent: Wednesday, August 24, 2016 11:34 AM To: Boris Derzhavets; Wesley Hayutin; Attila Darazs Cc: rdo-list Subject: Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI On 24/08/2016 09:54, Boris Derzhavets wrote: [...] > - Which version are you using? The latest? So are you installing from > master? >> > No idea. My actions :- > > $ rm -fr .ansible .quickstart tripleo-quickstart > $ git clone https://github.com/openstack/tripleo-quickstart [https://avatars0.githubusercontent.com/u/324574?v=3&s=400] GitHub - openstack/tripleo-quickstart: Ansible based ... github.com tripleo-quickstart - Ansible based project for setting up TripleO virtual environments > $ cd tripleo* > $ ssh root@$VIRTHOST uname -a > $ vi /config/general_config/ha.yml ==> to tune && save > $ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST OK, it can be useful to see what's inside ha.yml, but the command you mentioned does not specify a branc, so it uses "master", so Newton, so you're using a totally different setup from mitaka, especially from a cluster perspective. ======================================================================== Addressing your requests ======================================================================== [boris at fedora24wks tripleo-quickstart]$ cat ./config/general_config/ha.yml # Deploy an HA openstack environment. # # This will require (6144 * 4) == approx. 24GB for the overcloud # nodes, plus another 8GB for the undercloud, for a total of around # 32GB. control_memory: 6144 compute_memory: 6144 default_vcpu: 2 undercloud_memory: 8192 # Giving the undercloud additional CPUs can greatly improve heat's # performance (and result in a shorter deploy time). undercloud_vcpu: 2 # Create three controller nodes and one compute node. overcloud_nodes: - name: control_0 flavor: control - name: control_1 flavor: control - name: control_2 flavor: control - name: compute_0 flavor: compute - name: compute_1 flavor: compute # We don't need introspection in a virtual environment (because we are # creating all the "hardware" we really know the necessary # information). step_introspect: true # Tell tripleo about our environment. 
network_isolation: true extra_args: >- --control-scale 3 --compute-scale 2 --neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server pool.ntp.org test_tempest: false test_ping: true enable_pacemaker: true $ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST EXIT ( undrecloud has been built ) ################################## Virtual Environment Setup Complete ################################## Access the undercloud by: ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud There are scripts in the home directory to continue the deploy: overcloud-deploy.sh will deploy the overcloud overcloud-deploy-post.sh will do any post-deploy configuration overcloud-validate.sh will run post-deploy validation Alternatively, you can ignore these scripts and follow the upstream docs, starting from the overcloud deploy section: http://ow.ly/1Vc1301iBlb ################################## Virtual Environment Setup Complete ################################## [boris at fedora24wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud Warning: Permanently added '192.168.1.74' (ECDSA) to the list of known hosts. Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts. Last login: Wed Aug 24 12:13:16 2016 from gateway [stack at undercloud ~]$ sudo su - [root at undercloud stack]# cd /etc/yum.repos.d [root at undercloud yum.repos.d]# ls -l total 40 -rw-r--r--. 1 root root 1664 Dec 9 2015 CentOS-Base.repo -rw-r--r--. 1 root root 1057 Aug 24 02:58 CentOS-Ceph-Hammer.repo -rw-r--r--. 1 root root 1309 Dec 9 2015 CentOS-CR.repo -rw-r--r--. 1 root root 649 Dec 9 2015 CentOS-Debuginfo.repo -rw-r--r--. 1 root root 290 Dec 9 2015 CentOS-fasttrack.repo -rw-r--r--. 1 root root 630 Dec 9 2015 CentOS-Media.repo -rw-r--r--. 1 root root 1331 Dec 9 2015 CentOS-Sources.repo -rw-r--r--. 1 root root 1952 Dec 9 2015 CentOS-Vault.repo -rw-r--r--. 1 root root 162 Aug 24 02:58 delorean-deps.repo -rw-r--r--. 1 root root 220 Aug 24 02:58 delorean.repo [root at undercloud yum.repos.d]# cat delorean-deps.repo [delorean-mitaka-testing] name=dlrn-mitaka-testing baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-mitaka/ enabled=1 gpgcheck=0 priority=2 [root at undercloud yum.repos.d]# cat delorean.repo [delorean] name=delorean-openstack-rally-3909299306233247d547bad265a1adb78adfb3d4 baseurl=http://trunk.rdoproject.org/centos7-mitaka/39/09/3909299306233247d547bad265a1adb78adfb3d4_4e6dfa3c enabled=1 gpgcheck=0 Thanks Boris. =================================================================================== > When I get prompt to login to undercloud > /etc/yum.repos.d/ contains delorean.repo ( on fresh undercloud QS build ) > files pointing to Mitaka Delorean trunks. > Just check for yourself. Where can I check? On the link you posted I cannot find these files from the quickstart installation... > - Are the two versions the same while using quickstart and > instack-virt-setup? > No the versions are different ( points in delorean.repos differ ) This could have a great impact while comparing results. > - Do we have some logs to look at? > Sorry, it was not my concern. > My primary target was identify sequence of steps > which brings PCS Cluster to proper state with 100% warranty. This is strictly related on all the stuff I've mentioned above. Things must work without workarounds like cleanup or similar things. If something breaks and we're using correct setup steps then we're in front of a bug. But to analyze a bug we need logs. 
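To take the release guesswork out of a comparison like this, the release can be pinned when invoking quickstart, and the undercloud can then be checked to see which trunk its repos actually track. This is a rough sketch only, assuming the copy of quickstart.sh in use supports the --release option referred to later in this thread:

$ bash quickstart.sh --release mitaka --config ./config/general_config/ha.yml $VIRTHOST
$ ssh -F ~/.quickstart/ssh.config.ansible undercloud
$ grep baseurl /etc/yum.repos.d/delorean*.repo   # should point at the centos7-mitaka trunk, not centos7-master

That removes one variable when comparing a quickstart build against an instack-virt-setup one.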
> Again, I think here we're hitting a version issure. > I tested QuickStart ( obtained just at the same time on different box) > delorean.repo files in instack-virt-setup build of VENV > It doesn't change much final overcloud cluster behavior. > I am aware of this link is outdated ( due to Wesley Hayutin respond ) > https://bluejeans.com/s/a5ua/ > However , my current understanding of QuickStart CI ( not TripleO CI ) > is based on this video > I believe the core reason is the way how Tripleo QS assembles undercloud > for VENV build. > It makes HA Controllers cluster pretty stable and always recoverable > (VENV case) The VENV should not matter at all in terms of cluster configuration on the overcloud. > It is different from official way at least in meantime. > I've already wrote TripleO QS nowhere issues :- > $ openstack overcloud images build --all ===> that is supposed to be > removed > as far as I understand TripleO Core team intend. > Boris. As I said there's a lot of mix inside the repo and thins makes the debug really difficult. -- Raoul Scarazzini rasca at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Aug 24 13:29:06 2016 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 24 Aug 2016 09:29:06 -0400 Subject: [rdo-list] CentOS Cloud SIG In-Reply-To: <1197340603.4774735.1471944869456.JavaMail.zimbra@redhat.com> References: <5802b7bc-61f7-e67c-3138-e1b01b9da315@redhat.com> <1197340603.4774735.1471944869456.JavaMail.zimbra@redhat.com> Message-ID: <38994d3d-fc69-47a9-dff9-da28e71039a0@redhat.com> On 08/23/2016 05:34 AM, Javier Pena wrote: > > > ----- Original Message ----- >> 2016-08-18 19:01 GMT+02:00 Rich Bowen : >>> To quote from yesterday's RDO IRC meeting: >>> >>> We have basically not had a CentOS Cloud SIG meeting for 2 months. >>> Part of this is because of travel/summer/whatever. I also have a >>> standing conflict with that meeting, which I'm trying to fix. >>> But more than that, I think it's because the Cloud SIG is just a rehash >>> of this meeting, because only RDO is participating. >>> >>> On the one hand, we don't really accomplish much in that meeting when we >>> do have it. On the other hand, it's a way to get RDO in front of another >>> audience. >>> >>> So, I guess I'm asking if people still think it's valuable, and will try >>> to carve out a little time for this each week. Or as was suggested in >>> the meeting, should we attempt to merge the two meetings in some >>> meaningful way, until such time as other Cloud Software communities see >>> a benefit in participating? >>> >> >> Currently, Cloud SIG is de-facto maintained by RDO folks, CentOS Core >> team + Kushal. >> I agree that re-hashing meetings is wasting valuable time for many >> people, so I agree with your proposal. >> But we should continue looking at collaborating with other Cloud >> communities (like NFV) >> >> One twist: we should copy centos devel list in our meeting minutes, if >> we merge the meetings. > > I'm also in favor of merging meetings, given the attendants and topics are mostly the same. Ok, thanks. I'll reach out to everyone that's ever attended one of the CentOS Cloud SIG meetings and let them know personally - although they're all here too. 
--Rich -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From rasca at redhat.com Wed Aug 24 14:25:42 2016 From: rasca at redhat.com (Raoul Scarazzini) Date: Wed, 24 Aug 2016 16:25:42 +0200 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> Message-ID: On 24/08/2016 14:21, Boris Derzhavets wrote: [...] > ======================================================================== > Addressing your requests > ======================================================================== > [boris at fedora24wks tripleo-quickstart]$ cat ./config/general_config/ha.yml [...] > $ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST > EXIT ( undrecloud has been built ) Not only. At this point everything should be built. [...] > [root at undercloud stack]# cd /etc/yum.repos.d > [root at undercloud yum.repos.d]# cat delorean-deps.repo > [delorean-mitaka-testing] > name=dlrn-mitaka-testing > baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-mitaka/ > enabled=1 > gpgcheck=0 > priority=2 > [root at undercloud yum.repos.d]# cat delorean.repo > [delorean] > name=delorean-openstack-rally-3909299306233247d547bad265a1adb78adfb3d4 > baseurl=http://trunk.rdoproject.org/centos7-mitaka/39/09/3909299306233247d547bad265a1adb78adfb3d4_4e6dfa3c > enabled=1 > gpgcheck=0 This sounds really strange, since you should get master repo (so Newton, not Mitaka) while using quickstart without specifying --release. How long ago did you downloaded the quickstart git repo? Can you redeploy everything from scratch with the latest pull from quickstart's git repo? -- Raoul Scarazzini rasca at redhat.com From bderzhavets at hotmail.com Wed Aug 24 15:48:48 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 24 Aug 2016 15:48:48 +0000 Subject: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI In-Reply-To: References: <0c25e0c3-5d51-05fb-f628-173010314285@redhat.com> , Message-ID: ________________________________ From: Raoul Scarazzini Sent: Wednesday, August 24, 2016 5:25 PM To: Boris Derzhavets; Wesley Hayutin; Attila Darazs Cc: rdo-list Subject: Re: [rdo-list] Instack-virt-setup vs TripleO QuickStart in regards of managing HA PCS/Corosync cluster via pcs CLI On 24/08/2016 14:21, Boris Derzhavets wrote: [...] > ======================================================================== > Addressing your requests > ======================================================================== > [boris at fedora24wks tripleo-quickstart]$ cat ./config/general_config/ha.yml [...] > $ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST > EXIT ( undrecloud has been built ) Not only. At this point everything should be built. > Overcloud is not built at this moment. I log into undercloud and run overcloud-deploy.sh > [...] 
> [root at undercloud stack]# cd /etc/yum.repos.d > [root at undercloud yum.repos.d]# cat delorean-deps.repo > [delorean-mitaka-testing] > name=dlrn-mitaka-testing > baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-mitaka/ > enabled=1 > gpgcheck=0 > priority=2 > [root at undercloud yum.repos.d]# cat delorean.repo > [delorean] > name=delorean-openstack-rally-3909299306233247d547bad265a1adb78adfb3d4 > baseurl=http://trunk.rdoproject.org/centos7-mitaka/39/09/3909299306233247d547bad265a1adb78adfb3d4_4e6dfa3c > enabled=1 > gpgcheck=0 This sounds really strange, since you should get master repo (so Newton, not Mitaka) while using quickstart without specifying --release. How long ago did you downloaded the quickstart git repo? > 5 min > Can you redeploy everything from scratch with the latest pull from quickstart's git repo? > That is what I have been doing for you since 10 a.m. this morning (MSK) Everything has been built from scratch during current business day. Check time && date of `ls -l` output _undercloud.qcow2 takes 3.5 hr for download 2.9 GB. What causes all deployment procedure to take at least 5-6 hr. Usually via my fiber channel I get 5 GB in 15 min from any location in US or Europe. Site http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/ is extremely slow . This issue has pretty long story. $ git clone https://github.com/openstack/tripleo-quickstart takes about 5 min. Rich Bowen stated a while ago that default release ( no spec on command line) works with Mitaka. Please, resolve this issue , because it is obviously not on my side. Boris. > -- Raoul Scarazzini rasca at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Wed Aug 24 16:17:02 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 24 Aug 2016 18:17:02 +0200 Subject: [rdo-list] [Cloud SIG][RDO] RDO meeting minutes - 2016-08-24 Message-ID: ============================== #rdo: RDO meeting - 2016-08-24 ============================== Meeting started by number80 at 15:00:18 UTC. The full logs are available at https://meetbot.fedoraproject.org/rdo/2016-08-24/rdo_meeting_-_2016-08-24.2016-08-24-15.00.log.html . 
Meeting summary --------------- * roll call (number80, 15:00:26) * making qemu-kvm-ev (qemu-kvm >=2.3.0) a hard requirement in openstack-nova (number80, 15:02:21) * AGREED: Enable virt SIG repo in rdo-release for Newton+ (+1: 12, 0: 0, -1: 0) (number80, 15:25:19) * ACTION: number80 send a review to update rdo-release newton+ (number80, 15:26:07) * ACTION: jpena to update dlrn-deps.repo in dlrn newton workers to include virt SIG repo (jpena, 15:26:31) * python-heat-agent review (number80, 15:28:19) * LINK: https://review.rdoproject.org/r/#/c/1909 (number80, 15:28:25) * LINK: https://www.redhat.com/archives/rdo-list/2016-August/msg00189.html (number80, 15:29:31) * AGREED: merge stevebaker python-heat-agent change (+1: 6, 0: 1, -1: 1) (number80, 15:39:50) * ACTION: number80 work with stevebaker to merge his changes (number80, 15:40:03) * rdopkg new release (number80, 15:40:19) * ACTION: jruzicka to release a new version of rdopkg (number80, 15:41:58) * ACTION: jruzicka to investigate rdopkg auto-releases using rpmfactory (jruzicka, 15:43:41) * open discussion (number80, 15:45:50) * ACTION: jruzicka to provide nicer interface to rdoinfo in coordination with dmsimard (jruzicka, 15:49:53) * next week chair (number80, 15:52:24) * chandankumar is chairing next week meeting (number80, 15:52:59) * openstack-sahara-tests update (number80, 15:53:36) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1318765 (number80, 15:53:54) * tosky is working on tracking jar files origin and removing the one with questionable licensing (number80, 15:54:31) Meeting ended at 15:56:00 UTC. Action Items ------------ * number80 send a review to update rdo-release newton+ * jpena to update dlrn-deps.repo in dlrn newton workers to include virt SIG repo * number80 work with stevebaker to merge his changes * jruzicka to release a new version of rdopkg * jruzicka to investigate rdopkg auto-releases using rpmfactory * jruzicka to provide nicer interface to rdoinfo in coordination with dmsimard Action Items, by person ----------------------- * dmsimard * jruzicka to provide nicer interface to rdoinfo in coordination with dmsimard * jpena * jpena to update dlrn-deps.repo in dlrn newton workers to include virt SIG repo * jruzicka * jruzicka to release a new version of rdopkg * jruzicka to investigate rdopkg auto-releases using rpmfactory * jruzicka to provide nicer interface to rdoinfo in coordination with dmsimard * number80 * number80 send a review to update rdo-release newton+ * number80 work with stevebaker to merge his changes * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * number80 (111) * dmsimard (41) * jpena (26) * jruzicka (22) * tosky (9) * zodbot (9) * jjoyce (8) * chandankumar (7) * jschlueter (6) * weshay (4) * imcsk8 (4) * Duck (3) * openstack (3) * dmellado (2) * adarazs (2) * hrybacki (2) * alphacc (1) * mvc (1) * hrybacki|appt (1) * rdogerrit (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From jruzicka at redhat.com Wed Aug 24 17:25:35 2016 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Wed, 24 Aug 2016 19:25:35 +0200 Subject: [rdo-list] cherry-picking with rdopkg In-Reply-To: References: Message-ID: <70f531ca-2f90-57a8-70b9-0d135a31cea1@redhat.com> On 23.8.2016 00:12, Ken Dreyer wrote: > Hi Jakub, > > In the Ceph project we often have to cherry-pick GitHub Pull Requests > downstream into RH's Ceph Storage product. In OpenStack it's mostly gerrit @ review.openstack.org. 
> Now that rdopkg supports pulling RHBZ numbers from the commits, we can > store the BZ data in the -patches branches, and rdopkg does the rest. > > To cherry-pick GitHub PRs for particular BZs, I've written the following app: > > https://github.com/ktdreyer/downstream-cherry-picker > > This ensures that the cherry-picks are done correctly each time, > enforces the RHBZ numbers, etc. Nice, automation is the way. > It would be cool to build this as a feature into rdopkg. It could be > something like: > > rdopkg cherry-pick https://github.com/ceph/ceph/pull/111 12345 Looks good. Additional sources could be supported without breaking interface a la rdopkg cherry-pick https://review.openstack.org/#/c/347447 12345 rdopkg cherry-pick https://github.com/foo/bar/commit/deadbee 12345 > And it would cherry-pick GitHub PR #111 into the -patches branch, > adding "Resolves: rhbz#12345" to each cherry-pick's log. > > Do you guys in RDO often cherry-pick large branches (like 8+ commits) > downstream at a time? If so, we could make it auto-detect GitHub vs > Gerrit (although I'm not sure what URL you'd use for a branch in > Gerrit)? Not me, but possibly others? Nonetheless, I believe auto adding RHBZ# to commit msg and being able to supply review/commit URL instead of hash would be nice quality of life improvement. > What do you think? It would be easier to maintain such functionality directly in rdopkg as opposed to adding another requirement and interface projects but modularity is nice, of course ;) It should be fairly easy to add to rdopkg if we decide to. I'd like to hear from people if they're interested in this (and the gerrit support). Cheers, Jakub From dms at redhat.com Wed Aug 24 19:14:31 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 24 Aug 2016 15:14:31 -0400 Subject: [rdo-list] [RDO] [TripleO] [Packstack] qemu-kvm >= 2.3 now explicitely required for OpenStack Nova >= Newton Message-ID: Hi, Until now, the openstack-nova package in RDO had a lax requirement on qemu-kvm which potentially meant an older version of qemu-kvm (or qemu-kvm-rhel) would be installed [1]. We have agreed at the RDO meeting this morning to go ahead and formally require qemu-kvm >= 2.3 from Newton onwards. This means that the base (~v1.5.3) qemu-kvm from CentOS (or qemu-kvm-rhel in RHEL) will no longer satisfy that requirement. The CentOS virt SIG maintains qemu-kvm-ev (>=2.3) which obsoletes qemu-kvm and will be preferred if it is available. This newer version is largely preferred to the one available by default in CentOS and RHEL due to improvements, bug fixes and features that are expected to be there by Nova. If you consume RDO's master trunk repositories (Delorean Newton/Delorean Master), this extra repository is already enabled in delorean-deps.repo [3] and you do not need to do anything. qemu-kvm-ev will automatically be installed in place of qemu-kvm. If you consume RDO from stable releases, the extra repository will automatically be enabled in the "centos-release-openstack-newton" package that will eventually be available. The repository will also be bundled in the "rdo-release.rpm" package [4] starting at Newton. Please let us know if you have any questions or notice any issues. Thanks ! 
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1367696 [2]: https://wiki.centos.org/SpecialInterestGroup/Virtualization [3]: https://trunk.rdoproject.org/centos7-master/delorean-deps.repo [4]: https://github.com/redhat-openstack/rdo-release/pull/7 David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From dms at redhat.com Wed Aug 24 21:25:32 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 24 Aug 2016 17:25:32 -0400 Subject: [rdo-list] [RDO] [TripleO] [Packstack] qemu-kvm >= 2.3 now explicitely required for OpenStack Nova >= Newton In-Reply-To: References: Message-ID: This may have had unintended consequences when testing RDO on RHEL [1]. This is because the extra repository URL is: http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/ $releasever is expanded to "7Server", not 7. We've temporarily replaced $releasever by 7 until we figure out the right way to go about this. [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1369944 David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Aug 24, 2016 at 3:14 PM, David Moreau Simard wrote: > Hi, > > Until now, the openstack-nova package in RDO had a lax requirement on > qemu-kvm which potentially meant an older version of qemu-kvm (or > qemu-kvm-rhel) would be installed [1]. > > We have agreed at the RDO meeting this morning to go ahead and > formally require qemu-kvm >= 2.3 from Newton onwards. > This means that the base (~v1.5.3) qemu-kvm from CentOS (or > qemu-kvm-rhel in RHEL) will no longer satisfy that requirement. > > The CentOS virt SIG maintains qemu-kvm-ev (>=2.3) which obsoletes > qemu-kvm and will be preferred if it is available. > This newer version is largely preferred to the one available by > default in CentOS and RHEL due to improvements, bug fixes and features > that are expected to be there by Nova. > > If you consume RDO's master trunk repositories (Delorean > Newton/Delorean Master), this extra repository is already enabled in > delorean-deps.repo [3] and you do not need to do anything. > qemu-kvm-ev will automatically be installed in place of qemu-kvm. > > If you consume RDO from stable releases, the extra repository will > automatically be enabled in the "centos-release-openstack-newton" > package that will eventually be available. > The repository will also be bundled in the "rdo-release.rpm" package > [4] starting at Newton. > > Please let us know if you have any questions or notice any issues. > > Thanks ! > > [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1367696 > [2]: https://wiki.centos.org/SpecialInterestGroup/Virtualization > [3]: https://trunk.rdoproject.org/centos7-master/delorean-deps.repo > [4]: https://github.com/redhat-openstack/rdo-release/pull/7 > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] From lmadsen at redhat.com Thu Aug 25 02:10:54 2016 From: lmadsen at redhat.com (Leif Madsen) Date: Wed, 24 Aug 2016 22:10:54 -0400 Subject: [rdo-list] cherry-picking with rdopkg In-Reply-To: <70f531ca-2f90-57a8-70b9-0d135a31cea1@redhat.com> References: <70f531ca-2f90-57a8-70b9-0d135a31cea1@redhat.com> Message-ID: On Wed, Aug 24, 2016 at 1:25 PM, Jakub Ruzicka wrote: > On 23.8.2016 00:12, Ken Dreyer wrote: > > What do you think? 
> > It would be easier to maintain such functionality directly in rdopkg as > opposed to adding another requirement and interface projects but > modularity is nice, of course ;) It should be fairly easy to add to > rdopkg if we decide to. > > I'd like to hear from people if they're interested in this (and the > gerrit support). > I don't have a particular need this second, but having been responsible for builds within the NFVPE team, and the requests I'm seeing from partners, this sounds like something I'd really appreciate having. Jakub, If you think this is a useful feature and is well implemented (and the syntax looks sane to me), I would love (based on future-Leif feedback) having this available to me. Thanks! Leif. -- Leif Madsen | Partner Engineer - NFV & CI NFV Partner Engineering Red Hat GPG: (D670F846) BEE0 336E 5406 42BA 6194 6831 B38A 291E D670 F846 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Thu Aug 25 03:53:35 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Thu, 25 Aug 2016 13:53:35 +1000 Subject: [rdo-list] Python-shade in RDO Message-ID: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> Hi, A while ago there was a discussion around python-shade library and getting it into RDO. [1] It's been a few months since then, and shade is now successfully packaged and shipped as part of Fedora [2] which is great, but now I wanted to restart the conversation about how to make it available to users of CentOS/RDO. While it was suggested to get it into EPEL, I don't feel that is the best course of action simply because of the restrictive update policies of EPEL not allowing us to update it as frequently as needed, and also because python-shade depends on the python openstack clients, which are not a part of EPEL (as my understanding). The best place for us to make this package available is in RDO itself, as shade is an official Openstack big tent project, and RDOs aims to be a distribution providing packages for Openstack projects. So I just wanted to confirm with everyone and get some feedback, but unless there is any major objections, I was going to start looking at the process to get a new package into RDO, which I assume means putting a review request in to the project https://github.com/rdo-packages (though I assume a new repo needs to be created for it first). Regards, Graeme [1] https://www.redhat.com/archives/rdo-list/2015-November/thread.html [2] http://koji.fedoraproject.org/koji/packageinfo?packageID=21707 -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From bderzhavets at hotmail.com Thu Aug 25 04:25:51 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 25 Aug 2016 04:25:51 +0000 Subject: [rdo-list] [RDO] [TripleO] [Packstack] qemu-kvm >= 2.3 now explicitely required for OpenStack Nova >= Newton In-Reply-To: References: Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of David Moreau Simard Sent: Wednesday, August 24, 2016 10:14 PM To: rdo-list; OpenStack Development Mailing List (not for usage questions) Subject: [rdo-list] [RDO] [TripleO] [Packstack] qemu-kvm >= 2.3 now explicitely required for OpenStack Nova >= Newton Hi, Until now, the openstack-nova package in RDO had a lax requirement on qemu-kvm which potentially meant an older version of qemu-kvm (or qemu-kvm-rhel) would be installed [1]. 
We have agreed at the RDO meeting this morning to go ahead and formally require qemu-kvm >= 2.3 from Newton onwards. This means that the base (~v1.5.3) qemu-kvm from CentOS (or qemu-kvm-rhel in RHEL) will no longer satisfy that requirement. The CentOS virt SIG maintains qemu-kvm-ev (>=2.3) which obsoletes qemu-kvm and will be preferred if it is available. This newer version is largely preferred to the one available by default in CentOS and RHEL due to improvements, bug fixes and features that are expected to be there by Nova. If you consume RDO's master trunk repositories (Delorean Newton/Delorean Master), this extra repository is already enabled in delorean-deps.repo [3] and you do not need to do anything. qemu-kvm-ev will automatically be installed in place of qemu-kvm. If you consume RDO from stable releases, the extra repository will automatically be enabled in the "centos-release-openstack-newton" package that will eventually be available. The repository will also be bundled in the "rdo-release.rpm" package [4] starting at Newton. Please let us know if you have any questions or notice any issues. > I failed with to deploy overcloud via TripleO Quickstart having qemu-kvm-ev-2.3 set up on VIRTHOST. Please , keep TripleO QS team aware of this change. Thank you. Boris. > Thanks ! [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1367696 [2]: https://wiki.centos.org/SpecialInterestGroup/Virtualization [3]: https://trunk.rdoproject.org/centos7-master/delorean-deps.repo [4]: https://github.com/redhat-openstack/rdo-release/pull/7 David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Thu Aug 25 06:33:05 2016 From: mrunge at redhat.com (Matthias Runge) Date: Thu, 25 Aug 2016 08:33:05 +0200 Subject: [rdo-list] [RDO] [TripleO] [Packstack] qemu-kvm >= 2.3 now explicitely required for OpenStack Nova >= Newton In-Reply-To: References: Message-ID: <94b42b09-861a-53db-a0ce-fb8ace192fca@redhat.com> On 25/08/16 06:25, Boris Derzhavets wrote: > > If you consume RDO from stable releases, the extra repository will > automatically be enabled in the "centos-release-openstack-newton" > package that will eventually be available. > The repository will also be bundled in the "rdo-release.rpm" package > [4] starting at Newton. > > Please let us know if you have any questions or notice any issues. >> > I failed with to deploy overcloud via TripleO Quickstart > having qemu-kvm-ev-2.3 set up on VIRTHOST. > Please , keep TripleO QS team aware of this change. > > Thank you. > Boris. >> > Boris, thank you for your feedback. Unfortunately, it is extraordinary hard to track in your emails what is your addition, and what was cited from the earlier mail. Would it be possible for you to make it more clear? I.e. no inline edits? Thank you. 
-- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From bderzhavets at hotmail.com Thu Aug 25 09:12:12 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 25 Aug 2016 09:12:12 +0000 Subject: [rdo-list] [RDO] [TripleO] [Packstack] qemu-kvm >= 2.3 now explicitely required for OpenStack Nova >= Newton In-Reply-To: <94b42b09-861a-53db-a0ce-fb8ace192fca@redhat.com> References: , <94b42b09-861a-53db-a0ce-fb8ace192fca@redhat.com> Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Matthias Runge Sent: Thursday, August 25, 2016 9:33 AM To: rdo-list at redhat.com Subject: Re: [rdo-list] [RDO] [TripleO] [Packstack] qemu-kvm >= 2.3 now explicitely required for OpenStack Nova >= Newton On 25/08/16 06:25, Boris Derzhavets wrote: > > If you consume RDO from stable releases, the extra repository will > automatically be enabled in the "centos-release-openstack-newton" > package that will eventually be available. > The repository will also be bundled in the "rdo-release.rpm" package > [4] starting at Newton. > > Please let us know if you have any questions or notice any issues. >> > I failed with to deploy overcloud via TripleO Quickstart > having qemu-kvm-ev-2.3 set up on VIRTHOST. > Please , keep TripleO QS team aware of this change. > > Thank you. > Boris. >> > Boris, thank you for your feedback. Unfortunately, it is extraordinary hard to track in your emails what is your addition, and what was cited from the earlier mail. Would it be possible for you to make it more clear? I.e. no inline edits? Thank you. First see https://www.redhat.com/archives/rdo-list/2016-August/msg00210.html ============================================================================================= David Moreau Simard wrote in message above :- Hi, Until now, the openstack-nova package in RDO had a lax requirement on qemu-kvm which potentially meant an older version of qemu-kvm (or qemu-kvm-rhel) would be installed [1]. We have agreed at the RDO meeting this morning to go ahead and formally require qemu-kvm >= 2.3 from Newton onwards. This means that the base (~v1.5.3) qemu-kvm from CentOS (or qemu-kvm-rhel in RHEL) will no longer satisfy that requirement. The CentOS virt SIG maintains qemu-kvm-ev (>=2.3) which obsoletes qemu-kvm and will be preferred if it is available. This newer version is largely preferred to the one available by default in CentOS and RHEL due to improvements, bug fixes and features that are expected to be there by Nova. If you consume RDO's master trunk repositories (Delorean Newton/Delorean Master), this extra repository is already enabled in delorean-deps.repo [3] and you do not need to do anything. qemu-kvm-ev will automatically be installed in place of qemu-kvm. If you consume RDO from stable releases, the extra repository will automatically be enabled in the "centos-release-openstack-newton" package that will eventually be available. The repository will also be bundled in the "rdo-release.rpm" package [4] starting at Newton. Please let us know if you have any questions or notice any issues. Thanks ! 
================================================================================================= My notice :- TripleO QuickStart fails to work with VIRTHOST running qemu-kvm-ev-2.3 and does work after downgrade ( with all dependencies brought back ) to standard qemu-kvm -1.5 for CentOS 7.2. Please, make TripleO Quicstart Team aware of oncoming changes. I wrote it mostly for attention of John Trowbridge and Lars Kellogg Stedman. Does it make things more clear ? Boris -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, [https://www.redhat.com/profiles/rh/themes/redhatdotcom/img/Red_Hat_RGB.jpg] Red Hat | The world's open source leader www.de.redhat.com Das ver?ndert alles! (wenn es um das Entwickeln und Bereitstellen von Anwendungen geht) Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Thu Aug 25 09:15:25 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 25 Aug 2016 11:15:25 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> Message-ID: 2016-08-25 5:53 GMT+02:00 Graeme Gillies : > Hi, > > A while ago there was a discussion around python-shade library and > getting it into RDO. [1] > > It's been a few months since then, and shade is now successfully > packaged and shipped as part of Fedora [2] which is great, but now I > wanted to restart the conversation about how to make it available to > users of CentOS/RDO. > > While it was suggested to get it into EPEL, I don't feel that is the > best course of action simply because of the restrictive update policies > of EPEL not allowing us to update it as frequently as needed, and also > because python-shade depends on the python openstack clients, which are > not a part of EPEL (as my understanding). > *nods* > The best place for us to make this package available is in RDO itself, > as shade is an official Openstack big tent project, and RDOs aims to be > a distribution providing packages for Openstack projects. > > So I just wanted to confirm with everyone and get some feedback, but > unless there is any major objections, I was going to start looking at > the process to get a new package into RDO, which I assume means putting > a review request in to the project https://github.com/rdo-packages > (though I assume a new repo needs to be created for it first). > Likely to be rejected as shade lives in openstack-infra namespace. Well, we've had requests to provide a separate repository consumable for openstack infra but until now, the discussion stalled due to lack of people driving it. We could create an rdo-extras EL7 repository that would contain: * latest stable clients + SDK * minor utilities that are only available in EPEL (or possibly a separate repo if it grows too much) Regards, H. 
> Regards, > > Graeme > > [1] https://www.redhat.com/archives/rdo-list/2015-November/thread.html > [2] http://koji.fedoraproject.org/koji/packageinfo?packageID=21707 > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mscherer at redhat.com Thu Aug 25 09:17:38 2016 From: mscherer at redhat.com (Michael Scherer) Date: Thu, 25 Aug 2016 11:17:38 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> Message-ID: <1472116658.4783.421.camel@redhat.com> Le jeudi 25 ao?t 2016 ? 13:53 +1000, Graeme Gillies a ?crit : > Hi, > > A while ago there was a discussion around python-shade library and > getting it into RDO. [1] > > It's been a few months since then, and shade is now successfully > packaged and shipped as part of Fedora [2] which is great, but now I > wanted to restart the conversation about how to make it available to > users of CentOS/RDO. > > While it was suggested to get it into EPEL, I don't feel that is the > best course of action simply because of the restrictive update policies > of EPEL not allowing us to update it as frequently as needed, and also > because python-shade depends on the python openstack clients, which are > not a part of EPEL (as my understanding). > > The best place for us to make this package available is in RDO itself, > as shade is an official Openstack big tent project, and RDOs aims to be > a distribution providing packages for Openstack projects. > > So I just wanted to confirm with everyone and get some feedback, but > unless there is any major objections, I was going to start looking at > the process to get a new package into RDO, which I assume means putting > a review request in to the project https://github.com/rdo-packages > (though I assume a new repo needs to be created for it first). As shade is used by ansible for orchestration, having to hunt various repositories around the centos ecosystem, all being uncoordinated and upgrading at different time is gonna be likely annoying. That's kinda the exact things people complained years ago in the rpm ecosystem, and we are just doing it again with SIGs. As a smooth ops, I would prefer to have EPEL as a option for shade[1]. [1] yes, this pun is awful, no, I am not sending that email for the sake of making that reference -- Michael Scherer Sysadmin, Community Infrastructure and Platform, OSAS -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From redf4lcon at gmail.com Thu Aug 25 10:32:09 2016 From: redf4lcon at gmail.com (Nicolas G) Date: Thu, 25 Aug 2016 12:32:09 +0200 Subject: [rdo-list] [RDO] Networking issue on Liberty deployed VMs after Mitaka upgrade Message-ID: Hello, I migrated a working Centos7/RDO 6 nodes cluster from Liberty to Mitaka. Thanks to coffee and Openstack/RDO migration guides, the cluster migration went fine. I noticed my Liberty deployed VMs are now running without network. My freshly Mitaka deployed VMs are working. I can't find any network related errors in nova/neutron log. Debug mode does not show anything strange. 
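One way to narrow this down (a debugging sketch, not a known fix; the UUIDs below are placeholders) is to check whether Neutron still has bound ports for one of the affected instances:

$ nova interface-list <instance-uuid>   # ports Nova believes are attached
$ neutron port-show <port-id>           # status and binding details for each port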
The only error is in the network-faulty VM console : http://paste.openstack.org/show/563346/. Mitaka-deployed VMs are fine : http://paste.openstack.org/show/563350/. Did I missed a VM related migration step ? Also, network informations is not displayed/available for my Liberty-deployed VMs : nova show 266d67e4-b9d5-48b0-8057-f1256b4f0a30 +--------------------------------------+----------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | AUTO | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | node06.next.foederis.local | | OS-EXT-SRV-ATTR:hypervisor_hostname | node06.next.foederis.local | | OS-EXT-SRV-ATTR:instance_name | instance-00000030 | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2016-04-18T09:03:54.000000 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2016-04-15T07:28:03Z | | flavor | cpu.burn2 (9c421deb-eb48-46a3-81db-94a868c1a7bd) | | hostId | 3680f9b5036daf9b454f46a3bdbbea3d87bada7675e27bd6c9f2c0ae | | id | 266d67e4-b9d5-48b0-8057-f1256b4f0a30 | | image | debian8.4.0 (1096e296-5bff-40cf-b13e-e556169df435) | | key_name | nicolasPC | | metadata | {} | | name | burn05 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id | 5fcdb004f67b4d7ea5a2467e13efe39d | | updated | 2016-08-25T10:03:58Z | | user_id | 6404fab89b914a2fa6fda45270291f5b | +--------------------------------------+----------------------------------------------------------+ Mitaka-deployed VMs : nova show 2cc86ee9-9214-494f-a6b3-12d5348a8211 +--------------------------------------+----------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | AUTO | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | node06.next.foederis.local | | OS-EXT-SRV-ATTR:hypervisor_hostname | node06.next.foederis.local | | OS-EXT-SRV-ATTR:instance_name | instance-00000057 | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2016-08-25T10:00:07.000000 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2016-08-25T09:59:33Z | | flavor | cpu.burn2 (9c421deb-eb48-46a3-81db-94a868c1a7bd) | | hostId | 3680f9b5036daf9b454f46a3bdbbea3d87bada7675e27bd6c9f2c0ae | | id | 2cc86ee9-9214-494f-a6b3-12d5348a8211 | | image | debian8.4.0 (1096e296-5bff-40cf-b13e-e556169df435) | | key_name | nicolasPC | | metadata | {} | | name | burn01 | | os-extended-volumes:volumes_attached | [] | | private_network network | 10.69.1.68, 10.69.0.209 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id | 5fcdb004f67b4d7ea5a2467e13efe39d | | updated | 2016-08-25T10:00:08Z | | user_id | 6404fab89b914a2fa6fda45270291f5b | +--------------------------------------+----------------------------------------------------------+ Thanks you for your time. Best regards, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hguemar at fedoraproject.org Thu Aug 25 12:10:55 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 25 Aug 2016 14:10:55 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <1472116658.4783.421.camel@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <1472116658.4783.421.camel@redhat.com> Message-ID: 2016-08-25 11:17 GMT+02:00 Michael Scherer : > Le jeudi 25 ao?t 2016 ? 13:53 +1000, Graeme Gillies a ?crit : >> Hi, >> >> A while ago there was a discussion around python-shade library and >> getting it into RDO. [1] >> >> It's been a few months since then, and shade is now successfully >> packaged and shipped as part of Fedora [2] which is great, but now I >> wanted to restart the conversation about how to make it available to >> users of CentOS/RDO. >> >> While it was suggested to get it into EPEL, I don't feel that is the >> best course of action simply because of the restrictive update policies >> of EPEL not allowing us to update it as frequently as needed, and also >> because python-shade depends on the python openstack clients, which are >> not a part of EPEL (as my understanding). >> >> The best place for us to make this package available is in RDO itself, >> as shade is an official Openstack big tent project, and RDOs aims to be >> a distribution providing packages for Openstack projects. >> >> So I just wanted to confirm with everyone and get some feedback, but >> unless there is any major objections, I was going to start looking at >> the process to get a new package into RDO, which I assume means putting >> a review request in to the project https://github.com/rdo-packages >> (though I assume a new repo needs to be created for it first). > > As shade is used by ansible for orchestration, having to hunt various > repositories around the centos ecosystem, all being uncoordinated and > upgrading at different time is gonna be likely annoying. > > That's kinda the exact things people complained years ago in the rpm > ecosystem, and we are just doing it again with SIGs. > > As a smooth ops, I would prefer to have EPEL as a option for shade[1]. > > [1] yes, this pun is awful, no, I am not sending that email for the sake > of making that reference > -- > Michael Scherer > Sysadmin, Community Infrastructure and Platform, OSAS > > As long as EPEL doesn't fix its broken policy, it's unlikely to happen. 1. EPEL doesn't respect any EL7 packaging standard 2. Updates policy is used to block major version update even when it makes sense -> Impossible to maintain OpenStack clients in that context. 3. While some maintainers don't even respect the said updates policy to push broken updates I'd rather not speak about EPEL anymore. H. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dms at redhat.com Thu Aug 25 13:02:38 2016 From: dms at redhat.com (David Moreau Simard) Date: Thu, 25 Aug 2016 09:02:38 -0400 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <1472116658.4783.421.camel@redhat.com> Message-ID: There is a configuration management SIG in CentOS, right ? Given that they already are packaging Ansible (maybe? Didn't check..) they might be inclined to carry shade since it's a core plugin dependency. 
David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Aug 25, 2016 8:11 AM, "Ha?kel" wrote: 2016-08-25 11:17 GMT+02:00 Michael Scherer : > Le jeudi 25 ao?t 2016 ? 13:53 +1000, Graeme Gillies a ?crit : >> Hi, >> >> A while ago there was a discussion around python-shade library and >> getting it into RDO. [1] >> >> It's been a few months since then, and shade is now successfully >> packaged and shipped as part of Fedora [2] which is great, but now I >> wanted to restart the conversation about how to make it available to >> users of CentOS/RDO. >> >> While it was suggested to get it into EPEL, I don't feel that is the >> best course of action simply because of the restrictive update policies >> of EPEL not allowing us to update it as frequently as needed, and also >> because python-shade depends on the python openstack clients, which are >> not a part of EPEL (as my understanding). >> >> The best place for us to make this package available is in RDO itself, >> as shade is an official Openstack big tent project, and RDOs aims to be >> a distribution providing packages for Openstack projects. >> >> So I just wanted to confirm with everyone and get some feedback, but >> unless there is any major objections, I was going to start looking at >> the process to get a new package into RDO, which I assume means putting >> a review request in to the project https://github.com/rdo-packages >> (though I assume a new repo needs to be created for it first). > > As shade is used by ansible for orchestration, having to hunt various > repositories around the centos ecosystem, all being uncoordinated and > upgrading at different time is gonna be likely annoying. > > That's kinda the exact things people complained years ago in the rpm > ecosystem, and we are just doing it again with SIGs. > > As a smooth ops, I would prefer to have EPEL as a option for shade[1]. > > [1] yes, this pun is awful, no, I am not sending that email for the sake > of making that reference > -- > Michael Scherer > Sysadmin, Community Infrastructure and Platform, OSAS > > As long as EPEL doesn't fix its broken policy, it's unlikely to happen. 1. EPEL doesn't respect any EL7 packaging standard 2. Updates policy is used to block major version update even when it makes sense -> Impossible to maintain OpenStack clients in that context. 3. While some maintainers don't even respect the said updates policy to push broken updates I'd rather not speak about EPEL anymore. H. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Thu Aug 25 13:31:49 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 25 Aug 2016 15:31:49 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <1472116658.4783.421.camel@redhat.com> Message-ID: 2016-08-25 15:02 GMT+02:00 David Moreau Simard : > There is a configuration management SIG in CentOS, right ? > > Given that they already are packaging Ansible (maybe? Didn't check..) 
they > might be inclined to carry shade since it's a core plugin dependency. > Shade is useless without OpenStack clients ... > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Aug 25, 2016 8:11 AM, "Ha?kel" wrote: > > 2016-08-25 11:17 GMT+02:00 Michael Scherer : >> Le jeudi 25 ao?t 2016 ? 13:53 +1000, Graeme Gillies a ?crit : >>> Hi, >>> >>> A while ago there was a discussion around python-shade library and >>> getting it into RDO. [1] >>> >>> It's been a few months since then, and shade is now successfully >>> packaged and shipped as part of Fedora [2] which is great, but now I >>> wanted to restart the conversation about how to make it available to >>> users of CentOS/RDO. >>> >>> While it was suggested to get it into EPEL, I don't feel that is the >>> best course of action simply because of the restrictive update policies >>> of EPEL not allowing us to update it as frequently as needed, and also >>> because python-shade depends on the python openstack clients, which are >>> not a part of EPEL (as my understanding). >>> >>> The best place for us to make this package available is in RDO itself, >>> as shade is an official Openstack big tent project, and RDOs aims to be >>> a distribution providing packages for Openstack projects. >>> >>> So I just wanted to confirm with everyone and get some feedback, but >>> unless there is any major objections, I was going to start looking at >>> the process to get a new package into RDO, which I assume means putting >>> a review request in to the project https://github.com/rdo-packages >>> (though I assume a new repo needs to be created for it first). >> >> As shade is used by ansible for orchestration, having to hunt various >> repositories around the centos ecosystem, all being uncoordinated and >> upgrading at different time is gonna be likely annoying. >> >> That's kinda the exact things people complained years ago in the rpm >> ecosystem, and we are just doing it again with SIGs. >> >> As a smooth ops, I would prefer to have EPEL as a option for shade[1]. >> >> [1] yes, this pun is awful, no, I am not sending that email for the sake >> of making that reference >> -- >> Michael Scherer >> Sysadmin, Community Infrastructure and Platform, OSAS >> >> > > As long as EPEL doesn't fix its broken policy, it's unlikely to happen. > 1. EPEL doesn't respect any EL7 packaging standard > 2. Updates policy is used to block major version update even when it makes > sense > -> Impossible to maintain OpenStack clients in that context. > 3. While some maintainers don't even respect the said updates policy > to push broken updates > > I'd rather not speak about EPEL anymore. > > H. > > >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From dms at redhat.com Thu Aug 25 13:35:59 2016 From: dms at redhat.com (David Moreau Simard) Date: Thu, 25 Aug 2016 09:35:59 -0400 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <1472116658.4783.421.camel@redhat.com> Message-ID: On Thu, Aug 25, 2016 at 9:31 AM, Ha?kel wrote: > Shade is useless without OpenStack clients ... 
Hah, good point... David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From tdecacqu at redhat.com Fri Aug 26 02:20:30 2016 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 26 Aug 2016 02:20:30 +0000 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <1472116658.4783.421.camel@redhat.com> Message-ID: On 08/25/2016 01:31 PM, Ha?kel wrote: > 2016-08-25 15:02 GMT+02:00 David Moreau Simard : >> > There is a configuration management SIG in CentOS, right ? >> > >> > Given that they already are packaging Ansible (maybe? Didn't check..) they >> > might be inclined to carry shade since it's a core plugin dependency. >> > > Shade is useless without OpenStack clients ... > > Then why not pushing last stable OpenStack clients and Shade to epel ? And make sure RDO's clients take the priority when both repositories are configured... -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From ggillies at redhat.com Fri Aug 26 05:09:05 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Fri, 26 Aug 2016 15:09:05 +1000 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> Message-ID: <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> On 25/08/16 19:15, Ha?kel wrote: > 2016-08-25 5:53 GMT+02:00 Graeme Gillies : >> Hi, >> >> A while ago there was a discussion around python-shade library and >> getting it into RDO. [1] >> >> It's been a few months since then, and shade is now successfully >> packaged and shipped as part of Fedora [2] which is great, but now I >> wanted to restart the conversation about how to make it available to >> users of CentOS/RDO. >> >> While it was suggested to get it into EPEL, I don't feel that is the >> best course of action simply because of the restrictive update policies >> of EPEL not allowing us to update it as frequently as needed, and also >> because python-shade depends on the python openstack clients, which are >> not a part of EPEL (as my understanding). >> > > *nods* > >> The best place for us to make this package available is in RDO itself, >> as shade is an official Openstack big tent project, and RDOs aims to be >> a distribution providing packages for Openstack projects. >> >> So I just wanted to confirm with everyone and get some feedback, but >> unless there is any major objections, I was going to start looking at >> the process to get a new package into RDO, which I assume means putting >> a review request in to the project https://github.com/rdo-packages >> (though I assume a new repo needs to be created for it first). >> > > Likely to be rejected as shade lives in openstack-infra namespace. > > Well, we've had requests to provide a separate repository consumable > for openstack infra > but until now, the discussion stalled due to lack of people driving it. > > We could create an rdo-extras EL7 repository that would contain: > * latest stable clients + SDK > * minor utilities that are only available in EPEL (or possibly a > separate repo if it grows too much) > > Regards, > H. 
Sorry I'm a bit confused here, are you actually saying that shade can't be in RDO because it lives in a slightly different git repo location, a location by which, is still referenced as perfectly valid for openstack projects in Big tent https://github.com/openstack/governance/blob/master/reference/projects.yaml I'm also confused why you think the clients should also be moved out of rdo into another repo as well. This is just splitting the repos up needlessly isn't it? Shade, like oslo and other Openstack libraries, should be part of RDO. If shade was moved from the openstack-infra to the openstack git namespace, would it be accepted then? Regards, Graeme > >> Regards, >> >> Graeme >> >> [1] https://www.redhat.com/archives/rdo-list/2015-November/thread.html >> [2] http://koji.fedoraproject.org/koji/packageinfo?packageID=21707 >> >> -- >> Graeme Gillies >> Principal Systems Administrator >> Openstack Infrastructure >> Red Hat Australia >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From ggillies at redhat.com Fri Aug 26 05:10:58 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Fri, 26 Aug 2016 15:10:58 +1000 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <1472116658.4783.421.camel@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <1472116658.4783.421.camel@redhat.com> Message-ID: On 25/08/16 19:17, Michael Scherer wrote: > Le jeudi 25 ao?t 2016 ? 13:53 +1000, Graeme Gillies a ?crit : >> Hi, >> >> A while ago there was a discussion around python-shade library and >> getting it into RDO. [1] >> >> It's been a few months since then, and shade is now successfully >> packaged and shipped as part of Fedora [2] which is great, but now I >> wanted to restart the conversation about how to make it available to >> users of CentOS/RDO. >> >> While it was suggested to get it into EPEL, I don't feel that is the >> best course of action simply because of the restrictive update policies >> of EPEL not allowing us to update it as frequently as needed, and also >> because python-shade depends on the python openstack clients, which are >> not a part of EPEL (as my understanding). >> >> The best place for us to make this package available is in RDO itself, >> as shade is an official Openstack big tent project, and RDOs aims to be >> a distribution providing packages for Openstack projects. >> >> So I just wanted to confirm with everyone and get some feedback, but >> unless there is any major objections, I was going to start looking at >> the process to get a new package into RDO, which I assume means putting >> a review request in to the project https://github.com/rdo-packages >> (though I assume a new repo needs to be created for it first). > > As shade is used by ansible for orchestration, having to hunt various > repositories around the centos ecosystem, all being uncoordinated and > upgrading at different time is gonna be likely annoying. > > That's kinda the exact things people complained years ago in the rpm > ecosystem, and we are just doing it again with SIGs. > > As a smooth ops, I would prefer to have EPEL as a option for shade[1]. > > [1] yes, this pun is awful, no, I am not sending that email for the sake > of making that reference > My main use case is using it for ansible, so I sympathise here. 
The problem is however that shade itself doesn't work without very new versions of the python-openstack clients/libraries. They are currently not in EPEL. I figured if you are interested in Openstack and Ansible enough to want shade, you will be using the Centos Cloud Openstack repos as well anyway to get the clients. Regards, Graeme -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From me at gbraad.nl Fri Aug 26 05:23:23 2016 From: me at gbraad.nl (Gerard Braad) Date: Fri, 26 Aug 2016 13:23:23 +0800 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <1472116658.4783.421.camel@redhat.com> Message-ID: On Fri, Aug 26, 2016 at 1:10 PM, Graeme Gillies wrote: > My main use case is using it for ansible, so I sympathise here. The > problem is however that shade itself doesn't work without very new > versions of the python-openstackclients/libraries. Referring to os-client-config and the python client dependencies as defined in: https://github.com/openstack-infra/shade/blob/master/requirements.txt From apevec at redhat.com Fri Aug 26 06:31:32 2016 From: apevec at redhat.com (Alan Pevec) Date: Fri, 26 Aug 2016 08:31:32 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> Message-ID: On Aug 26, 2016 07:09, "Graeme Gillies" wrote: > Sorry I'm a bit confused here, are you actually saying that shade can't > be in RDO because it lives in a slightly different git repo location, a > location by which, is still referenced as perfectly valid for openstack > projects in Big tent > > https://github.com/openstack/governance/blob/master/reference/projects.yaml > > I'm also confused why you think the clients should also be moved out of > rdo into another repo as well. This is just splitting the repos up > needlessly isn't it? Shade, like oslo and other Openstack libraries, > should be part of RDO. Problem with Shade is that it'd branchless so putting it into one RDO release repo won't work. That's why separate repo is suggested, which would also solve the other issue Haikel mentioned: upstream infra enables RDO repo only to get openvswitch which is not in EL7 base, so we would put that in rdo-extras. Cheers, Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From taisto.qvist at gmail.com Fri Aug 26 15:57:40 2016 From: taisto.qvist at gmail.com (Taisto Qvist) Date: Fri, 26 Aug 2016 17:57:40 +0200 Subject: [rdo-list] Is Keystone/Identity-v3 supported by RDO Packstack? Multi-node swift --answer-file? Message-ID: Hi folks, I've managed to create a fairly well working packstack-answer file, that I used for liberty installations, with a 4-multi-node setup, eventually handling nested kvm, live-migration, LBAAS, etc, so I am quite happy with that. Now, my aim is Identity v3. From the packstack answer-file, it seems supported, but I cant get it to work. I couldnt get it to work with liberty, and now testing with Mitaka, it still doesnt work, failing on: ERROR : Error appeared during Puppet run: 10.9.24.100_cinder.pp Error: Could not prefetch cinder_type provider 'openstack': Execution of '/usr/bin/openstack volume type list --quiet --format csv --long' returned 1: Expecting to find domain in project - the server could not comply with the request since it is either malformed or otherwise incorrect. 
The client is assumed to be in error. (HTTP 400) (Request-ID: req-d8cccfb9-c060-4cdd-867d-603bf70ebf24) I think this is also the same error I got, during my liberty tests. Has anyone got this working? Should it work? Another this which I just tested briefly, was to have swift-services on separate nodes(instead of on the controller, as I have now), but I couldnt get the syntax correct. Anyone have an answer-file example that configures multiple swift-nodes? Many Thanks, Taisto Qvist -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Fri Aug 26 16:06:02 2016 From: dms at redhat.com (David Moreau Simard) Date: Fri, 26 Aug 2016 12:06:02 -0400 Subject: [rdo-list] Is Keystone/Identity-v3 supported by RDO Packstack? Multi-node swift --answer-file? In-Reply-To: References: Message-ID: Fairly certain Cinder is a known problem against v3 in Packstack [1] but I don't have the details. [1]: https://github.com/openstack/packstack/blob/master/releasenotes/notes/keystone-v3-note-065b6302b49285f3.yaml David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Fri, Aug 26, 2016 at 11:57 AM, Taisto Qvist wrote: > Hi folks, > > I've managed to create a fairly well working packstack-answer file, that I > used for liberty installations, with a 4-multi-node setup, eventually > handling nested kvm, live-migration, LBAAS, etc, so I am quite happy with > that. > > Now, my aim is Identity v3. From the packstack answer-file, it seems > supported, but I cant get it to work. I couldnt get it to work with liberty, > and now testing with Mitaka, it still doesnt work, failing on: > > ERROR : Error appeared during Puppet run: 10.9.24.100_cinder.pp > Error: Could not prefetch cinder_type provider 'openstack': Execution of > '/usr/bin/openstack volume type list --quiet --format csv --long' returned > 1: Expecting to find domain in project - the server could not comply with > the request since it is either malformed or otherwise incorrect. The client > is assumed to be in error. (HTTP 400) (Request-ID: > req-d8cccfb9-c060-4cdd-867d-603bf70ebf24) > > I think this is also the same error I got, during my liberty tests. > > Has anyone got this working? Should it work? > > Another this which I just tested briefly, was to have swift-services on > separate nodes(instead of on the controller, as I have now), but I couldnt > get the syntax correct. > > Anyone have an answer-file example that configures multiple swift-nodes? > > Many Thanks, > Taisto Qvist > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Fri Aug 26 16:32:59 2016 From: javier.pena at redhat.com (Javier Pena) Date: Fri, 26 Aug 2016 12:32:59 -0400 (EDT) Subject: [rdo-list] Is Keystone/Identity-v3 supported by RDO Packstack? Multi-node swift --answer-file? In-Reply-To: References: Message-ID: <187458073.6327797.1472229179556.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Fairly certain Cinder is a known problem against v3 in Packstack [1] > but I don't have the details. > > [1]: > https://github.com/openstack/packstack/blob/master/releasenotes/notes/keystone-v3-note-065b6302b49285f3.yaml > We had a patch to fix that merged in Mitaka some time ago [1], which depended on a puppet-cinder patch [2]. 
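As background on that particular failure: "Expecting to find domain in project" is keystone's way of saying that a v3 token request scoped a project by name without also naming the project's domain, which tends to happen when v2-style credentials are sent to a v3-configured endpoint. A rough keystoneauth1 illustration of a request that does carry the domain information; every value below is a placeholder, nothing is taken from the answer file:

    # Illustration only: a project-scoped Identity v3 authentication that
    # includes the user and project domains.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='http://controller.example.com:5000/v3',
        username='admin',
        password='secret',
        project_name='admin',
        user_domain_name='Default',
        # Omitting the project domain is the usual trigger for the
        # "Expecting to find domain in project" HTTP 400 above.
        project_domain_name='Default',
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())
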
We have not yet released a new package to the CBS repos, but you could try updating your openstack-packstack, openstack-packstack-puppet and openstack-puppet-modules using the packages from RDO stable/mitaka Trunk [3]. Regards, Javier [1] - https://review.openstack.org/323690 [2] - https://review.openstack.org/320956 [3] - http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-mitaka-tested/ > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Fri, Aug 26, 2016 at 11:57 AM, Taisto Qvist > wrote: > > Hi folks, > > > > I've managed to create a fairly well working packstack-answer file, that I > > used for liberty installations, with a 4-multi-node setup, eventually > > handling nested kvm, live-migration, LBAAS, etc, so I am quite happy with > > that. > > > > Now, my aim is Identity v3. From the packstack answer-file, it seems > > supported, but I cant get it to work. I couldnt get it to work with > > liberty, > > and now testing with Mitaka, it still doesnt work, failing on: > > > > ERROR : Error appeared during Puppet run: 10.9.24.100_cinder.pp > > Error: Could not prefetch cinder_type provider 'openstack': Execution of > > '/usr/bin/openstack volume type list --quiet --format csv --long' returned > > 1: Expecting to find domain in project - the server could not comply with > > the request since it is either malformed or otherwise incorrect. The client > > is assumed to be in error. (HTTP 400) (Request-ID: > > req-d8cccfb9-c060-4cdd-867d-603bf70ebf24) > > > > I think this is also the same error I got, during my liberty tests. > > > > Has anyone got this working? Should it work? > > > > Another this which I just tested briefly, was to have swift-services on > > separate nodes(instead of on the controller, as I have now), but I couldnt > > get the syntax correct. > > > > Anyone have an answer-file example that configures multiple swift-nodes? > > > > Many Thanks, > > Taisto Qvist > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From jason.waghorn at yandex.com Fri Aug 26 19:35:11 2016 From: jason.waghorn at yandex.com (Jason Waghorn) Date: Fri, 26 Aug 2016 22:35:11 +0300 Subject: [rdo-list] Is Keystone/Identity-v3 supported by RDO Packstack? Multi-node swift --answer-file? Message-ID: <390821472240111@web2j.yandex.ru> An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Aug 26 23:42:32 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 26 Aug 2016 19:42:32 -0400 Subject: [rdo-list] [CI] rdo has been been promoted Message-ID: Greetings, The RDO pipeline has passed. https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/681/ I've disabled the job so that the images can be extracted and used for the test day. Please do not re-enable the job until the image has been retrieved. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dms at redhat.com Fri Aug 26 23:58:33 2016 From: dms at redhat.com (David Moreau Simard) Date: Fri, 26 Aug 2016 19:58:33 -0400 Subject: [rdo-list] [CI] rdo has been been promoted In-Reply-To: References: Message-ID: The test day is not for two weeks still, we are hitting milestone 3 next week so the test day is the week after. Can we re-enable the pipeline until at least thursday when the milestone is actually released and we've promoted from it ? David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Fri, Aug 26, 2016 at 7:42 PM, Wesley Hayutin wrote: > Greetings, > > The RDO pipeline has passed. > https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/681/ > > I've disabled the job so that the images can be extracted and used for the > test day. Please do not re-enable the job until the image has been > retrieved. > > Thanks From jschluet at redhat.com Sat Aug 27 02:34:38 2016 From: jschluet at redhat.com (Jon Schlueter) Date: Fri, 26 Aug 2016 22:34:38 -0400 Subject: [rdo-list] [CI] rdo has been been promoted In-Reply-To: References: Message-ID: On Fri, Aug 26, 2016 at 7:58 PM, David Moreau Simard wrote: > The test day is not for two weeks still, we are hitting milestone 3 > next week so the test day is the week after. > > Can we re-enable the pipeline until at least thursday when the > milestone is actually released and we've promoted from it ? I would agree with Wes, let's capture this set of images for use as a fallback, and then re-enable the pipeline. I would be pleasantly surprised if we did get the promotion right after Milestone 3 but not holding my breath for it right especially right around a Milestone. Jon From jason.waghorn at yandex.com Sat Aug 27 05:00:59 2016 From: jason.waghorn at yandex.com (Jason Waghorn) Date: Sat, 27 Aug 2016 08:00:59 +0300 Subject: [rdo-list] How to force TripleO QuickStart use Newton M3 repos during testing days ? In-Reply-To: <390821472240111@web2j.yandex.ru> References: <390821472240111@web2j.yandex.ru> Message-ID: <53201472274059@web17j.yandex.ru> Would command like - $ bash quickstart.sh --config ha.yml $VIRTHOST --release newton be the right way ? Regards. Jason From marius at remote-lab.net Sun Aug 28 09:52:39 2016 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 28 Aug 2016 11:52:39 +0200 Subject: [rdo-list] Expired SSL certificate for www.rdoproject.org Message-ID: Hi everyone, The SSL certificate for www.rdoproject.org has expired: echo | openssl s_client -showcerts -servername www.rdoproject.org -connect www.rdoproject.org:443 2>/dev/null | openssl x509 -inform pem -noout -dates notBefore=May 30 00:50:00 2016 GMT notAfter=Aug 28 00:50:00 2016 GMT Chrome doesn't seem to allow me to go past the cert verification: You cannot visit www.rdoproject.org right now because the website uses HSTS. Network errors and attacks are usually temporary, so this page will probably work later. Can we have it renewed? 
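Expiry like this is also easy to catch ahead of time with a small check; a sketch using only the Python standard library (the host list is just an example, and the check only works while the certificate still validates):

    # Warn when a site's TLS certificate is getting close to expiry.
    import socket
    import ssl
    from datetime import datetime

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        sock = socket.create_connection((host, port), timeout=10)
        try:
            tls = ctx.wrap_socket(sock, server_hostname=host)
            not_after = ssl.cert_time_to_seconds(tls.getpeercert()['notAfter'])
            return (datetime.utcfromtimestamp(not_after) - datetime.utcnow()).days
        finally:
            sock.close()

    for host in ['www.rdoproject.org']:
        print('%s: %d days left' % (host, days_until_expiry(host)))
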
Thanks, Marius From ggillies at redhat.com Sun Aug 28 23:19:30 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Mon, 29 Aug 2016 09:19:30 +1000 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> Message-ID: <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> On 26/08/16 16:31, Alan Pevec wrote: > > On Aug 26, 2016 07:09, "Graeme Gillies" > wrote: > >> Sorry I'm a bit confused here, are you actually saying that shade can't >> be in RDO because it lives in a slightly different git repo location, a >> location by which, is still referenced as perfectly valid for openstack >> projects in Big tent >> >> > https://github.com/openstack/governance/blob/master/reference/projects.yaml >> >> I'm also confused why you think the clients should also be moved out of >> rdo into another repo as well. This is just splitting the repos up >> needlessly isn't it? Shade, like oslo and other Openstack libraries, >> should be part of RDO. > > Problem with Shade is that it'd branchless so putting it into one RDO > release repo won't work. That's why separate repo is suggested, which > would also solve the other issue Haikel mentioned: upstream infra > enables RDO repo only to get openvswitch which is not in EL7 base, so we > would put that in rdo-extras. > > Cheers, > Alan > Sorry just so I am 100% clear here, in order for python-shade to just go into RDO it would need to have stable release branches, which I would assume match the standard Openstack release naming (liberty, mitaka, etc). Pulling back a bit, can we talk about the charter regarding RDO and packaging Openstack projects (which fall under big tent)? Under big tent, projects are not beholden to the explicit 6 month release cycle that has been mandated in the past. Most projects choose to stick with it, but there are a couple which don't. The official governance documentation [1] references projects can have the following release management release:cycle-with-milestones release:cycle-with-intermediary release:cycle-trailing release:independent The ones that are probably most interesting to this discussion are release:cycle-trailing and release:independent (of which shade uses). Can we get the packaging documentation modified to include an official policy on how the projects with the different release cycles are to be treated? I don't believe that projects with release:independent should be excluded from RDO, in fact, they definitely aren't because we package and ship rally as part of RDO and that uses release:independent. 
Regards, Graeme [1] https://governance.openstack.org/reference/tags/ -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From hguemar at fedoraproject.org Mon Aug 29 09:17:51 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 29 Aug 2016 11:17:51 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> Message-ID: 2016-08-29 1:19 GMT+02:00 Graeme Gillies : > On 26/08/16 16:31, Alan Pevec wrote: >> >> On Aug 26, 2016 07:09, "Graeme Gillies" > > wrote: >> >>> Sorry I'm a bit confused here, are you actually saying that shade can't >>> be in RDO because it lives in a slightly different git repo location, a >>> location by which, is still referenced as perfectly valid for openstack >>> projects in Big tent >>> >>> >> https://github.com/openstack/governance/blob/master/reference/projects.yaml >>> >>> I'm also confused why you think the clients should also be moved out of >>> rdo into another repo as well. This is just splitting the repos up >>> needlessly isn't it? Shade, like oslo and other Openstack libraries, >>> should be part of RDO. >> >> Problem with Shade is that it'd branchless so putting it into one RDO >> release repo won't work. That's why separate repo is suggested, which >> would also solve the other issue Haikel mentioned: upstream infra >> enables RDO repo only to get openvswitch which is not in EL7 base, so we >> would put that in rdo-extras. >> >> Cheers, >> Alan >> > > Sorry just so I am 100% clear here, in order for python-shade to just go > into RDO it would need to have stable release branches, which I would > assume match the standard Openstack release naming (liberty, mitaka, etc). > > Pulling back a bit, can we talk about the charter regarding RDO and > packaging Openstack projects (which fall under big tent)? > > Under big tent, projects are not beholden to the explicit 6 month > release cycle that has been mandated in the past. Most projects choose > to stick with it, but there are a couple which don't. > > The official governance documentation [1] references projects can have > the following release management > > release:cycle-with-milestones > release:cycle-with-intermediary > release:cycle-trailing > release:independent > > The ones that are probably most interesting to this discussion are > release:cycle-trailing and release:independent (of which shade uses). > > Can we get the packaging documentation modified to include an official > policy on how the projects with the different release cycles are to be > treated? I don't believe that projects with release:independent should > be excluded from RDO, in fact, they definitely aren't because we package > and ship rally as part of RDO and that uses release:independent. > (CC'ing Paul, as I would like to hear openstack-infra feedback on that point) Yes, we will discuss that item during our next meeting. If there's agreement to create a rdo-extras repository, projects tagged release:independent are candidate to be shipped in that repository. What we need to figure out is what *exactly* it should contains, provisionnally, it is: * shade + latest clients from stable branches * projects tagged release:independent * minor utilities only available through EPEL and useful for openstack-infra (should it be a separate repo?) Regards, H. 
> Regards, > > Graeme > > [1] https://governance.openstack.org/reference/tags/ > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia From dmellado at redhat.com Mon Aug 29 09:22:44 2016 From: dmellado at redhat.com (Daniel Mellado) Date: Mon, 29 Aug 2016 11:22:44 +0200 Subject: [rdo-list] Tempest Packaging in RDO Message-ID: Hi everyone, As a follow-up from the conversations that we had on the irc, I wanted to summarize the current issues and possible actions. As we ran out of time during the meeting I'd really appreciate some feedback on this. 1) Tempest Plugins With the current situation (project package and project-test package) we do install the entry point with the main package, even if the sub-package is not installed. In this case, Tempest will discover the test entry point without the code and fail. On this, I'd propose to integrate-back the -tests packages for the in-tree plugins, even if it would mean adding some more dependencies to them and remove tempest requirement as a dependency for out-tree ones (i.e. Designate[2] or tempest-horizon[3]), which will get rid of any circular dependency. This seems to breaks RPM logic. This is only to allow mixing packages and git/pip installations which is a source of troubles. At best, it could be a temporary measure. If it's about circular dependencies, RPM knows how to handle them, and the best way to break them would be plugins requiring tempest and not the reverse. We could have a a subpackage tempest-all that requires all the plugins for an AIO install. (alternatively: tempest requires all plugins, and plugins would require a tempest-core packages. tempest package being empty and tempest-core containing the framework) Also, the plugin state is not really optimal, as some of those plugins would pin to an specific commit of tempest. All of this should be sorted out in the end by [1] but for now we'll need to find some solution on this. 2) Tempest Requirements As tempest is installed in the undercloud, it will use whatever plugins are installed there, so if checking every test from the overcloud is a must it'd need to have a lot of packages installed as dependencies (that's the current status). Some ideas for this have been. - Create a parent-package/metapackage called 'tempest-all-test' or similar, which will allow to install all the sub-componants packages. This would allow not to install all the tempest dependencies but rather only one component at a time. It would Requires: all test packages for anyone who needs to install them all. - Puppet-Tempest: it will install tempest itself (albeit only from source right now, this can be addressed: https://bugs.launchpad.net/puppet-tempest/+bug/1549366) but it will also install tempest plugins based on the parameters that define the availability of services. For example, if "ceilometer_available" is set to true, it will install python-ceilometer-tests and set the config in tempest.conf "[service_available] ceilometer=True" 3) Tempest Config Tool Right now is the only diff between upstream vanilla and our downstream fork, on here, our proposal could be to move it to its own repo and decouple it from tempest so we could use vanilla and not depend on the downstream fork. This will later on also integrated and used on RefStack/DefCore (WIP). 
There has been quite some discussion around this, and there are duplicated projects doing the same work with some difference (basically to use or not api_discovery) During the past summit and afterwards, we agreed on creating something that would depend on the installer (fetching the configuration from TripleO), as the tool as it is won't be accepted on upstream tempest. This had been dropped dead since, so I'd like to resume the discussion. Thanks for any ideas! Daniel --- [1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101552.html [2] https://review.rdoproject.org/r/#/c/1820/ [3] https://github.com/openstack/tempest-horizon/blob/master/requirements.txt#L11 From mscherer at redhat.com Mon Aug 29 10:44:01 2016 From: mscherer at redhat.com (Michael Scherer) Date: Mon, 29 Aug 2016 12:44:01 +0200 Subject: [rdo-list] Expired SSL certificate for www.rdoproject.org In-Reply-To: References: Message-ID: <1472467441.20224.11.camel@redhat.com> Le dimanche 28 ao?t 2016 ? 11:52 +0200, Marius Cornea a ?crit : > Hi everyone, > > The SSL certificate for www.rdoproject.org has expired: > > echo | openssl s_client -showcerts -servername www.rdoproject.org > -connect www.rdoproject.org:443 2>/dev/null | openssl x509 -inform pem > -noout -dates > > notBefore=May 30 00:50:00 2016 GMT > notAfter=Aug 28 00:50:00 2016 GMT > > Chrome doesn't seem to allow me to go past the cert verification: > > You cannot visit www.rdoproject.org right now because the website uses > HSTS. Network errors and attacks are usually temporary, so this page > will probably work later. > > Can we have it renewed? Done. Problem was that the cronjob to renew was using the old lets encrypt client, and it got renamed to certbot, but this wasn't changed in the cronjob. -- Michael Scherer Sysadmin, Community Infrastructure and Platform, OSAS -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From rbowen at redhat.com Mon Aug 29 12:21:36 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 29 Aug 2016 08:21:36 -0400 Subject: [rdo-list] Unanswered RDO questions on ask.openstack.org Message-ID: Thanks again for all of the help in getting the unanswered questions list down to a manageable level. We're down to 26 this week. I appreciate any help you can give. 
--Rich 26 unanswered questions: quota show is different from horizon in RDO after update quota with command https://ask.openstack.org/en/question/96244/quota-show-is-different-from-horizon-in-rdo-after-update-quota-with-command/ Tags: rdo how to set quota with python https://ask.openstack.org/en/question/96231/how-to-set-quota-with-python/ Tags: quota "Parameter outiface failed on Firewall" during installation of openstack rdo on centos 7 https://ask.openstack.org/en/question/95657/parameter-outiface-failed-on-firewall-during-installation-of-openstack-rdo-on-centos-7/ Tags: rdo, devstack#mitaka multi nodes provider network ovs config https://ask.openstack.org/en/question/95423/multi-nodes-provider-network-ovs-config/ Tags: rdo, liberty-neutron Adding additional packages to an RDO installation https://ask.openstack.org/en/question/95380/adding-additional-packages-to-an-rdo-installation/ Tags: rdo, mistral RDO TripleO Mitaka HA Overcloud Failing https://ask.openstack.org/en/question/95249/rdo-tripleo-mitaka-ha-overcloud-failing/ Tags: mitaka, tripleo, overcloud, centos7 RDO - is there any fedora package newer than puppet-4.2.1-3.fc24.noarch.rpm https://ask.openstack.org/en/question/94969/rdo-is-there-any-fedora-package-newer-than-puppet-421-3fc24noarchrpm/ Tags: rdo, puppet, install-openstack OpenStack RDO mysqld 100% cpu https://ask.openstack.org/en/question/94961/openstack-rdo-mysqld-100-cpu/ Tags: openstack, mysqld, cpu Failed to set RDO repo on host-packstack-centOS-7 https://ask.openstack.org/en/question/94828/failed-to-set-rdo-repo-on-host-packstack-centos-7/ Tags: openstack-packstack, centos7, rdo how to deploy haskell-distributed in RDO? https://ask.openstack.org/en/question/94785/how-to-deploy-haskell-distributed-in-rdo/ Tags: rdo rdo tripleO liberty undercloud install failing https://ask.openstack.org/en/question/94023/rdo-tripleo-liberty-undercloud-install-failing/ Tags: rdo, rdo-manager, liberty, undercloud, instack AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ Tags: openstack, networking, aws Keystone authentication: Failed to contact the endpoint. https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ Tags: keystone, authenticate, endpoint, murano Liberty RDO: stack resource topology icons are pink https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ Tags: resource, topology, dashboard, horizon, pink No handlers could be found for logger "oslo_config.cfg" while syncing the glance database https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ Tags: liberty, glance, install-openstack CentOS OpenStack - compute node can't talk https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ Tags: rdo How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ? 
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ Tags: rdo, liberty, swift, ha VM and container can't download anything from internet https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ Tags: rdo, neutron, network, connectivity Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ Tags: keyboard, map, keymap, vncproxy, novnc OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing openstack baremetal introspection internal server error https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ Tags: rdo, ironic-inspector, tripleo Installing openstack using packstack (rdo) failed https://ask.openstack.org/en/question/82473/installing-openstack-using-packstack-rdo-failed/ Tags: rdo, packstack, installation-error, keystone VMware Host Backend causes No valid host was found. Bug ??? https://ask.openstack.org/en/question/79738/vmware-host-backend-causes-no-valid-host-was-found-bug/ Tags: vmware, rdo Mutlinode Devstack with two interfaces https://ask.openstack.org/en/question/78615/mutlinode-devstack-with-two-interfaces/ Tags: devstack, vlan, openstack Overcloud deployment on VM fails as IP address from DHCP is not assigned https://ask.openstack.org/en/question/66272/overcloud-deployment-on-vm-fails-as-ip-address-from-dhcp-is-not-assigned/ Tags: overcloud_in_vm -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From ayoung at redhat.com Mon Aug 29 14:41:30 2016 From: ayoung at redhat.com (Adam Young) Date: Mon, 29 Aug 2016 10:41:30 -0400 Subject: [rdo-list] Unanswered RDO questions, ask.openstack.org In-Reply-To: <29ede1c1-4926-a0a9-7e5d-df19fddd345e@redhat.com> References: <29ede1c1-4926-a0a9-7e5d-df19fddd345e@redhat.com> Message-ID: <71a51e44-89a1-586d-ecfd-46eb25c9408c@redhat.com> On 08/08/2016 11:59 AM, Rich Bowen wrote: > 42 unanswered questions: > > RDO TripleO Mitaka HA Overcloud Failing > https://ask.openstack.org/en/question/95249/rdo-tripleo-mitaka-ha-overcloud-failing/ > Tags: mitaka, triple-o, overcloud, cento7 > > cinder volumes attached but not available during OS install > https://ask.openstack.org/en/question/95223/cinder-volumes-attached-but-not-available-during-os-install/ > Tags: mitaka, cinder, install > > RDO - is there any fedora package newer than puppet-4.2.1-3.fc24.noarch.rpm > https://ask.openstack.org/en/question/94969/rdo-is-there-any-fedora-package-newer-than-puppet-421-3fc24noarchrpm/ > Tags: rdo, puppet, install-openstack > > OpenStack RDO mysqld 100% cpu > https://ask.openstack.org/en/question/94961/openstack-rdo-mysqld-100-cpu/ > Tags: openstack, mysqld, cpu > > Failed to set RDO repo on host-packstact-centOS-7 > https://ask.openstack.org/en/question/94828/failed-to-set-rdo-repo-on-host-packstact-centos-7/ > Tags: openstack-packstack, centos7, rdo > > how to deploy haskell-distributed in RDO? 
> https://ask.openstack.org/en/question/94785/how-to-deploy-haskell-distributed-in-rdo/ > Tags: rdo > > How to set quota for domain and have it shared with all the > projects/tenants in domain > https://ask.openstack.org/en/question/94105/how-to-set-quota-for-domain-and-have-it-shared-with-all-the-projectstenants-in-domain/ > Tags: domainquotadriver > > rdo tripleO liberty undercloud install failing > https://ask.openstack.org/en/question/94023/rdo-tripleo-liberty-undercloud-install-failing/ > Tags: rdo, rdo-manager, liberty, undercloud, instack > > Add new compute node for TripleO deployment in virtual environment > https://ask.openstack.org/en/question/93703/add-new-compute-node-for-tripleo-deployment-in-virtual-environment/ > Tags: compute, tripleo, liberty, virtual, baremetal > > Unable to start Ceilometer services > https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/ > Tags: ceilometer, ceilometer-api > > Adding hard drive space to RDO installation > https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ > Tags: cinder, openstack, space, add > > AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack > https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ > Tags: openstack, networking, aws > > ceilometer: I've installed openstack mitaka. but swift stops working > when i configured the pipeline and ceilometer filter > https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ > Tags: ceilometer, openstack-swift, mitaka > > Fail on installing the controller on Cent OS 7 > https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ > Tags: installation, centos7, controller > > the error of service entity and API endpoints > https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ > Tags: service, entity, and, api, endpoints > > Running delorean fails: Git won't fetch sources > https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ > Tags: delorean, rdo > > Keystone authentication: Failed to contact the endpoint. > https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ > Tags: keystone, authenticate, endpoint, murano Thanks. This one was legit. I think I answered it correctly. > > Liberty RDO: stack resource topology icons are pink > https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ > Tags: stack, resource, topology, dashboard > > Build of instance aborted: Block Device Mapping is Invalid. > https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ > Tags: cinder, lvm, centos7 > > No handlers could be found for logger "oslo_config.cfg" while syncing > the glance database > https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ > Tags: liberty, glance, install-openstack > > how to use chef auto manage openstack in RDO? 
> https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ > Tags: chef, rdo > > Separate Cinder storage traffic from management > https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ > Tags: cinder, separate, nic, iscsi > > Openstack installation fails using packstack, failure is in installation > of openstack-nova-compute. Error: Dependency Package[nova-compute] has > failures > https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ > Tags: novacompute, rdo, packstack, dependency, failure > > CentOS OpenStack - compute node can't talk > https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ > Tags: rdo > > How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on > RDO Liberty ? > https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ > Tags: rdo, liberty, swift, ha > > VM and container can't download anything from internet > https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ > Tags: rdo, neutron, network, connectivity > > Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ > https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ > Tags: keyboard, map, keymap, vncproxy, novnc > > OpenStack-Docker driver failed > https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ > Tags: docker, openstack, liberty > > Sahara SSHException: Error reading SSH protocol banner > https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ > Tags: sahara, icehouse, ssh, vanila > > Error Sahara create cluster: 'Error attach volume to instance > https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ > Tags: sahara, attach-volume, vanila, icehouse > > > From hguemar at fedoraproject.org Mon Aug 29 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 29 Aug 2016 15:00:03 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160829150003.A518760A4003@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-08-31 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Aug 29 18:26:22 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 29 Aug 2016 14:26:22 -0400 Subject: [rdo-list] Recent RDO blog posts Message-ID: <4d6da51a-52b0-8ca8-e378-93022caf5b8e@redhat.com> It's been a few weeks since I posted a blog update, and we've had some great posts in the meantime. Here's what RDO enthusiasts have been blogging about for the last few weeks. *Native DHCP support in OVN* by Numan Siddique Recently native DHCP support has been added to OVN. In this post we will see how native DHCP is supported in OVN and how it is used by OpenStack Neutron OVN ML2 driver. The code which supports native DHCP can be found here. ? 
read more at http://tm3.org/8d *Manual validation of Cinder A/A patches* by Gorka Eguileor In the Cinder Midcycle I agreed to create some sort of document explaining the manual tests I've been doing to validate the work on Cinder's Active-Active High Availability -as a starting point for other testers and for the automation of the tests- and writing a blog post was the most convenient way for me to do so, so here it is. ... read more at http://tm3.org/8e *Exploring YAQL Expressions* by Lars Kellogg-Stedman The Newton release of Heat adds support for a yaql intrinsic function, which allows you to evaluate yaql expressions in your Heat templates. Unfortunately, the existing yaql documentation is somewhat limited, and does not offer examples of many of yaql's more advanced features. ... read more at http://tm3.org/8f *Tripleo HA Federation Proof-of-Concept* by Adam Young Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully login to Horizon via WebSSO, and want to share my notes. ... read more at http://tm3.org/8g *TripleO Deploy Artifacts (and puppet development workflow)* by Steve Hardy For a while now, TripleO has supported a "DeployArtifacts" interface, aimed at making it easier to deploy modified/additional files on your overcloud, without the overhead of frequently rebuilding images. ... read more at http://tm3.org/8h *TripleO deep dive session #6 (Overcloud - Physical network)* by Carlos Camacho This is the sixth video from a series of "Deep Dive" sessions related to TripleO deployments. ... read more at http://tm3.org/8i *Improving QEMU security part 7: TLS support for migration* by Daniel Berrange This blog is part 7 of a series I am writing about work I've completed over the past few releases to improve QEMU security related features. ... read more at http://tm3.org/8j *Running Unit Tests on Old Versions of Keystone* by Adam Young Just because Icehouse is EOL does not mean no one is running it. One part of my job is back-porting patches to older versions of Keystone that my Company supports. ... read more at http://tm3.org/8k *BAND-AID for OOM issues with TripleO manual deployments* by Carlos Camacho First in the Undercloud, when deploying stacks you might find that heat-engine (4 workers) takes lot of RAM, in this case for specific usage peaks can be useful to have a swap file. In order to have this swap file enabled and used by the OS execute the following instructions in the Undercloud: ... read more at http://tm3.org/8l *Debugging submissions errors in TripleO CI* by Carlos Camacho Landing upstream submissions might be hard if you are not passing all the CI jobs that try to check that your code actually works. Let's assume that CI is working properly without any kind of infra issue or without any error introduced by mistake from other submissions. In which case, we might ending having something like: ... read more at http://tm3.org/8m *Ceph, TripleO and the Newton release* by Giulio Fidente Time to roll up some notes on the status of Ceph in TripleO. The majority of these functionalities were available in the Mitaka release too but the examples work with code from the Newton release so they might not apply identical to Mitaka. ... read more at http://tm3.org/8n -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rbowen at redhat.com Mon Aug 29 19:15:22 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 29 Aug 2016 15:15:22 -0400 Subject: [rdo-list] Upcoming OpenStack meetups Message-ID: <545c5894-878e-1fb9-fca3-5fe119076633@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Tuesday August 30 in Tokyo, JP: ??OpenStack???? ?29???? - http://www.meetup.com/Japan-OpenStack-User-Group/events/233195506/ * Wednesday August 31 in Seattle, WA, US: Openstack, Open source, and Building Companies with Steve Poitras - http://www.meetup.com/learnatlunch/events/229658042/ * Wednesday August 31 in Amsterdam, NL: Openstack & Ceph End-of-Summer meetup - http://www.meetup.com/Openstack-Amsterdam/events/232800987/ * Wednesday August 31 in K?ln, DE: Stackers Cologne Cloud Meetup - http://www.meetup.com/OpenStack-Cologne/events/233216972/ * Thursday September 01 in Fort Lauderdale, FL, US: Monthly SFOUG Meeting - http://www.meetup.com/South-Florida-OpenStack-Users-Group/events/233134825/ * Thursday September 01 in Pleasanton, CA, US: The Rise of the Container: The Dev/Ops Technology That Accelerates Ops/Dev - http://www.meetup.com/EastBay-OpenStack/events/232779623/ * Thursday September 01 in Eindhoven, NL: OpenStack Cloud (IaaS) - http://www.meetup.com/osbc-eindhoven/events/232221197/ * Saturday September 03 in Frederick, MD, US: OpenStack Encore - http://www.meetup.com/KeyLUG/events/230704353/ -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From apevec at redhat.com Mon Aug 29 21:05:36 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 29 Aug 2016 23:05:36 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> Message-ID: > Can we get the packaging documentation modified to include an official > policy on how the projects with the different release cycles are to be > treated? I don't believe that projects with release:independent should > be excluded from RDO, in fact, they definitely aren't because we package > and ship rally as part of RDO and that uses release:independent. Rally will also need to move into this new separate release-independent RDO repo, looking at rally's requirements updates[1] you can see it won't work with RDO stable releases. I didn't mention excluding from RDO, just that it cannot be per-release repo. Cheers, Alan [1] https://github.com/openstack/rally/commit/52fef00aefb7c7c48714e0c125aed342dbda7c07#diff-b4ef698db8ca845e5845c4618278f29a From sbaker at redhat.com Mon Aug 29 21:43:53 2016 From: sbaker at redhat.com (Steve Baker) Date: Tue, 30 Aug 2016 09:43:53 +1200 Subject: [rdo-list] Tempest Packaging in RDO In-Reply-To: References: Message-ID: <695cc79a-28f2-4d2f-3b15-93be24016a53@redhat.com> On 29/08/16 21:22, Daniel Mellado wrote: > Hi everyone, > > As a follow-up from the conversations that we had on the irc, I wanted > to summarize the current issues and possible actions. 
As we ran out of > time during the meeting I'd really appreciate some feedback on this. > > 1) Tempest Plugins > > With the current situation (project package and project-test package) we > do install the entry point with the main package, even if the > sub-package is not installed. In this case, Tempest will discover the > test entry point without the code and fail. > > On this, I'd propose to integrate-back the -tests packages for the > in-tree plugins, even if it would mean adding some more dependencies to > them and remove tempest requirement as a dependency for out-tree ones > (i.e. Designate[2] or tempest-horizon[3]), which will get rid of any > circular dependency. > > This seems to breaks RPM logic. This is only to allow mixing packages > and git/pip installations which is a source of troubles. At best, it > could be a temporary measure. If it's about circular dependencies, RPM > knows how to handle them, and the best way to break them would be > plugins requiring tempest and not the reverse. > > We could have a a subpackage tempest-all that requires all the plugins > for an AIO install. (alternatively: tempest requires all plugins, and > plugins would require a tempest-core packages. tempest package being > empty and tempest-core containing the framework) > > Also, the plugin state is not really optimal, as some of those plugins > would pin to an specific commit of tempest. All of this should be sorted > out in the end by [1] but for now we'll need to find some solution on this. > I've been spending some time on this problem for openstack-heat-common and python-heat-tests-tempest. I have come up with a solution which would enable the original plan of tempest package depending on all the test-tempest packages. The solution is a little unconventional though so it will need some feedback. What it does is to do a standard python install then duplicate the egg-info directory and manipulate it to resemble a dedicated tempest plugin package. https://review.rdoproject.org/r/#/c/1980/ > 2) Tempest Requirements > > As tempest is installed in the undercloud, it will use whatever > plugins are installed there, so if checking every test from the > overcloud is a must it'd need to have a lot of packages installed as > dependencies (that's the current status). > > Some ideas for this have been. > > - Create a parent-package/metapackage called 'tempest-all-test' or > similar, which will allow to install all the sub-componants packages. > This would allow not to install all the tempest dependencies but rather > only one component at a time. It would Requires: all test packages for > anyone who needs to install them all. > > - Puppet-Tempest: it will install tempest itself (albeit only from > source right now, this can be addressed: > https://bugs.launchpad.net/puppet-tempest/+bug/1549366) but it will also > install tempest plugins based on the parameters that define the > availability of services. For example, if "ceilometer_available" is set > to true, it will install python-ceilometer-tests and set the config in > tempest.conf "[service_available] ceilometer=True" > > > 3) Tempest Config Tool > Right now is the only diff between upstream vanilla and our > downstream fork, on here, our proposal could be to move it to its own > repo and decouple it from tempest so we could use vanilla and not depend > on the downstream fork. This will later on also integrated and used on > RefStack/DefCore (WIP). 
> > There has been quite some discussion around this, and there are > duplicated projects doing the same work with some difference (basically > to use or not api_discovery) > > During the past summit and afterwards, we agreed on creating something > that would depend on the installer (fetching the configuration from > TripleO), as the tool as it is won't be accepted on upstream tempest. > > This had been dropped dead since, so I'd like to resume the discussion. > > Thanks for any ideas! > > Daniel > > --- > [1] > http://lists.openstack.org/pipermail/openstack-dev/2016-August/101552.html > [2] https://review.rdoproject.org/r/#/c/1820/ > [3] > https://github.com/openstack/tempest-horizon/blob/master/requirements.txt#L11 > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ggillies at redhat.com Tue Aug 30 06:23:49 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 30 Aug 2016 16:23:49 +1000 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> Message-ID: <1a036e38-1126-1a01-5ebe-1c198432e516@redhat.com> On 29/08/16 19:17, Ha?kel wrote: > 2016-08-29 1:19 GMT+02:00 Graeme Gillies : >> On 26/08/16 16:31, Alan Pevec wrote: >>> >>> On Aug 26, 2016 07:09, "Graeme Gillies" >> > wrote: >>> >>>> Sorry I'm a bit confused here, are you actually saying that shade can't >>>> be in RDO because it lives in a slightly different git repo location, a >>>> location by which, is still referenced as perfectly valid for openstack >>>> projects in Big tent >>>> >>>> >>> https://github.com/openstack/governance/blob/master/reference/projects.yaml >>>> >>>> I'm also confused why you think the clients should also be moved out of >>>> rdo into another repo as well. This is just splitting the repos up >>>> needlessly isn't it? Shade, like oslo and other Openstack libraries, >>>> should be part of RDO. >>> >>> Problem with Shade is that it'd branchless so putting it into one RDO >>> release repo won't work. That's why separate repo is suggested, which >>> would also solve the other issue Haikel mentioned: upstream infra >>> enables RDO repo only to get openvswitch which is not in EL7 base, so we >>> would put that in rdo-extras. >>> >>> Cheers, >>> Alan >>> >> >> Sorry just so I am 100% clear here, in order for python-shade to just go >> into RDO it would need to have stable release branches, which I would >> assume match the standard Openstack release naming (liberty, mitaka, etc). >> >> Pulling back a bit, can we talk about the charter regarding RDO and >> packaging Openstack projects (which fall under big tent)? >> >> Under big tent, projects are not beholden to the explicit 6 month >> release cycle that has been mandated in the past. Most projects choose >> to stick with it, but there are a couple which don't. >> >> The official governance documentation [1] references projects can have >> the following release management >> >> release:cycle-with-milestones >> release:cycle-with-intermediary >> release:cycle-trailing >> release:independent >> >> The ones that are probably most interesting to this discussion are >> release:cycle-trailing and release:independent (of which shade uses). 
>> >> Can we get the packaging documentation modified to include an official >> policy on how the projects with the different release cycles are to be >> treated? I don't believe that projects with release:independent should >> be excluded from RDO, in fact, they definitely aren't because we package >> and ship rally as part of RDO and that uses release:independent. >> > > (CC'ing Paul, as I would like to hear openstack-infra feedback on that point) > > Yes, we will discuss that item during our next meeting. > If there's agreement to create a rdo-extras repository, projects > tagged release:independent are candidate to be shipped in that > repository. > > What we need to figure out is what *exactly* it should contains, > provisionnally, it is: > * shade + latest clients from stable branches > * projects tagged release:independent > * minor utilities only available through EPEL and useful for > openstack-infra (should it be a separate repo?) > > Regards, > H. I would really implore you to think very carefully about splitting out release:independant projects into a separate repo, especially one with the name "extras". Fracturing the repo setup only reduces usability, causes confusion, and for those projects that choose an independent release model, it makes them feel like second class citizens and less likely to want to care or be part of the RDO community. As I've already mentioned, we already ship a release:independant project in the normal repo, and I fail to see from a technical level why others can't do the same. Simply ship their latest stable release in the current stable repo. A lot of the projects that choose release:independant are smaller, perhaps a bit newer, and still in a growth phase. If RDO is able to show that those projects can be a part of RDO like everything else, it means they are more likely to participate in the community, as it makes their software more accessible. Remember the goal here is to grow the community and have as many projects participating in RDO as possible. Encouraging the smaller projects to do so is a great way to help that. Regards, Graeme > >> Regards, >> >> Graeme >> >> [1] https://governance.openstack.org/reference/tags/ >> >> -- >> Graeme Gillies >> Principal Systems Administrator >> Openstack Infrastructure >> Red Hat Australia -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From apevec at redhat.com Tue Aug 30 07:27:24 2016 From: apevec at redhat.com (Alan Pevec) Date: Tue, 30 Aug 2016 09:27:24 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <1a036e38-1126-1a01-5ebe-1c198432e516@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> <1a036e38-1126-1a01-5ebe-1c198432e516@redhat.com> Message-ID: > Fracturing the repo setup only reduces usability, causes confusion, and > for those projects that choose an independent release model, it makes > them feel like second class citizens and less likely to want to care or > be part of the RDO community. We already have multiple pre-release stable repos, I don't see why adding extras would create more confusion. Ideally, OpenStack upstream would not need per-release repos but without backward compatibility taken seriously in clients and libs that's not doable technically. 
> As I've already mentioned, we already ship a release:independant project > in the normal repo, and I fail to see from a technical level why others It's case-by-case and Rally is a bad example, it will need to move to release independent repo, same as Tempest. There's a good counter example of release:independent Gnocchi where they don't have stable/OPENSTACK-RELEASE branches but they do maintain stable/MAJOR.MINOR and ensure there's matching branch for OpenStack release[1] It's up to both, upstream project and package maintainer to ensure that, otherwise it won't work. > can't do the same. Simply ship their latest stable release in the > current stable repo. But that's exactly a thing: if such projects don't have stable releases matching particular OpenStack release we'd ship their random release from master, without guarantees it actually works. Also upstream doesn't have a way to deliver security updates, so we'd be on our own to backport patches. > Remember the goal here is to grow the community and have as many > projects participating in RDO as possible. Encouraging the smaller > projects to do so is a great way to help that. Let's first make a PoC and make sure user experience is great, there are new ideas coming from Fedora how to modularize a distribution. Cheers, Alan From apevec at redhat.com Tue Aug 30 07:30:00 2016 From: apevec at redhat.com (Alan Pevec) Date: Tue, 30 Aug 2016 09:30:00 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> <1a036e38-1126-1a01-5ebe-1c198432e516@redhat.com> Message-ID: > stable/MAJOR.MINOR and ensure there's matching branch for OpenStack > release[1] Footnote missing: [1] https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml#L333-L344 From apevec at redhat.com Tue Aug 30 07:55:28 2016 From: apevec at redhat.com (Alan Pevec) Date: Tue, 30 Aug 2016 09:55:28 +0200 Subject: [rdo-list] Tempest Packaging in RDO In-Reply-To: <695cc79a-28f2-4d2f-3b15-93be24016a53@redhat.com> References: <695cc79a-28f2-4d2f-3b15-93be24016a53@redhat.com> Message-ID: > The solution is a little unconventional though so it will need some > feedback. What it does is to do a standard python install then > duplicate the egg-info directory and manipulate it to resemble a dedicated > tempest plugin package. > > https://review.rdoproject.org/r/#/c/1980/ I like that as a workaround for in-tree tempest plugins but proper solution is really that upstream project creates a separate repo for their tempest plugin. Is that in progress for Heat? Cheers, Alan From javier.pena at redhat.com Tue Aug 30 08:02:57 2016 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 30 Aug 2016 04:02:57 -0400 (EDT) Subject: [rdo-list] Tempest Packaging in RDO In-Reply-To: References: Message-ID: <731736690.7134214.1472544177263.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Hi everyone, > > As a follow-up from the conversations that we had on the irc, I wanted > to summarize the current issues and possible actions. As we ran out of > time during the meeting I'd really appreciate some feedback on this. > > 1) Tempest Plugins > > With the current situation (project package and project-test package) we > do install the entry point with the main package, even if the > sub-package is not installed. In this case, Tempest will discover the > test entry point without the code and fail. 
> > On this, I'd propose to integrate-back the -tests packages for the > in-tree plugins, even if it would mean adding some more dependencies to > them and remove tempest requirement as a dependency for out-tree ones > (i.e. Designate[2] or tempest-horizon[3]), which will get rid of any > circular dependency. I'd really like to remove the tempest dependency for designate-tests and horizon-tests soon, at least to unblock part of the issues (not being able to properly test designate/horizon without pulling the whole Tempest lot with it). > > This seems to breaks RPM logic. This is only to allow mixing packages > and git/pip installations which is a source of troubles. At best, it > could be a temporary measure. If it's about circular dependencies, RPM > knows how to handle them, and the best way to break them would be > plugins requiring tempest and not the reverse. > > We could have a a subpackage tempest-all that requires all the plugins > for an AIO install. (alternatively: tempest requires all plugins, and > plugins would require a tempest-core packages. tempest package being > empty and tempest-core containing the framework) > If I understand it correctly, the main reason for having tempest depend on all test subpackages is to simplify the process for CI jobs, isn't it? If so, this tempest-all metapackage could be a good short-term solution, while we fix the bigger issues. Javier > Also, the plugin state is not really optimal, as some of those plugins > would pin to an specific commit of tempest. All of this should be sorted > out in the end by [1] but for now we'll need to find some solution on this. > > > 2) Tempest Requirements > > As tempest is installed in the undercloud, it will use whatever > plugins are installed there, so if checking every test from the > overcloud is a must it'd need to have a lot of packages installed as > dependencies (that's the current status). > > Some ideas for this have been. > > - Create a parent-package/metapackage called 'tempest-all-test' or > similar, which will allow to install all the sub-componants packages. > This would allow not to install all the tempest dependencies but rather > only one component at a time. It would Requires: all test packages for > anyone who needs to install them all. > > - Puppet-Tempest: it will install tempest itself (albeit only from > source right now, this can be addressed: > https://bugs.launchpad.net/puppet-tempest/+bug/1549366) but it will also > install tempest plugins based on the parameters that define the > availability of services. For example, if "ceilometer_available" is set > to true, it will install python-ceilometer-tests and set the config in > tempest.conf "[service_available] ceilometer=True" > > > 3) Tempest Config Tool > Right now is the only diff between upstream vanilla and our > downstream fork, on here, our proposal could be to move it to its own > repo and decouple it from tempest so we could use vanilla and not depend > on the downstream fork. This will later on also integrated and used on > RefStack/DefCore (WIP). > > There has been quite some discussion around this, and there are > duplicated projects doing the same work with some difference (basically > to use or not api_discovery) > > During the past summit and afterwards, we agreed on creating something > that would depend on the installer (fetching the configuration from > TripleO), as the tool as it is won't be accepted on upstream tempest. > > This had been dropped dead since, so I'd like to resume the discussion. 
> > Thanks for any ideas! > > Daniel > > --- > [1] > http://lists.openstack.org/pipermail/openstack-dev/2016-August/101552.html > [2] https://review.rdoproject.org/r/#/c/1820/ > [3] > https://github.com/openstack/tempest-horizon/blob/master/requirements.txt#L11 > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From hguemar at fedoraproject.org Tue Aug 30 13:19:43 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 30 Aug 2016 15:19:43 +0200 Subject: [rdo-list] Python-shade in RDO In-Reply-To: <1a036e38-1126-1a01-5ebe-1c198432e516@redhat.com> References: <9f38876e-eec8-b4b1-11fd-68d14f7c471d@redhat.com> <3cdfa6fc-6ca7-b99c-b2c2-2f0a59719ab8@redhat.com> <4ee88a00-980a-8e01-c25e-3fdf4a3ba28c@redhat.com> <1a036e38-1126-1a01-5ebe-1c198432e516@redhat.com> Message-ID: 2016-08-30 8:23 GMT+02:00 Graeme Gillies : > > I would really implore you to think very carefully about splitting out > release:independant projects into a separate repo, especially one with > the name "extras". > Naming is easy to change, could be RDO clients and SDK, RDO Cloud users or whatever > Fracturing the repo setup only reduces usability, causes confusion, and > for those projects that choose an independent release model, it makes > them feel like second class citizens and less likely to want to care or > be part of the RDO community. > As Alan explained, we can't. Actually, forcing release-independent projects like shade in release-specific repositories would make them more of a second-class citizen. Let's say that shade requires a newer version of clients than we can ship in Mitaka, that would force us to pin shade to an older release and maybe fork it to backport security updates. A separate repository would give us the flexibility to ship the latest and greatest of those projects without worrying to break stuff that will be installed in our cloud nodes. > As I've already mentioned, we already ship a release:independant project > in the normal repo, and I fail to see from a technical level why others > can't do the same. Simply ship their latest stable release in the > current stable repo. > That's poor workaround because we didn't have the flexibility to do otherwise. As for shade, stable releases requirements don't match release-dependent projects requirements and we do have a precedent: gnocchi. Gnocchi follows its own branching model and because of requirements, we have to collaborate with upstream devs to map specific gnocchi branches to an OpenStack release, though if you install gnocchi nodes in separated nodes, it does not matter. > A lot of the projects that choose release:independant are smaller, > perhaps a bit newer, and still in a growth phase. If RDO is able to show > that those projects can be a part of RDO like everything else, it means > they are more likely to participate in the community, as it makes their > software more accessible. > We can provide flexibility like we did with gnocchi but here I'd rather think of how could we provide better fits to our community by adding a new repository to fit the needs of cloud user > Remember the goal here is to grow the community and have as many > projects participating in RDO as possible. Encouraging the smaller > projects to do so is a great way to help that. > > Regards, > > Graeme > *nods* Regards, H. 
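As a sketch of what that branch mapping looks like on the packaging side, rdoinfo can pin a per-OpenStack-release source branch for a project that versions itself independently. The entry below is illustrative only (the field layout follows rdoinfo conventions, the branch numbers are made up); the real gnocchi entry is at the rdo.yml link Alan posted earlier in the thread.

    # illustrative rdoinfo-style entry, not the actual rdo.yml content
    - project: gnocchi
      upstream: git://git.openstack.org/openstack/gnocchi
      tags:
        mitaka:
          source-branch: stable/2.0
        newton:
          source-branch: stable/2.1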
>> >>> Regards, >>> >>> Graeme >>> >>> [1] https://governance.openstack.org/reference/tags/ >>> >>> -- >>> Graeme Gillies >>> Principal Systems Administrator >>> Openstack Infrastructure >>> Red Hat Australia > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia From sbaker at redhat.com Tue Aug 30 21:45:42 2016 From: sbaker at redhat.com (Steve Baker) Date: Wed, 31 Aug 2016 09:45:42 +1200 Subject: [rdo-list] Tempest Packaging in RDO In-Reply-To: References: <695cc79a-28f2-4d2f-3b15-93be24016a53@redhat.com> Message-ID: <248ac2f4-ac68-52fd-7dec-a443d459442a@redhat.com> On 30/08/16 19:55, Alan Pevec wrote: >> The solution is a little unconventional though so it will need some >> feedback. What it does is to do a standard python install then >> duplicate the egg-info directory and manipulate it to resemble a dedicated >> tempest plugin package. >> >> https://review.rdoproject.org/r/#/c/1980/ > I like that as a workaround for in-tree tempest plugins but proper > solution is really that upstream project creates a separate repo for > their tempest plugin. Is that in progress for Heat? > We've discussed this multiple times and each time the consensus has been to leave the tests in-tree. Developers really like the convenience in including a functional/integration test in the same commit as a bug or feature. This means there are currently no plans to create a new repo for heat tempest tests. From apevec at redhat.com Wed Aug 31 07:59:35 2016 From: apevec at redhat.com (Alan Pevec) Date: Wed, 31 Aug 2016 09:59:35 +0200 Subject: [rdo-list] Tempest Packaging in RDO In-Reply-To: <248ac2f4-ac68-52fd-7dec-a443d459442a@redhat.com> References: <695cc79a-28f2-4d2f-3b15-93be24016a53@redhat.com> <248ac2f4-ac68-52fd-7dec-a443d459442a@redhat.com> Message-ID: On Tue, Aug 30, 2016 at 11:45 PM, Steve Baker wrote: > We've discussed this multiple times and each time the consensus has been to > leave the tests in-tree. Developers really like the convenience in including > a functional/integration test in the same commit as a bug or feature. This > means there are currently no plans to create a new repo for heat tempest > tests. But how will you handle in-tree Tempest plugin in stable branches? Tempest is branchless and other in-tree plugins have been doing ugly things like checking out specific Tempest commit which is not packageable :( Alan From rbowen at redhat.com Wed Aug 31 13:06:27 2016 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 31 Aug 2016 09:06:27 -0400 Subject: [rdo-list] Newton M3 test day, September 8, 9 Message-ID: As per $subject, we plan to hold the Newton M3 test day September 8th and 9th. Details are at https://www.rdoproject.org/testday/newton/milestone3/ As always we appreciate any updates to the test plans for things that you would like to see tested. Please provide instructions for those that are less familiar with the procedure. -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity From taisto.qvist at gmail.com Wed Aug 31 15:51:55 2016 From: taisto.qvist at gmail.com (Taisto Qvist) Date: Wed, 31 Aug 2016 17:51:55 +0200 Subject: [rdo-list] Is Keystone/Identity-v3 supported by RDO Packstack? Multi-node swift --answer-file? In-Reply-To: <187458073.6327797.1472229179556.JavaMail.zimbra@redhat.com> References: <187458073.6327797.1472229179556.JavaMail.zimbra@redhat.com> Message-ID: Thanks for the tips everyone. 
I found https://www.linux.com/blog/attempt-set-rdo-mitaka-any-given-time-delorean-trunks, and but that also failed with idv3, but on heat issues instead. I am unfortunately completely lost/confused/overwhelmed with all the different git/repository information found in different places, and building my own RPMs feels way to complicated as someone linked to. So I'll just clearly demonstrate my NooB-ness, by asking: If I follow the instructions on " https://www.rdoproject.org/install/quickstart/", how do I modify them to update my packstack-* to use the newer modules mentioned below? Regards Taisto 2016-08-26 18:32 GMT+02:00 Javier Pena : > > > ----- Original Message ----- > > Fairly certain Cinder is a known problem against v3 in Packstack [1] > > but I don't have the details. > > > > [1]: > > https://github.com/openstack/packstack/blob/master/ > releasenotes/notes/keystone-v3-note-065b6302b49285f3.yaml > > > > We had a patch to fix that merged in Mitaka some time ago [1], which > depended on a puppet-cinder patch [2]. We have not yet released a new > package to the CBS repos, but you could try updating your > openstack-packstack, openstack-packstack-puppet and > openstack-puppet-modules using the packages from RDO stable/mitaka Trunk > [3]. > > Regards, > Javier > > [1] - https://review.openstack.org/323690 > [2] - https://review.openstack.org/320956 > [3] - http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo- > trunk-mitaka-tested/ > > > > David Moreau Simard > > Senior Software Engineer | Openstack RDO > > > > dmsimard = [irc, github, twitter] > > > > > > On Fri, Aug 26, 2016 at 11:57 AM, Taisto Qvist > > wrote: > > > Hi folks, > > > > > > I've managed to create a fairly well working packstack-answer file, > that I > > > used for liberty installations, with a 4-multi-node setup, eventually > > > handling nested kvm, live-migration, LBAAS, etc, so I am quite happy > with > > > that. > > > > > > Now, my aim is Identity v3. From the packstack answer-file, it seems > > > supported, but I cant get it to work. I couldnt get it to work with > > > liberty, > > > and now testing with Mitaka, it still doesnt work, failing on: > > > > > > ERROR : Error appeared during Puppet run: 10.9.24.100_cinder.pp > > > Error: Could not prefetch cinder_type provider 'openstack': Execution > of > > > '/usr/bin/openstack volume type list --quiet --format csv --long' > returned > > > 1: Expecting to find domain in project - the server could not comply > with > > > the request since it is either malformed or otherwise incorrect. The > client > > > is assumed to be in error. (HTTP 400) (Request-ID: > > > req-d8cccfb9-c060-4cdd-867d-603bf70ebf24) > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Wed Aug 31 16:11:49 2016 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 31 Aug 2016 21:41:49 +0530 Subject: [rdo-list] [CentOS-devel] [Cloud SIG][RDO] RDO meeting minutes - 2016-08-31 Message-ID: ============================== #rdo: RDO meeting - 2016-09-31 ============================== Meeting started by chandankumar at 15:00:44 UTC. The full logs are available at https://meetbot.fedoraproject.org/rdo/2016-08-31/rdo_meeting_-_2016-09-31.2016-08-31-15.00.log.html . 
Meeting summary
---------------
* roll call (chandankumar, 15:00:59)
* tempest packaging (chandankumar, 15:04:03)
* LINK: https://www.redhat.com/archives/rdo-list/2016-August/msg00240.html (chandankumar, 15:05:23)
* ACTION: apevec review and try to simplify egg-info mangling for -tests-tempest (apevec, 15:17:36)
* ACTION: chandankumar to take a look at python-tempest changes (chandankumar, 15:17:51)
* config tool can be decoupled from redhat-openstack/tempest in ocata release cycle (chandankumar, 15:24:44)
* decoupling tempest config tool will help to consume in refstack in future (chandankumar, 15:26:17)
* shade distribution (chandankumar, 15:28:42)
* ACTION: apevec to update rdo-list thread about rdo-tools (apevec, 15:40:20)
* python3 support (chandankumar, 15:40:43)
* LINK: http://cbs.centos.org/koji/packageinfo?packageID=2504 <- apevec (number80, 15:51:53)
* ACTION: apuimedo to talk to SCL sig for python35 (chandankumar, 15:54:29)
* oooq images (chandankumar, 15:57:06)
* Newton milestone 3 is this week, and so test day is next week, pending promotions: https://www.rdoproject.org/testday/newton/milestone3/ (rbowen, 16:01:09)
* If you will be at OpenStack Summit, and wish to give a demo: https://etherpad.openstack.org/p/rdo-barcelona-summit-booth (rbowen, 16:01:15)
* chair for next meeting (chandankumar, 16:01:38)
* ACTION: trown to chair for next meeting (chandankumar, 16:02:12)

Meeting ended at 16:02:19 UTC.

Action Items
------------
* apevec review and try to simplify egg-info mangling for -tests-tempest
* chandankumar to take a look at python-tempest changes
* apevec to update rdo-list thread about rdo-tools
* apuimedo to talk to SCL sig for python35
* trown to chair for next meeting

Action Items, by person
-----------------------
* apevec
  * apevec review and try to simplify egg-info mangling for -tests-tempest
  * apevec to update rdo-list thread about rdo-tools
* apuimedo
  * apuimedo to talk to SCL sig for python35
* chandankumar
  * chandankumar to take a look at python-tempest changes
* trown
  * trown to chair for next meeting
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* apevec (97)
* chandankumar (69)
* apuimedo (41)
* number80 (36)
* dmellado|mtg (26)
* zodbot (17)
* leifmadsen (14)
* rbowen (11)
* jruzicka (11)
* dmellado (10)
* trown (9)
* jpena (9)
* imcsk8_ (6)
* weshay (6)
* eggmaster (5)
* myoung (4)
* openstack (3)
* rdogerrit (3)
* Duck (2)
* rdobot (1)
* jschlueter (1)
* pabelanger (1)
* coolsvap (1)
* hrybacki (1)

Generated by `MeetBot`_ 0.1.4 .. 
_`MeetBot`: http://wiki.debian.org/MeetBot From bderzhavets at hotmail.com Wed Aug 31 18:03:10 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 31 Aug 2016 18:03:10 +0000 Subject: [rdo-list] Is Keystone/Identity-v3 supported by RDO Packstack? Multi-node swift --answer-file? In-Reply-To: References: <187458073.6327797.1472229179556.JavaMail.zimbra@redhat.com>, Message-ID: Please, see detailed instructions here :- Backport upstream commits to stable RDO Mitaka release && Deployments with Keystone API V3 http://bderzhavets.blogspot.ru/2016/05/backport-upstream-commits-to-stable-rdo.html Boris. ________________________________ From: rdo-list-bounces at redhat.com on behalf of Taisto Qvist Sent: Wednesday, August 31, 2016 6:51:55 PM To: Javier Pena Cc: rdo-list Subject: Re: [rdo-list] Is Keystone/Identity-v3 supported by RDO Packstack? Multi-node swift --answer-file? Thanks for the tips everyone. I found https://www.linux.com/blog/attempt-set-rdo-mitaka-any-given-time-delorean-trunks, and but that also failed with idv3, but on heat issues instead. I am unfortunately completely lost/confused/overwhelmed with all the different git/repository information found in different places, and building my own RPMs feels way to complicated as someone linked to. So I'll just clearly demonstrate my NooB-ness, by asking: If I follow the instructions on "https://www.rdoproject.org/install/quickstart/", how do I modify them to update my packstack-* to use the newer modules mentioned below? Regards Taisto 2016-08-26 18:32 GMT+02:00 Javier Pena >: ----- Original Message ----- > Fairly certain Cinder is a known problem against v3 in Packstack [1] > but I don't have the details. > > [1]: > https://github.com/openstack/packstack/blob/master/releasenotes/notes/keystone-v3-note-065b6302b49285f3.yaml > We had a patch to fix that merged in Mitaka some time ago [1], which depended on a puppet-cinder patch [2]. We have not yet released a new package to the CBS repos, but you could try updating your openstack-packstack, openstack-packstack-puppet and openstack-puppet-modules using the packages from RDO stable/mitaka Trunk [3]. Regards, Javier [1] - https://review.openstack.org/323690 [2] - https://review.openstack.org/320956 [3] - http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-mitaka-tested/ > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Fri, Aug 26, 2016 at 11:57 AM, Taisto Qvist > > wrote: > > Hi folks, > > > > I've managed to create a fairly well working packstack-answer file, that I > > used for liberty installations, with a 4-multi-node setup, eventually > > handling nested kvm, live-migration, LBAAS, etc, so I am quite happy with > > that. > > > > Now, my aim is Identity v3. From the packstack answer-file, it seems > > supported, but I cant get it to work. 
I couldnt get it to work with > > liberty, > > and now testing with Mitaka, it still doesnt work, failing on: > > > > ERROR : Error appeared during Puppet run: 10.9.24.100_cinder.pp > > Error: Could not prefetch cinder_type provider 'openstack': Execution of > > '/usr/bin/openstack volume type list --quiet --format csv --long' returned > > 1: Expecting to find domain in project - the server could not comply with > > the request since it is either malformed or otherwise incorrect. The client > > is assumed to be in error. (HTTP 400) (Request-ID: > > req-d8cccfb9-c060-4cdd-867d-603bf70ebf24) > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed Aug 31 18:39:29 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 31 Aug 2016 18:39:29 +0000 Subject: [rdo-list] Newton M3 test day, September 8, 9 In-Reply-To: References: Message-ID: Instructions provided on page https://www.rdoproject.org/tripleo/ $ export VIRTHOST='my_test_machine.example.com' $ curl -O https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh $ bash quickstart.sh $VIRTHOST forces me to setup minimal configuration, is there any option which would allow me working with Newton M3 to test ha.yml like for Mitaka/Stable $ git clone https://github.com/openstack/tripleo-quickstart $ cd tripleo-quickstart $ sudo bash quickstart.sh --install-deps $ bash quickstart.sh --config ./ha.yml $VIRTHOST Thank you. Boris. ________________________________ From: rdo-list-bounces at redhat.com on behalf of Rich Bowen Sent: Wednesday, August 31, 2016 4:06 PM To: rdo-list at redhat.com Subject: [rdo-list] Newton M3 test day, September 8, 9 As per $subject, we plan to hold the Newton M3 test day September 8th and 9th. Details are at https://www.rdoproject.org/testday/newton/milestone3/ As always we appreciate any updates to the test plans for things that you would like to see tested. Please provide instructions for those that are less familiar with the procedure. -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdoproject.org @RDOCommunity _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Wed Aug 31 21:47:28 2016 From: sbaker at redhat.com (Steve Baker) Date: Thu, 1 Sep 2016 09:47:28 +1200 Subject: [rdo-list] Tempest Packaging in RDO In-Reply-To: References: <695cc79a-28f2-4d2f-3b15-93be24016a53@redhat.com> <248ac2f4-ac68-52fd-7dec-a443d459442a@redhat.com> Message-ID: <66a78a0e-d93f-841d-8453-62d4f37603a5@redhat.com> On 31/08/16 19:59, Alan Pevec wrote: > On Tue, Aug 30, 2016 at 11:45 PM, Steve Baker wrote: >> We've discussed this multiple times and each time the consensus has been to >> leave the tests in-tree. Developers really like the convenience in including >> a functional/integration test in the same commit as a bug or feature. This >> means there are currently no plans to create a new repo for heat tempest >> tests. > But how will you handle in-tree Tempest plugin in stable branches? 
> Tempest is branchless and other in-tree plugins have been doing ugly > things like checking out specific Tempest commit which is not > packageable :( > > Alan We're doing the following, which I think will mitigate branchless concerns for now: - minimal dependency on tempest imports, only tempest.config and tempest.test_discover.plugins - self contained config in the [heat_plugin] namespace - config options like skip_functional_test_list and skip_scenario_test_list so that latest tests can run against an old API which lacks the required features for all tests I realize keeping tests in-tree is a hassle for downstream packaging, at least for now I'm in favor of keeping them in-tree but I will raise packaging concerns when we discuss this again. cheers -------------- next part -------------- An HTML attachment was scrubbed... URL:
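As a concrete sketch of the skip-list approach, a stable branch could carry a tempest.conf fragment along these lines. Only the [heat_plugin] section name and the two skip_* option names come from Steve's message above; the test names and values are placeholders, not real heat tests.

    [heat_plugin]
    # skip tests that exercise features the older stable API does not have yet
    skip_functional_test_list = SomeNewFeatureTest,AnotherNewFeatureTest
    skip_scenario_test_list = SomeNewScenarioTest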