From ibravo at ltgfederal.com Thu Oct 1 02:29:44 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Wed, 30 Sep 2015 22:29:44 -0400
Subject: [Rdo-list] RDO Manager: Missing openstack-ironic-python-agent
Message-ID: <591CA67E-6F50-41FA-A1BE-338715D59BAC@ltgfederal.com>

After deploying the undercloud following the upstream documentation on a CentOS 7 machine, the process of building the images fails by not finding the package openstack-ironic-python-agent. Let me show you:

[stack at bl16 ~]$ openstack overcloud image build --type agent-ramdisk

… some time later ….

Running install-packages install.
Package list: openstack-ironic-python-agent
Loading "fastestmirror" plugin
Config time: 0.009
Yum version: 3.4.3
rpmdb time: 0.000
Setting up Package Sacks
Loading mirror speeds from cached hostfile
 * base: mirrors.advancedhosters.com
 * epel: mirror.us.leaseweb.net
 * extras: mirrors.advancedhosters.com
 * updates: centos.mbni.med.umich.edu
pkgsack time: 14.092
Checking for virtual provide or file-provide for openstack-ironic-python-agent
No package openstack-ironic-python-agent available.
Error: Nothing to do

Yet this package does exist in the current delorean repo:

[stack at bl16 ~]$ sudo yum list openstack-ironic-python-agent
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * base: centos.mbni.med.umich.edu
 * epel: mirror.symnds.com
 * extras: mirror.vcu.edu
 * updates: mirror.symnds.com
482 packages excluded due to repository priority protections
Installed Packages
openstack-ironic-python-agent.noarch 0.1.0-dev770.el7.centos @delorean

I tried to debug the code a bit, and I believe the issue might be caused by https://github.com/rdo-management/python-rdomanager-oscplugin/blob/646028f82eeaec9111c6a50ca2e57d3b14f811ba/rdomanager_oscplugin/v1/overcloud_image.py which is using the Kilo repository for building the images instead of the Liberty one that is deployed in the undercloud.

Indeed, the Kilo repository at http://trunk.rdoproject.org/kilo/centos7/current-passed-ci/ doesn't have any of the openstack-ironic-* packages, which do exist in http://trunk.rdoproject.org/liberty/centos7/current-passed-ci/

Does this make sense?

Thanks,
IB

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From trown at redhat.com Thu Oct 1 11:16:56 2015
From: trown at redhat.com (John Trowbridge)
Date: Thu, 1 Oct 2015 07:16:56 -0400
Subject: [Rdo-list] RDO Manager: Missing openstack-ironic-python-agent
In-Reply-To: <591CA67E-6F50-41FA-A1BE-338715D59BAC@ltgfederal.com>
References: <591CA67E-6F50-41FA-A1BE-338715D59BAC@ltgfederal.com>
Message-ID: <560D1628.7050405@redhat.com>

On 09/30/2015 10:29 PM, Ignacio Bravo wrote:
> After deploying the undercloud following the upstream documentation on a CentOS 7 machine, the process of building the images fails by not finding the package openstack-ironic-python-agent. Let me show you:
>
> [stack at bl16 ~]$ openstack overcloud image build --type agent-ramdisk
>
> … some time later ….
>
> Running install-packages install.
> Package list: openstack-ironic-python-agent
> Loading "fastestmirror" plugin
> Config time: 0.009
> Yum version: 3.4.3
> rpmdb time: 0.000
> Setting up Package Sacks
> Loading mirror speeds from cached hostfile
> * base: mirrors.advancedhosters.com
> * epel: mirror.us.leaseweb.net
> * extras: mirrors.advancedhosters.com
> * updates: centos.mbni.med.umich.edu
> pkgsack time: 14.092
> Checking for virtual provide or file-provide for openstack-ironic-python-agent
> No package openstack-ironic-python-agent available.
> Error: Nothing to do
>
> Yet this package does exist in the current delorean repo:
>
> [stack at bl16 ~]$ sudo yum list openstack-ironic-python-agent
> Loaded plugins: fastestmirror, priorities
> Loading mirror speeds from cached hostfile
> * base: centos.mbni.med.umich.edu
> * epel: mirror.symnds.com
> * extras: mirror.vcu.edu
> * updates: mirror.symnds.com
> 482 packages excluded due to repository priority protections
> Installed Packages
> openstack-ironic-python-agent.noarch 0.1.0-dev770.el7.centos @delorean
>
> I tried to debug the code a bit, and I believe the issue might be caused by https://github.com/rdo-management/python-rdomanager-oscplugin/blob/646028f82eeaec9111c6a50ca2e57d3b14f811ba/rdomanager_oscplugin/v1/overcloud_image.py which is using the Kilo repository for building the images instead of the Liberty one that is deployed in the undercloud.
>
> Indeed, the Kilo repository at http://trunk.rdoproject.org/kilo/centos7/current-passed-ci/ doesn't have any of the openstack-ironic-* packages, which do exist in http://trunk.rdoproject.org/liberty/centos7/current-passed-ci/
>
> Does this make sense?

I think this may be a documentation issue. I have been using

`export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-current.repo /etc/yum.repos.d/delorean-deps.repo"`

before building images. I see that is missing from the docs though.

The other bit is that all of the code hosted on the rdo-management github has moved to openstack. So rdo-management/python-rdomanager-oscplugin -> openstack/python-tripleoclient.

Thanks for sticking with it. By the way, the inspection will not work until this patch and its dependencies land (https://review.openstack.org/#/c/228190/). However, it can be safely skipped and you still get a working deploy.

> Thanks,
> IB
>
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From apevec at gmail.com Thu Oct 1 12:58:23 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 1 Oct 2015 14:58:23 +0200
Subject: [Rdo-list] [ci] delorean ci is down
In-Reply-To:
References:
Message-ID:

2015-09-30 22:39 GMT+02:00 Wesley Hayutin :
> FYI,
> The jenkins slave used in delorean package CI is experiencing some issues
> atm.
> The CI will be shutdown while these issues are resolved.

That slave is back to normal; it looks like it was a temporary issue, and we'll keep watching it. Can we add more slaves for this job?
Cheers, Alan From javier.pena at redhat.com Thu Oct 1 15:29:40 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 1 Oct 2015 11:29:40 -0400 (EDT) Subject: [Rdo-list] Migrating from MySQL-python to PyMySQL In-Reply-To: <1762368750.61570500.1443712321615.JavaMail.zimbra@redhat.com> Message-ID: <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> Hi all, During the review of a packaging change to the Neutron package [1], we realized that our installation tools are still using MySQL-python for the db connections, and several of our packages still depend on the MySQL-python package, even though they rely on oslo.db, which has now moved to PyMySQL as default driver [2]. I'd like to propose the following plan for this migration: a) To avoid any short-term breakage, make python-oslo-db require MySQL-python and python-PyMySQL. b) Remove all MySQL-python dependencies from those packages that should no longer require it ([3], if I did not miss anyone). All these packages already require python-oslo-db, so there would be no missing deps. c) Update installers to support PyMySQL in their db connection strings. d) Once MySQL-python is no longer necessary, remove it from the dependencies for python-oslo-db What do you think? Steps a) and b) should be relatively easy to do in the short term, but I'm concerned about the testing implications of c) at this time of the Liberty cycle. Regards, Javier [1]- https://review.gerrithub.io/247972 [2]- http://docs.openstack.org/developer/oslo.db/installation.html#using-with-mysql-python [3] python-cinder python-glance openstack-heat-common python-keystone python-manila python-neutron python-nova python-octavia openstack-designate-central openstack-designate-mdns From hguemar at fedoraproject.org Thu Oct 1 16:11:31 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 1 Oct 2015 18:11:31 +0200 Subject: [Rdo-list] Migrating from MySQL-python to PyMySQL In-Reply-To: <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> References: <1762368750.61570500.1443712321615.JavaMail.zimbra@redhat.com> <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> Message-ID: 2015-10-01 17:29 GMT+02:00 Javier Pena : > Hi all, > > During the review of a packaging change to the Neutron package [1], we realized that our installation tools are still using MySQL-python for the db connections, and several of our packages still depend on the MySQL-python package, even though they rely on oslo.db, which has now moved to PyMySQL as default driver [2]. > > I'd like to propose the following plan for this migration: > > a) To avoid any short-term breakage, make python-oslo-db require MySQL-python and python-PyMySQL. > b) Remove all MySQL-python dependencies from those packages that should no longer require it ([3], if I did not miss anyone). All these packages already require python-oslo-db, so there would be no missing deps. > c) Update installers to support PyMySQL in their db connection strings. > d) Once MySQL-python is no longer necessary, remove it from the dependencies for python-oslo-db > > What do you think? Steps a) and b) should be relatively easy to do in the short term, but I'm concerned about the testing implications of c) at this time of the Liberty cycle. > > Regards, > Javier > This has been discussed few months ago with Jakub Dornak who maintains the package in Fedora and it does not require any action in short term. 1. 
package has been renamed into python-mysql in Fedora and switched to PyMySQL as upstream sources http://pkgs.fedoraproject.org/cgit/python-mysql.git/tree/python-mysql.spec#n19 2. it provides/obsoletes MySQL-python http://pkgs.fedoraproject.org/cgit/python-mysql.git/tree/python-mysql.spec#n36 We're already using the newer driver :) But I agree that we should do the cleanup during the Mitaka cycle, please add a trello card with the following (I updated your list) 1. migrate all requirements from MySQL-python to python-mysql (easyfix) 2. update installers H. > [1]- https://review.gerrithub.io/247972 > [2]- http://docs.openstack.org/developer/oslo.db/installation.html#using-with-mysql-python > [3] python-cinder > python-glance > openstack-heat-common > python-keystone > python-manila > python-neutron > python-nova > python-octavia > openstack-designate-central > openstack-designate-mdns > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Thu Oct 1 16:20:53 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 1 Oct 2015 12:20:53 -0400 (EDT) Subject: [Rdo-list] Migrating from MySQL-python to PyMySQL In-Reply-To: References: <1762368750.61570500.1443712321615.JavaMail.zimbra@redhat.com> <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> Message-ID: <668710649.61621569.1443716453437.JavaMail.zimbra@redhat.com> ----- Original Message ----- > 2015-10-01 17:29 GMT+02:00 Javier Pena : > > Hi all, > > > > During the review of a packaging change to the Neutron package [1], we > > realized that our installation tools are still using MySQL-python for the > > db connections, and several of our packages still depend on the > > MySQL-python package, even though they rely on oslo.db, which has now > > moved to PyMySQL as default driver [2]. > > > > I'd like to propose the following plan for this migration: > > > > a) To avoid any short-term breakage, make python-oslo-db require > > MySQL-python and python-PyMySQL. > > b) Remove all MySQL-python dependencies from those packages that should > > no longer require it ([3], if I did not miss anyone). All these packages > > already require python-oslo-db, so there would be no missing deps. > > c) Update installers to support PyMySQL in their db connection strings. > > d) Once MySQL-python is no longer necessary, remove it from the > > dependencies for python-oslo-db > > > > What do you think? Steps a) and b) should be relatively easy to do in the > > short term, but I'm concerned about the testing implications of c) at this > > time of the Liberty cycle. > > > > Regards, > > Javier > > > > This has been discussed few months ago with Jakub Dornak who maintains > the package in Fedora and it does not require any action in short > term. Aha, good to know :). > > 1. package has been renamed into python-mysql in Fedora and switched > to PyMySQL as upstream sources > http://pkgs.fedoraproject.org/cgit/python-mysql.git/tree/python-mysql.spec#n19 > 2. it provides/obsoletes MySQL-python > http://pkgs.fedoraproject.org/cgit/python-mysql.git/tree/python-mysql.spec#n36 > > We're already using the newer driver :) > > But I agree that we should do the cleanup during the Mitaka cycle, > please add a trello card with the following (I updated your list) > 1. 
migrate all requirements from MySQL-python to python-mysql (easyfix) Wouldn't it be a good idea to move those requirements to python-oslo-db, instead of having each individual package require it? Regards, Javier > 2. update installers > > H. > > > [1]- https://review.gerrithub.io/247972 > > [2]- > > http://docs.openstack.org/developer/oslo.db/installation.html#using-with-mysql-python > > [3] python-cinder > > python-glance > > openstack-heat-common > > python-keystone > > python-manila > > python-neutron > > python-nova > > python-octavia > > openstack-designate-central > > openstack-designate-mdns > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ihrachys at redhat.com Thu Oct 1 16:24:30 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 1 Oct 2015 18:24:30 +0200 Subject: [Rdo-list] Migrating from MySQL-python to PyMySQL In-Reply-To: References: <1762368750.61570500.1443712321615.JavaMail.zimbra@redhat.com> <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> Message-ID: > On 01 Oct 2015, at 18:11, Ha?kel wrote: > > 2015-10-01 17:29 GMT+02:00 Javier Pena : >> Hi all, >> >> During the review of a packaging change to the Neutron package [1], we realized that our installation tools are still using MySQL-python for the db connections, and several of our packages still depend on the MySQL-python package, even though they rely on oslo.db, which has now moved to PyMySQL as default driver [2]. >> >> I'd like to propose the following plan for this migration: >> >> a) To avoid any short-term breakage, make python-oslo-db require MySQL-python and python-PyMySQL. >> b) Remove all MySQL-python dependencies from those packages that should no longer require it ([3], if I did not miss anyone). All these packages already require python-oslo-db, so there would be no missing deps. >> c) Update installers to support PyMySQL in their db connection strings. >> d) Once MySQL-python is no longer necessary, remove it from the dependencies for python-oslo-db >> >> What do you think? Steps a) and b) should be relatively easy to do in the short term, but I'm concerned about the testing implications of c) at this time of the Liberty cycle. >> >> Regards, >> Javier >> > > This has been discussed few months ago with Jakub Dornak who maintains > the package in Fedora and it does not require any action in short > term. > > 1. package has been renamed into python-mysql in Fedora and switched > to PyMySQL as upstream sources > http://pkgs.fedoraproject.org/cgit/python-mysql.git/tree/python-mysql.spec#n19 > 2. it provides/obsoletes MySQL-python > http://pkgs.fedoraproject.org/cgit/python-mysql.git/tree/python-mysql.spec#n36 > > We're already using the newer driver :) > > But I agree that we should do the cleanup during the Mitaka cycle, > please add a trello card with the following (I updated your list) > 1. migrate all requirements from MySQL-python to python-mysql (easyfix) > 2. update installers WAT? Are they API compatible? I don?t believe so. So providing the old package is neat but actually wrong. Or am I missing something? Ihar -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hguemar at fedoraproject.org Thu Oct 1 16:43:42 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 1 Oct 2015 18:43:42 +0200 Subject: [Rdo-list] Migrating from MySQL-python to PyMySQL In-Reply-To: <668710649.61621569.1443716453437.JavaMail.zimbra@redhat.com> References: <1762368750.61570500.1443712321615.JavaMail.zimbra@redhat.com> <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> <668710649.61621569.1443716453437.JavaMail.zimbra@redhat.com> Message-ID: 2015-10-01 18:20 GMT+02:00 Javier Pena : > > > Wouldn't it be a good idea to move those requirements to python-oslo-db, instead of having each individual package require it? > > Regards, > Javier > Yes, but I consider package directly depending on drivers as a bug :) You may add it to the list/ H. From ihrachys at redhat.com Thu Oct 1 16:47:30 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 1 Oct 2015 18:47:30 +0200 Subject: [Rdo-list] Migrating from MySQL-python to PyMySQL In-Reply-To: References: <1762368750.61570500.1443712321615.JavaMail.zimbra@redhat.com> <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> <668710649.61621569.1443716453437.JavaMail.zimbra@redhat.com> Message-ID: > On 01 Oct 2015, at 18:43, Ha?kel wrote: > > 2015-10-01 18:20 GMT+02:00 Javier Pena : >> >> >> Wouldn't it be a good idea to move those requirements to python-oslo-db, instead of having each individual package require it? >> >> Regards, >> Javier >> > > Yes, but I consider package directly depending on drivers as a bug :) > You may add it to the list/ Yes, neither oslo.db should depend on it. Installer should decide what they really want to configure: old or new, or postgres, or maybe even db2. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hguemar at fedoraproject.org Thu Oct 1 16:55:32 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 1 Oct 2015 18:55:32 +0200 Subject: [Rdo-list] Migrating from MySQL-python to PyMySQL In-Reply-To: References: <1762368750.61570500.1443712321615.JavaMail.zimbra@redhat.com> <1278971255.61585345.1443713380809.JavaMail.zimbra@redhat.com> Message-ID: 2015-10-01 18:24 GMT+02:00 Ihar Hrachyshka : > > WAT? Are they API compatible? I don?t believe so. So providing the old package is neat but actually wrong. Or am I missing something? > > Ihar Ok, the same people are maintaining two drivers: * mysqlclient (which is what is currently packaged) and is a drop-in replacement to MySQL-python * pymysql which is API compatible but not a drop-in replacement (which is named after the project hence the confusion) I think that oslo.db should come with some drivers pre-installed and it should be mysqlclient that comes by default not to break upgrades. Then, installers should handle that. Regards, H, From mohammed.arafa at gmail.com Fri Oct 2 03:24:46 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 1 Oct 2015 23:24:46 -0400 Subject: [Rdo-list] [rdo-manager] rabbitmq bug fixed? 
documentation update request
Message-ID:

Hi, I seem to remember this bug: https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/653405

The resolution requires an update to the documentation at http://docs.openstack.org/developer/tripleo-docs/installation/installing.html

The /etc/hosts file is required to be in the format:

127.0.0.1 myhost.mydomain myhost

It currently states:

127.0.0.1 myhost.mydomain

rabbitmq will attempt to find myhost (the short name) and fail. I have no idea what rabbitmq will do if it is resolving from DNS.

Thanks

-- 

*805010942448935*
*GR750055912MA*
*Link to me on LinkedIn *

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ibravo at ltgfederal.com Fri Oct 2 13:10:33 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Fri, 2 Oct 2015 09:10:33 -0400
Subject: [Rdo-list] [rdo-manager] rabbitmq bug fixed? documentation update request
In-Reply-To:
References:
Message-ID:

Thanks Mohammed for your insight.

Another documentation issue relates to the images being built using Kilo rather than Liberty if the following is not added (thanks, John):

I think this may be a documentation issue. I have been using `export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-current.repo /etc/yum.repos.d/delorean-deps.repo"`

If we want RDO Manager to be successful, we need to ensure that the documentation is up to date. There is nothing more frustrating than following the steps in the documentation and not getting a successful build.

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com
Office: (703) 951-7760

> On Oct 1, 2015, at 11:24 PM, Mohammed Arafa wrote:
>
> Hi, I seem to remember this bug: https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/653405
> The resolution requires an update to the documentation at http://docs.openstack.org/developer/tripleo-docs/installation/installing.html
>
> The /etc/hosts file is required to be in the format:
> 127.0.0.1 myhost.mydomain myhost
>
> It currently states:
> 127.0.0.1 myhost.mydomain
>
> rabbitmq will attempt to find myhost (the short name) and fail. I have no idea what rabbitmq will do if it is resolving from DNS.
>
> Thanks
>
> -- 
>
> 805010942448935
> GR750055912MA
> Link to me on LinkedIn
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kdreyer at redhat.com Fri Oct 2 19:28:01 2015
From: kdreyer at redhat.com (Ken Dreyer)
Date: Fri, 2 Oct 2015 13:28:01 -0600
Subject: [Rdo-list] More transparency in RDO infra/process
In-Reply-To:
References: <55FB45D1.2090902@redhat.com> <5600465F.4030201@redhat.com>
Message-ID:

On Mon, Sep 21, 2015 at 1:55 PM, Haïkel wrote:
> We've been working into consolidating our infrastructure (Cf. the
> fedora thread), and have a dedicated person to work full-time on RDO
> CI so we could open it.

Hi Haïkel,

Who is that person who's working full-time on RDO CI?
I had some questions a while back about the setup on OpenShift: https://www.redhat.com/archives/rdo-list/2015-June/msg00156.html - Ken From apevec at gmail.com Sat Oct 3 07:49:35 2015 From: apevec at gmail.com (Alan Pevec) Date: Sat, 3 Oct 2015 09:49:35 +0200 Subject: [Rdo-list] More transparency in RDO infra/process In-Reply-To: References: <55FB45D1.2090902@redhat.com> <5600465F.4030201@redhat.com> Message-ID: > > We've been working into consolidating our infrastructure (Cf. the > > fedora thread), and have a dedicated person to work full-time on RDO > > CI so we could open it. > > Hi Ha?kel, > > Who is that person who's working full-time on RDO CI? > > I had some questions a while back about the setup on OpenShift: > https://www.redhat.com/archives/rdo-list/2015-June/msg00156.html David works now on RDO CI, but prod-rdojenkins predates him and was set up by Wes so he could answer that question. Cheers, Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Mon Oct 5 00:04:32 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sun, 4 Oct 2015 20:04:32 -0400 Subject: [Rdo-list] RDO Manager: Missing openstack-ironic-python-agent In-Reply-To: <560D1628.7050405@redhat.com> References: <591CA67E-6F50-41FA-A1BE-338715D59BAC@ltgfederal.com> <560D1628.7050405@redhat.com> Message-ID: John thanks for the tip. thing is i am subscribed to this mailing list. so i was able to search for it here. but google didnt. will this make it into the documentation? On Thu, Oct 1, 2015 at 7:16 AM, John Trowbridge wrote: > > > On 09/30/2015 10:29 PM, Ignacio Bravo wrote: > > After deploying the undercloud following the upstream documentation on a > Centos 7 machine, the process of building the images fails by not finding > the package openstack-ironic-python-agent. Let me show you: > > > > [stack at bl16 ~]$ openstack overcloud image build --type agent-ramdisk > > > > ? some time later ?. > > > > Running install-packages install. Package list: > openstack-ironic-python-agent > > Loading "fastestmirror" plugin > > Config time: 0.009 > > Yum version: 3.4.3 > > rpmdb time: 0.000 > > Setting up Package Sacks > > Loading mirror speeds from cached hostfile > > * base: mirrors.advancedhosters.com > > * epel: mirror.us.leaseweb.net > > * extras: mirrors.advancedhosters.com > > * updates: centos.mbni.med.umich.edu > > pkgsack time: 14.092 > > Checking for virtual provide or file-provide for > openstack-ironic-python-agent > > No package openstack-ironic-python-agent available. 
> > Error: Nothing to do > > > > Yet this package does exist in the current delorean repo: > > > > [stack at bl16 ~]$ sudo yum list openstack-ironic-python-agent > > Loaded plugins: fastestmirror, priorities > > Loading mirror speeds from cached hostfile > > * base: centos.mbni.med.umich.edu > > * epel: mirror.symnds.com > > * extras: mirror.vcu.edu > > * updates: mirror.symnds.com > > 482 packages excluded due to repository priority protections > > Installed Packages > > openstack-ironic-python-agent.noarch > 0.1.0-dev770.el7.centos > @delorean > > > > I tried to debug the code a bit, and I believe the issue might be caused > by > https://github.com/rdo-management/python-rdomanager-oscplugin/blob/646028f82eeaec9111c6a50ca2e57d3b14f811ba/rdomanager_oscplugin/v1/overcloud_image.py > < > https://github.com/rdo-management/python-rdomanager-oscplugin/blob/646028f82eeaec9111c6a50ca2e57d3b14f811ba/rdomanager_oscplugin/v1/overcloud_image.py> > that is using the Kilo repository for building the images instead of the > Liberty that is deployed in the undercloud. > > > > Indeed, the kilo repository at > http://trunk.rdoproject.org/kilo/centos7/current-passed-ci/ < > http://trunk.rdoproject.org/kilo/centos7/current-passed-ci/> don?t have > any of the openstack-ironic-* packages, which they do exist in > http://trunk.rdoproject.org/liberty/centos7/current-passed-ci/ < > http://trunk.rdoproject.org/liberty/centos7/current-passed-ci/> > > > > Does this make sense? > > I think this may be a documentation issue. I have been using > > `export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > /etc/yum.repos.d/delorean-current.repo > /etc/yum.repos.d/delorean-deps.repo"` > > before building images. I see that is missing from the docs though. > > The other bit is that all of the code hosted on the rdo-management > github has moved to openstack. So > rdo-management/python-rdomanager-oscplugin -> > openstack/python-tripleoclient. > > Thanks for sticking with it. By way, the inspection will not work until > this patch and its dependencies land. > (https://review.openstack.org/#/c/228190/) However, it can be safely > skipped and still get a working deploy. > > > > > Thanks, > > IB > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Mon Oct 5 10:23:08 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 5 Oct 2015 06:23:08 -0400 Subject: [Rdo-list] RDO Manager: Missing openstack-ironic-python-agent In-Reply-To: References: <591CA67E-6F50-41FA-A1BE-338715D59BAC@ltgfederal.com> <560D1628.7050405@redhat.com> Message-ID: <56124F8C.4060700@redhat.com> On 10/04/2015 08:04 PM, Mohammed Arafa wrote: > John > thanks for the tip. > thing is i am subscribed to this mailing list. so i was able to search for > it here. but google didnt. > > will this make it into the documentation? We are using the upstream tripleo documentation for RDO-Manager. 
These docs are built from their own repo in the openstack namespace [1], and contributions are made just like all other openstack projects [2]. Contributions there are definitely welcome.

That said, I would like to do a documentation clean-up as soon as we get CI back up for RDO-Manager on liberty.

Thanks for sticking with it.

- John Trowbridge

[1] https://github.com/openstack/tripleo-docs
[2] http://docs.openstack.org/infra/manual/developers.html#development-workflow

> On Thu, Oct 1, 2015 at 7:16 AM, John Trowbridge wrote:
>
>> On 09/30/2015 10:29 PM, Ignacio Bravo wrote:
>>> After deploying the undercloud following the upstream documentation on a CentOS 7 machine, the process of building the images fails by not finding the package openstack-ironic-python-agent. Let me show you:
>>>
>>> [stack at bl16 ~]$ openstack overcloud image build --type agent-ramdisk
>>>
>>> … some time later ….
>>>
>>> Running install-packages install.
>>> Package list: openstack-ironic-python-agent
>>> Loading "fastestmirror" plugin
>>> Config time: 0.009
>>> Yum version: 3.4.3
>>> rpmdb time: 0.000
>>> Setting up Package Sacks
>>> Loading mirror speeds from cached hostfile
>>> * base: mirrors.advancedhosters.com
>>> * epel: mirror.us.leaseweb.net
>>> * extras: mirrors.advancedhosters.com
>>> * updates: centos.mbni.med.umich.edu
>>> pkgsack time: 14.092
>>> Checking for virtual provide or file-provide for openstack-ironic-python-agent
>>> No package openstack-ironic-python-agent available.
>>> Error: Nothing to do
>>>
>>> Yet this package does exist in the current delorean repo:
>>>
>>> [stack at bl16 ~]$ sudo yum list openstack-ironic-python-agent
>>> Loaded plugins: fastestmirror, priorities
>>> Loading mirror speeds from cached hostfile
>>> * base: centos.mbni.med.umich.edu
>>> * epel: mirror.symnds.com
>>> * extras: mirror.vcu.edu
>>> * updates: mirror.symnds.com
>>> 482 packages excluded due to repository priority protections
>>> Installed Packages
>>> openstack-ironic-python-agent.noarch 0.1.0-dev770.el7.centos @delorean
>>>
>>> I tried to debug the code a bit, and I believe the issue might be caused by https://github.com/rdo-management/python-rdomanager-oscplugin/blob/646028f82eeaec9111c6a50ca2e57d3b14f811ba/rdomanager_oscplugin/v1/overcloud_image.py which is using the Kilo repository for building the images instead of the Liberty one that is deployed in the undercloud.
>>>
>>> Indeed, the Kilo repository at http://trunk.rdoproject.org/kilo/centos7/current-passed-ci/ doesn't have any of the openstack-ironic-* packages, which do exist in http://trunk.rdoproject.org/liberty/centos7/current-passed-ci/
>>>
>>> Does this make sense?
>>
>> I think this may be a documentation issue. I have been using
>>
>> `export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-current.repo /etc/yum.repos.d/delorean-deps.repo"`
>>
>> before building images. I see that is missing from the docs though.
>>
>> The other bit is that all of the code hosted on the rdo-management github has moved to openstack. So rdo-management/python-rdomanager-oscplugin -> openstack/python-tripleoclient.
>>
>> Thanks for sticking with it.
By way, the inspection will not work until >> this patch and its dependencies land. >> (https://review.openstack.org/#/c/228190/) However, it can be safely >> skipped and still get a working deploy. >> >>> >>> Thanks, >>> IB >>> >>> >>> __ >>> Ignacio Bravo >>> LTG Federal, Inc >>> www.ltgfederal.com >>> >>> >>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > From cbrown2 at ocf.co.uk Mon Oct 5 11:01:38 2015 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Mon, 5 Oct 2015 12:01:38 +0100 Subject: [Rdo-list] RDO Manager: Missing openstack-ironic-python-agent In-Reply-To: <56124F8C.4060700@redhat.com> References: <591CA67E-6F50-41FA-A1BE-338715D59BAC@ltgfederal.com> <560D1628.7050405@redhat.com> <56124F8C.4060700@redhat.com> Message-ID: <1444042898.6599.21.camel@ocf-laptop> Hi John, On Mon, 2015-10-05 at 11:23 +0100, John Trowbridge wrote: > > On 10/04/2015 08:04 PM, Mohammed Arafa wrote: > > John > > thanks for the tip. > > thing is i am subscribed to this mailing list. so i was able to search for > > it here. but google didnt. > > > > will this make it into the documentation? > > We are using the upstream tripleo documentation for RDO-Manager. These > docs are built from there own repo in the openstack namespace [1], and > contributions are made just like all other openstack projects [2]. > Contributions there are definitely welcome. Thanks for this information and this fix. > That said, I would like to do a documentation clean-up as soon as we get > CI back up for RDO-Manager on liberty. Any idea when this will be? Liberty release day or earlier/later? > Thanks for sticking with it. Well, I went off to play with MAAS but came back when I realised it was in a pretty sorry state too. > - John Trowbridge > > [1] https://github.com/openstack/tripleo-docs > [2] > http://docs.openstack.org/infra/manual/developers.html#development-workflow -- Regards, Christopher --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From pgsousa at gmail.com Mon Oct 5 14:00:44 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Mon, 5 Oct 2015 15:00:44 +0100 Subject: [Rdo-list] RDO-Manager stable version? Message-ID: Hi all, first of all, please forgive me if this question was raised before. I'm testing RDO-Manager on baremetal nodes, I'm on the phase of installing undercloud and I noticed that it uses liberty repos such as *http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/$basearch/os/ * My questions are: *- Is there a RDO Manager stable version to install a overcloud based on Kilo? If so, how do I test it?* *- Or should I only expect RDO-Manager to be stable on Liberty?* I've also noticed that Redhat launched Red Hat Enterprise Linux OpenStack Platform 7 director product that it seems to be based on RDO-Manager, so I'm curious if I should expect a CentOS RDO-Manager stable version to use in production soon. Regards, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From hguemar at fedoraproject.org Mon Oct 5 15:00:02 2015
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 5 Oct 2015 15:00:02 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting
Message-ID: <20151005150002.D8AEC60A4003@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
RDO packaging meeting on 2015-10-07 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about:
RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging))

Every week on #rdo on freenode

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From mohammed.arafa at gmail.com Mon Oct 5 15:38:20 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Mon, 5 Oct 2015 11:38:20 -0400
Subject: [Rdo-list] RDO-Manager stable version?
In-Reply-To:
References:
Message-ID:

I asked the same question a week or so ago. In short, things are in flux as Liberty work is ongoing;
it is expected > that rdo-manager on kilo will stabilise after that. how? i am not sure. i > am on the user end of things. > > having said that, in the last week, i have been able to deploy the > undercloud and import images on bare metal rdo manager host.you will need > to go thru this mailer to glean the information but it is possible. > > i know i didnt answer your questions exactly but i hope it helps. > > also check out #rdo on irc/freenode: > http://webchat.freenode.net/?channels=RDO > > > On Mon, Oct 5, 2015 at 10:00 AM, Pedro Sousa wrote: > >> Hi all, >> >> first of all, please forgive me if this question was raised before. I'm >> testing RDO-Manager on baremetal nodes, I'm on the phase of installing >> undercloud and I noticed that it uses liberty repos such as *http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/$basearch/os/ >> * >> >> My questions are: >> >> *- Is there a RDO Manager stable version to install a overcloud based on >> Kilo? If so, how do I test it?* >> *- Or should I only expect RDO-Manager to be stable on Liberty?* >> >> I've also noticed that Redhat launched Red Hat Enterprise Linux >> OpenStack Platform 7 director product that it seems to be based on >> RDO-Manager, so I'm curious if I should expect a CentOS RDO-Manager stable >> version to use in production soon. >> >> Regards, >> Pedro Sousa >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Tue Oct 6 12:38:33 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 6 Oct 2015 08:38:33 -0400 Subject: [Rdo-list] RDO blog roundup, Week of October 5 Message-ID: <5613C0C9.9070803@redhat.com> Here's what RDO enthusiasts have been writing about over the past week. If you're writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you're not on my list, please let me know! Migrating Cinder volumes between OpenStack environments using shared NFS storage by Lars Kellogg-Stedman Many of the upgrade guides for OpenStack focus on in-place upgrades to your OpenStack environment. Some organizations may opt for a less risky (but more hardware intensive) option of setting up a parallel environment, and then migrating data into the new environment. In this article, we look at how to use Cinder backups with a shared NFS volume to facilitate the migration of Cinder volumes between two different OpenStack environments. ? read more at http://tm3.org/2r RDO Liberty (beta) DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1 by Boris Derzhavets Would you experience VXLAN tunnels disappiaring issue like it happens on RDO Kilo add following lines to ml2_conf.ini on each Compute Node ? read more at http://tm3.org/2s So, you're an ATC. Let me tell you something by Flavio Percoco It's that time of the cycle - ha! you saw this comming, didn't you? -, in OpenStack, when we need to elect new members for the Technical Committee. In a previous post, I talked about what being a PTL means. I talked directly to candidates and I encouraged them to understand each and every point that I've made in that post. This time, though, I'd like to talk directly to ATCs for a couple of reasons. 
First one is that Thierry Carrez has a great post already where he explains what being a TC member means. Second one is that I think you, my dear ATC, are one of the most valuable member of this community and of the ones with most power throughout OpenStack. ? read more at http://tm3.org/2t RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1 by Boris Derzhavets RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1 ? read more at http://tm3.org/2u How would work changing "enable_isolated_metadata" from false to true && openstack-service restart neutron on the fly on RDO Liberty ? by Boris Derzhavets Can meta-data co-exist in qrouter and qdhcp namespace at the same time so that LANs without Routers involved can access meta-data ? ? read more at http://tm3.org/2v Haikel Guemar talks about RPM packaging by Rich Bowen This continues my series talking with OpenStack project PTLs (Project Technical Leads) about their projects, what's new in Liberty, and what's coming in future releases. ? read (and listen) at http://tm3.org/2w -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From trown at redhat.com Tue Oct 6 19:51:56 2015 From: trown at redhat.com (John Trowbridge) Date: Tue, 6 Oct 2015 15:51:56 -0400 Subject: [Rdo-list] RDO-Manager liberty CI is green! Message-ID: <5614265C.3030609@redhat.com> We have automated CI passing against liberty based RDO-Manager.[1] The CI follows the upstream tripleo docs[2] with only one undocumented step for this documentation bug[3]. The HA job does not look like it will pass, but it is not 100% stable upstream[4]. Besides HA, we need the following: - Document the network isolation feature on liberty and get some job using that feature. - Get a promote job working that runs a packstack job as well as RDO-Manager job(s) and promotes based on all green (promotion is still manual) - Find a place to host images for the current-passed-ci repo to reduce the runtime on gate jobs. Huge props to weshay and dmsimard for helping get this going. --trown From trown at redhat.com Tue Oct 6 19:56:08 2015 From: trown at redhat.com (John Trowbridge) Date: Tue, 6 Oct 2015 15:56:08 -0400 Subject: [Rdo-list] RDO-Manager liberty CI is green! In-Reply-To: <5614265C.3030609@redhat.com> References: <5614265C.3030609@redhat.com> Message-ID: <56142758.5080500@redhat.com> On 10/06/2015 03:51 PM, John Trowbridge wrote: > We have automated CI passing against liberty based RDO-Manager.[1] The > CI follows the upstream tripleo docs[2] with only one undocumented step > for this documentation bug[3]. The HA job does not look like it will > pass, but it is not 100% stable upstream[4]. Besides HA, we need the > following: > > - Document the network isolation feature on liberty and get some job > using that feature. > - Get a promote job working that runs a packstack job as well as > RDO-Manager job(s) and promotes based on all green (promotion is still > manual) > - Find a place to host images for the current-passed-ci repo to reduce > the runtime on gate jobs. > > Huge props to weshay and dmsimard for helping get this going. > > --trown Would be nice if I added the footnotes... Also, huge thanks to all the folks kicking the tires and reporting issues in #rdo. 
[1] https://ci.centos.org/view/rdo/ [2] http://docs.openstack.org/developer/tripleo-docs/ [3] https://bugzilla.redhat.com/show_bug.cgi?id=1268990 [4] http://goodsquishy.com/downloads/tripleo-jobs.html > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From cbrown2 at ocf.co.uk Tue Oct 6 20:02:14 2015 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Tue, 6 Oct 2015 21:02:14 +0100 Subject: [Rdo-list] RDO-Manager liberty CI is green! In-Reply-To: <5614265C.3030609@redhat.com> References: <5614265C.3030609@redhat.com> Message-ID: <1444161734.15522.28.camel@ocf-laptop.lan> Hi John, On Tue, 2015-10-06 at 20:51 +0100, John Trowbridge wrote: > We have automated CI passing against liberty based RDO-Manager.[1] The > CI follows the upstream tripleo docs[2] with only one undocumented step > for this documentation bug[3]. The HA job does not look like it will > pass, but it is not 100% stable upstream[4]. Besides HA, we need the > following: That is great news and I'm sure will be a big help with the current build quality of RDO Manager. Would you mind adding the comments, particularly [3] please? -- Regards, Christopher Brown --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From morazi at redhat.com Tue Oct 6 20:06:58 2015 From: morazi at redhat.com (Mike Orazi) Date: Tue, 6 Oct 2015 16:06:58 -0400 Subject: [Rdo-list] RDO-Manager liberty CI is green! In-Reply-To: <5614265C.3030609@redhat.com> References: <5614265C.3030609@redhat.com> Message-ID: <561429E2.5010402@redhat.com> On 10/06/2015 03:51 PM, John Trowbridge wrote: > We have automated CI passing against liberty based RDO-Manager.[1] The > CI follows the upstream tripleo docs[2] with only one undocumented step > for this documentation bug[3]. The HA job does not look like it will > pass, but it is not 100% stable upstream[4]. Besides HA, we need the > following: > > - Document the network isolation feature on liberty and get some job > using that feature. > - Get a promote job working that runs a packstack job as well as > RDO-Manager job(s) and promotes based on all green (promotion is still > manual) > - Find a place to host images for the current-passed-ci repo to reduce > the runtime on gate jobs. > > Huge props to weshay and dmsimard for helping get this going. > > --trown > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > Awesome work folks! Thanks for the update. - Mike From dsneddon at redhat.com Tue Oct 6 20:21:08 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 06 Oct 2015 13:21:08 -0700 Subject: [Rdo-list] RDO-Manager liberty CI is green! In-Reply-To: <5614265C.3030609@redhat.com> References: <5614265C.3030609@redhat.com> Message-ID: <56142D34.7000508@redhat.com> On 10/06/2015 12:51 PM, John Trowbridge wrote: > We have automated CI passing against liberty based RDO-Manager.[1] The > CI follows the upstream tripleo docs[2] with only one undocumented step > for this documentation bug[3]. The HA job does not look like it will > pass, but it is not 100% stable upstream[4]. Besides HA, we need the > following: > > - Document the network isolation feature on liberty and get some job > using that feature. 
> - Get a promote job working that runs a packstack job as well as > RDO-Manager job(s) and promotes based on all green (promotion is still > manual) > - Find a place to host images for the current-passed-ci repo to reduce > the runtime on gate jobs. > > Huge props to weshay and dmsimard for helping get this going. > > --trown > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > The changes in this review [1] (ill-named, since I used a Red Hat internal version reference) should bring the network isolation documentation in line with Liberty. I'll continue to update tripleo-docs as features land. Actually, I'll probably have a follow-up soon to document some more edge cases, but that documentation should work for now. Please feel free to give it a review and make suggestions. [1] - https://review.openstack.org/#/c/221908/ -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From dms at redhat.com Tue Oct 6 23:55:15 2015 From: dms at redhat.com (David Moreau Simard) Date: Tue, 6 Oct 2015 19:55:15 -0400 Subject: [Rdo-list] [ci] delorean ci is down In-Reply-To: References: Message-ID: Hi, We've had another incident tonight. We took the time to build two new slaves from scratch to prevent a new outage. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Thu, Oct 1, 2015 at 8:58 AM, Alan Pevec wrote: > 2015-09-30 22:39 GMT+02:00 Wesley Hayutin : >> FYI, >> The jenkins slave used in delorean package CI is experiencing some issues >> atm. >> The CI will be shutdown while these issues are resolved. > > That slave is back to normal, looks like it was a temporary issue, > we'll keep watching it. > Can we add more slaves for this job? > > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dms at redhat.com Wed Oct 7 01:35:05 2015 From: dms at redhat.com (David Moreau Simard) Date: Tue, 6 Oct 2015 21:35:05 -0400 Subject: [Rdo-list] delorean.repo vs delorean-deps.repo Message-ID: Hi, I was wondering why these two files were split up if one can't be used without the other ? Can we bundle delorean-deps.repo inside delorean.repo ? There'd be three repositories in the file. It'd be simpler for both users and systems to consume this one repository file with everything we need in it. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From apevec at gmail.com Wed Oct 7 06:34:16 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 7 Oct 2015 08:34:16 +0200 Subject: [Rdo-list] delorean.repo vs delorean-deps.repo In-Reply-To: References: Message-ID: > I was wondering why these two files were split up if one can't be used > without the other ? > Can we bundle delorean-deps.repo inside delorean.repo ? There'd be > three repositories in the file. delorean-deps.repo is a single file which can be changed when we move repos e.g. cbs.centos.org/repos/ will be blocked soon and we'll need to switch to the mirror on buildlogs.c.o and eventually to the release repos on mirror.c.o. delorean.repo is a static file generated by Delorean when it runs so it can't be changed as easily. 
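For illustration, this is roughly how the two files get consumed today (a sketch; the exact paths are assumptions based on the trunk.rdoproject.org URLs mentioned on this list, and they move as releases change):

# per-build file, generated by Delorean for each repo snapshot
sudo curl -o /etc/yum.repos.d/delorean.repo \
    http://trunk.rdoproject.org/liberty/centos7/current-passed-ci/delorean.repo
# single hand-maintained deps file, which we can repoint when mirrors move
sudo curl -o /etc/yum.repos.d/delorean-deps.repo \
    http://trunk.rdoproject.org/centos7/delorean-deps.repo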
> It'd be simpler for both users and systems to consume this one
> repository file with everything we need in it.

Good point, maybe we could figure out something using server-side include:
https://github.com/redhat-openstack/delorean-instance/issues/14

Cheers,
Alan

From weiler at soe.ucsc.edu Wed Oct 7 00:34:28 2015
From: weiler at soe.ucsc.edu (Erich Weiler)
Date: Tue, 6 Oct 2015 17:34:28 -0700
Subject: [Rdo-list] Jumbo MTU to instances in Kilo?
Message-ID: <56146894.2020107@soe.ucsc.edu>

Hi Y'all,

I know someone must have figured this one out, but I can't seem to get
9000 byte MTUs working.  I have it set in plugin.ini, etc.; my nodes have
MTU=9000 on their interfaces, as does the network node.  dnsmasq is also
configured to set MTU=9000 on instances, which works.  But I still can't
ping my instance with large packets:

[weiler at stacker ~]$ ping 10.50.100.2
PING 10.50.100.2 (10.50.100.2) 56(84) bytes of data.
64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms
64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms
64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms

That works fine.  This however doesn't work:

[root at stacker ~]# ping -M do -s 8000 10.50.100.2
PING 10.50.100.2 (10.50.100.2) 8000(8028) bytes of data.
From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500

It looks like somehow the br-int interface for OVS isn't set at 9000,
but I can't figure out how to do that...

Here's ifconfig on my compute node:

br-enp3s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20<link>
        ether 0c:c4:7a:58:42:3e  txqueuelen 0  (Ethernet)
        RX packets 2401432  bytes 359276713 (342.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30  bytes 1572 (1.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

br-int: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::64dc:94ff:fe35:db4c  prefixlen 64  scopeid 0x20<link>
        ether 66:dc:94:35:db:4c  txqueuelen 0  (Ethernet)
        RX packets 69  bytes 6866 (6.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp3s0f0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 9000
        inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20<link>
        ether 0c:c4:7a:58:42:3e  txqueuelen 1000  (Ethernet)
        RX packets 130174458  bytes 15334807929 (14.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22919305  bytes 5859090420 (5.4 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp3s0f0.50: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.50.1.236  netmask 255.255.0.0  broadcast 10.50.255.255
        inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20<link>
        ether 0c:c4:7a:58:42:3e  txqueuelen 0  (Ethernet)
        RX packets 38429352  bytes 5152853436 (4.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 419842  bytes 101161981 (96.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 22141566  bytes 1185622090 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22141566  bytes 1185622090 (1.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qbr247da3ed-a4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5c8f:c0ff:fe79:bc11  prefixlen 64  scopeid 0x20<link>
        ether b6:1f:54:3f:3d:48  txqueuelen 0  (Ethernet)
        RX packets 16  bytes 1472 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qbrf42ea01f-fe: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::f484:f1ff:fe53:fb2e  prefixlen 64  scopeid 0x20<link>
        ether c2:a6:d8:25:63:ea  txqueuelen 0  (Ethernet)
        RX packets 15  bytes 1456 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qvb247da3ed-a4: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet6 fe80::b41f:54ff:fe3f:3d48  prefixlen 64  scopeid 0x20<link>
        ether b6:1f:54:3f:3d:48  txqueuelen 1000  (Ethernet)
        RX packets 247  bytes 28323 (27.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 233  bytes 25355 (24.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qvbf42ea01f-fe: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet6 fe80::c0a6:d8ff:fe25:63ea  prefixlen 64  scopeid 0x20<link>
        ether c2:a6:d8:25:63:ea  txqueuelen 1000  (Ethernet)
        RX packets 377  bytes 57664 (56.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 333  bytes 38765 (37.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qvo247da3ed-a4: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet6 fe80::dcfa:f1ff:fe03:ee88  prefixlen 64  scopeid 0x20<link>
        ether de:fa:f1:03:ee:88  txqueuelen 1000  (Ethernet)
        RX packets 233  bytes 25355 (24.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 247  bytes 28323 (27.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qvof42ea01f-fe: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet6 fe80::f03e:35ff:fefe:e52  prefixlen 64  scopeid 0x20<link>
        ether f2:3e:35:fe:0e:52  txqueuelen 1000  (Ethernet)
        RX packets 333  bytes 38765 (37.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 377  bytes 57664 (56.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tap247da3ed-a4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fede:5eea  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:de:5e:ea  txqueuelen 500  (Ethernet)
        RX packets 219  bytes 24239 (23.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 224  bytes 26661 (26.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:c4:75:9f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

This is on RHEL 7.1.  Any obvious way I can get all the intermediate
bridges to MTU=9000?  I've RTFM'd and googled to no avail...

Here's the ovs-vsctl output:

[root at node-136 ~]# ovs-vsctl show
6f5a5f00-59e2-4420-aeaf-7ad464ead232
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qvo247da3ed-a4"
            tag: 1
            Interface "qvo247da3ed-a4"
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port "int-br-enp3s0f0"
            Interface "int-br-enp3s0f0"
                type: patch
                options: {peer="phy-br-enp3s0f0"}
    Bridge "br-enp3s0f0"
        Port "enp3s0f0"
            Interface "enp3s0f0"
        Port "br-enp3s0f0"
            Interface "br-enp3s0f0"
                type: internal
        Port "phy-br-enp3s0f0"
            Interface "phy-br-enp3s0f0"
                type: patch
                options: {peer="int-br-enp3s0f0"}
    ovs_version: "2.3.1"

Many thanks if anyone has any information on this topic!  Or can point
me to some documentation I missed...
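(For what it's worth, I can force the MTU by hand as a stopgap - the device
names below are just the ones from my ifconfig output above, and anything
Neutron re-plugs comes back at 1500 - so what I'm really after is the proper
config knob:

# temporary, per-device; lost when the ports are recreated
ip link set dev br-int mtu 9000
ip link set dev qvo247da3ed-a4 mtu 9000
ip link set dev qvb247da3ed-a4 mtu 9000
ip link set dev tap247da3ed-a4 mtu 9000

I have also seen network_device_mtu = 9000 mentioned for the [DEFAULT]
section of /etc/neutron/neutron.conf, which should make the agents create
the veth/tap devices with that MTU, but I haven't confirmed that this is the
supported approach on Kilo.)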
Thanks,
erich

From shayne.alone at gmail.com Wed Oct 7 05:11:26 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Wed, 07 Oct 2015 05:11:26 +0000
Subject: [Rdo-list] overcloud build image dependency [ openstack-ironic-python-agent ]
In-Reply-To:
References:
Message-ID:

I successfully installed the undercloud via:
http://docs.openstack.org/developer/tripleo-docs/installation/installing.html

Next I went to build the images via:

[stack at undercloud ~]$ openstack overcloud image build --all --debug

The build fails; see the end of this log:
http://paste.ubuntu.com/12701964/
It shows that the package [ openstack-ironic-python-agent ] cannot be found in any repo!
I checked the package via "yum info" in another terminal and found that its repository is [ delorean ]; reference: http://paste.ubuntu.com/12701972/

If you check the last lines of the first log I posted - http://paste.ubuntu.com/12701964/ - that repository is not listed in the mirror loading phase...

Please let me know how I can add this repository to the image build...

--
Sincerely,
Ali R. Taleghani

From dtantsur at redhat.com Wed Oct 7 08:10:41 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 7 Oct 2015 10:10:41 +0200
Subject: [Rdo-list] overcloud build image dependency [ openstack-ironic-python-agent ]
In-Reply-To:
References:
Message-ID: <5614D381.9070401@redhat.com>

On 10/07/2015 07:11 AM, AliReza Taleghani wrote:
> [snip]

Hi!

I think you got hit by https://bugzilla.redhat.com/show_bug.cgi?id=1268990; please find the workaround there.

From shayne.alone at gmail.com Wed Oct 7 08:17:10 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Wed, 07 Oct 2015 08:17:10 +0000
Subject: Re: [Rdo-list] overcloud build image dependency [ openstack-ironic-python-agent ]
In-Reply-To: <5614D381.9070401@redhat.com>
References: <5614D381.9070401@redhat.com>
Message-ID:

OK, I will check it.
There is one other thing, which I think comes down to my own lack of attention: the wiki =>
http://docs.openstack.org/developer/tripleo-docs/installation/installing.html
describes both

# Enable last known good RDO Trunk Delorean repository
# Enable latest RDO Trunk Delorean repository

and I applied both :-/
What will happen if I set the latest [RDO Trunk repo] to enabled=0?
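(By that I mean editing the corresponding file under /etc/yum.repos.d/ so it
reads something like this - the exact file and section names depend on which
repo file that wiki step created:

[delorean]
name=delorean
baseurl=...
enabled=0
gpgcheck=0
)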
On Wed, Oct 7, 2015 at 11:41 AM Dmitry Tantsur wrote:
> [snip]

--
Sincerely,
Ali R. Taleghani

From shayne.alone at gmail.com Wed Oct 7 08:34:05 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Wed, 07 Oct 2015 08:34:05 +0000
Subject: Re: [Rdo-list] overcloud build image dependency [ openstack-ironic-python-agent ]
In-Reply-To:
References: <5614D381.9070401@redhat.com>
Message-ID:

Thanks Dmitry; that got me past it. I needed to export the repos just before building the images, as follows:

######bash
[stack at undercloud ~]$ export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
[stack at undercloud ~]$ openstack overcloud image build --all --debug
######

On Wed, Oct 7, 2015 at 11:47 AM AliReza Taleghani wrote:
> [snip]

--
Sincerely,
Ali R. Taleghani

From pnavarro at redhat.com Wed Oct 7 08:46:51 2015
From: pnavarro at redhat.com (Pedro Navarro Perez)
Date: Wed, 7 Oct 2015 04:46:51 -0400 (EDT)
Subject: [Rdo-list] Jumbo MTU to instances in Kilo?
In-Reply-To: <56146894.2020107@soe.ucsc.edu>
References: <56146894.2020107@soe.ucsc.edu>
Message-ID: <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com>

Hi Erich,

did you recreate the neutron networks after the configuration changes?

Pedro Navarro Pérez
OpenStack product specialist
Red Hat Iberia
Passeig de Gràcia 120,
08008 Barcelona
Spain
M +34 639 642 379
E pnavarro at redhat.com

----- Original Message -----
From: "Erich Weiler"
To: rdo-list at redhat.com
Sent: Wednesday, 7 October, 2015 2:34:28 AM
Subject: [Rdo-list] Jumbo MTU to instances in Kilo?

[snip]

From javier.pena at redhat.com Wed Oct 7 10:12:30 2015
From: javier.pena at redhat.com (Javier Pena)
Date: Wed, 7 Oct 2015 06:12:30 -0400 (EDT)
Subject: [Rdo-list] delorean.repo vs delorean-deps.repo
In-Reply-To:
References:
Message-ID: <1512777106.65406325.1444212750519.JavaMail.zimbra@redhat.com>

----- Original Message -----

> > I was wondering why these two files were split up if one can't be used
> > without the other ?
> > Can we bundle delorean-deps.repo inside delorean.repo ? There'd be
> > three repositories in the file.
>
> delorean-deps.repo is a single file which can be changed when we move
> repos e.g. cbs.centos.org/repos/ will be blocked soon and we'll need
> to switch to the mirror on buildlogs.c.o and eventually to the release
> repos on mirror.c.o.
> delorean.repo is a static file generated by Delorean when it runs so
> it can't be changed as easily.
>
> > It'd be simpler for both users and systems to consume this one
> > repository file with everything we need in it.
>
> Good point, maybe we could figure out something using server-side include:
> https://github.com/redhat-openstack/delorean-instance/issues/14
>

It may be easier if we can just patch delorean to include the contents of delorean-deps.repo into delorean.repo.
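Roughly, the generated delorean.repo would then carry both sections, along
these lines (purely illustrative; the real name and baseurl values come from
the Delorean run and from the current delorean-deps.repo):

[delorean]
name=delorean
baseurl=... (generated per run)
enabled=1
gpgcheck=0
priority=1

[delorean-deps]
name=delorean-deps
baseurl=... (copied from delorean-deps.repo)
enabled=1
gpgcheck=0
priority=1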
I have proposed this in https://review.gerrithub.io/249207

Cheers,
Javier

> Cheers,
> Alan

From rasca at redhat.com Wed Oct 7 10:45:32 2015
From: rasca at redhat.com (Raoul Scarazzini)
Date: Wed, 7 Oct 2015 12:45:32 +0200
Subject: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
Message-ID: <5614F7CC.6040808@redhat.com>

Hi all,
I'm operating in an overcloud HA environment, using galera as the db backend. In haproxy I've got this configuration:

listen mysql
  bind 172.16.20.11:3306
  option tcpka
  option httpchk
  stick on dst
  stick-table type ip size 1000
  timeout client 0
  timeout server 0
  server overcloud-controller-0 172.16.20.13:3306 backup check fall 5 inter 2000 on-marked-down shutdown-sessions port 9200 rise 2
  server overcloud-controller-1 172.16.20.14:3306 backup check fall 5 inter 2000 on-marked-down shutdown-sessions port 9200 rise 2
  server overcloud-controller-2 172.16.20.15:3306 backup check fall 5 inter 2000 on-marked-down shutdown-sessions port 9200 rise 2

Now, from what I've understood, the "stick on dst" directive means that just one of the three galera servers is contacted by the proxy (even though the galera configuration is an active/active/active setup), and this is fine for a couple of reasons that involve locking and things like that. So using stick on dst makes our balancer operate as active-standby-standby without automatic failback. And this is also fine.

What I am looking for at this point is to understand which server haproxy is pointing to. I've enabled this option in the haproxy conf:

stats socket /var/run/haproxy mode 600 level admin

so I am able to see, on the host which carries the VIP associated with the haproxy bind address, the stick table:

[root at overcloud-controller-1 ~]# echo "show table mysql" | socat /var/run/haproxy stdio
# table: mysql, type: ip, size:1000, used:1
0x7f9babb4d7d4: key=172.16.20.11 use=0 exp=0 server_id=1

But of course this does not help me understand which host haproxy is contacting. I was expecting to see something useful in the output, but the key is the value I already know.

Can you help me find a way?

Thanks a lot,

--
Raoul Scarazzini
rasca at redhat.com

From mcornea at redhat.com Wed Oct 7 11:07:30 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Wed, 7 Oct 2015 07:07:30 -0400 (EDT)
Subject: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
In-Reply-To: <5614F7CC.6040808@redhat.com>
References: <5614F7CC.6040808@redhat.com>
Message-ID: <1852667655.37767349.1444216050936.JavaMail.zimbra@redhat.com>

Hi Raoul,

You can check the haproxy stats dashboard and determine which of the backend nodes takes the sessions. To access it, look for the IP address haproxy.stats binds to in haproxy.cfg and reach it via http on port 1993.

Thanks,
Marius

----- Original Message -----
> From: "Raoul Scarazzini"
> To: rdo-list at redhat.com
> Sent: Wednesday, October 7, 2015 12:45:32 PM
> Subject: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
>
> [snip]

From rasca at redhat.com Wed Oct 7 11:18:40 2015
From: rasca at redhat.com (Raoul Scarazzini)
Date: Wed, 7 Oct 2015 13:18:40 +0200
Subject: Re: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
In-Reply-To: <1852667655.37767349.1444216050936.JavaMail.zimbra@redhat.com>
References: <5614F7CC.6040808@redhat.com> <1852667655.37767349.1444216050936.JavaMail.zimbra@redhat.com>
Message-ID: <5614FF90.7090908@redhat.com>

Il giorno 7/10/2015 13:07:30, Marius Cornea ha scritto:
> You can check the haproxy stats dashboard and determine which of the
> backend nodes takes the sessions. To access it look for the ip address
> haproxy.stats binds to in haproxy.cfg and reach it via http on port 1993.

Hi Marius,
thank you for the answer. This is not so clear to me. What do you mean by "takes the sessions"? If I understand correctly, I need to store a delta somewhere, because without it how can I know which server is increasing?
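(The closest I've come so far is pulling the per-server columns out of the
CSV myself; judging from the output below, column 5 is scur/current
sessions, column 8 is stot/total sessions, and column 18 is the status, so
something like this should show where the connections go:

echo "show stat" | socat /var/run/haproxy stdio | \
  awk -F, '$1=="mysql" && $2!="FRONTEND" && $2!="BACKEND" {print $2, $18, "scur="$5, "stot="$8}'

but I'd like to be sure I'm reading it right.)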
Look at this:

[root at overcloud-controller-1 ~]# echo "show stat" | socat /var/run/haproxy stdio | grep mysql
mysql,FRONTEND,,,65,87,2000,6290,12411640,28349332,0,0,0,,,,,OPEN,,,,,,,,,1,13,0,,,,0,1,0,13,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,
mysql,overcloud-controller-0,0,0,65,87,,6290,12411640,28349332,,0,,0,0,0,0,UP,1,0,1,0,0,10271,0,,1,13,1,,1,,2,1,,13,L7OK,200,54,,,,,,,0,,,,0,0,,,,,1,OK,,0,1,0,14961,
mysql,overcloud-controller-1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,0,1,8,2,4651,4676,,1,13,2,,0,,2,0,,0,L7STS,503,65,,,,,,,0,,,,0,0,,,,,-1,Service Unavailable,,0,0,0,0,
mysql,overcloud-controller-2,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,0,1,6,1,6042,94,,1,13,3,,0,,2,0,,0,L7OK,200,55,,,,,,,0,,,,0,0,,,,,-1,OK,,0,0,0,0,
mysql,BACKEND,0,0,65,87,200,6290,12411640,28349332,0,0,,0,0,0,0,UP,1,0,2,,0,10271,0,,1,13,0,,1,,1,1,,13,,,,,,,,,,,,,,0,0,0,0,0,0,1,,,0,1,0,14961,

As you can see, mysql,overcloud-controller-1 is DOWN, so the choice is between the other two. My guess is the first one, but only because it has a value that keeps increasing. Is your suggestion to sample the value twice for each server and see which one grows? What if there are no connections at that moment and the values stay the same?

Thanks,

--
Raoul Scarazzini
rasca at redhat.com

From shayne.alone at gmail.com Wed Oct 7 11:21:16 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Wed, 07 Oct 2015 11:21:16 +0000
Subject: [Rdo-list] environment setup error [ baremetal introspection ]
Message-ID:

Hi guys;
I have installed the undercloud, built the images, uploaded them, and imported the nodes...

Now, as the documents mention, it's time to introspect the imported nodes, but I get an error when I try, as logged here:

[stack at undercloud ~]$ openstack baremetal introspection bulk start --debug

introspection error log:
http://paste.ubuntu.com/12703322/

# My imported nodes are the following:
node [instackenv.json] info: http://paste.ubuntu.com/12703306/

# And the undercloud deployment config is the following:
undercloud config and ifconfig: http://paste.ubuntu.com/12703312/

Brief: the undercloud instance has 3 network interfaces:

- eno16777728 Public Address
- eno33554960 Provisioning Interface
- eno50332184 iLo Network

Is there any problem with my network configuration? Or, better: is the introspection error related to my network topology? I mean, if introspection is going to use IPMI to detect the baremetal nodes' specs and I am using a separate interface for iLo, does that cause a problem?

thanks
--
Sincerely,
Ali R. Taleghani

From shayne.alone at gmail.com Wed Oct 7 11:33:48 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Wed, 07 Oct 2015 11:33:48 +0000
Subject: Re: [Rdo-list] environment setup error [ baremetal introspection ]
In-Reply-To:
References:
Message-ID:

I missed this point: the provisioning interface of the undercloud is in the same network as the baremetal servers' nic0, and the iLo interface of the undercloud is in the same (separate) network as the baremetal servers' iLo.

# The descriptions in undercloud.conf point out that the inspection ip-range should be in the same network as the DHCP range.
Is that related to the pm_addr field in the nodes.json I imported, which is set to the iLo static IP address?
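(For reference, the relevant part of undercloud.conf looks along these
lines - the addresses are made up for the example and the option names may
differ between versions:

[DEFAULT]
local_ip = 192.0.2.1/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
inspection_iprange = 192.0.2.100,192.0.2.120

i.e. the inspection range sits in the same provisioning network as the DHCP
range, just without overlapping it.)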
On Wed, Oct 7, 2015 at 2:51 PM AliReza Taleghani wrote:
> [snip]

--
Sincerely,
Ali R. Taleghani

From shayne.alone at gmail.com Wed Oct 7 14:32:54 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Wed, 07 Oct 2015 14:32:54 +0000
Subject: Re: [Rdo-list] environment setup error [ baremetal introspection ]
In-Reply-To:
References:
Message-ID:

Hi all, the problem seems to be related to the permissions on the sqlite file!

/var/lib/ironic-inspector/inspector.sqlite was owned by root!
I moved it out of the way and restarted the service; the new file was created and owned correctly:

-rw-r--r--. 1 ironic-inspector ironic-inspector 14336 Oct 7 17:35 /var/lib/ironic-inspector/inspector.sqlite

On Wed, Oct 7, 2015 at 3:03 PM AliReza Taleghani wrote:
> I miss this point that:
> [snip]

--
Sincerely,
Ali R. Taleghani

From mcornea at redhat.com Wed Oct 7 15:21:11 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Wed, 7 Oct 2015 11:21:11 -0400 (EDT)
Subject: Re: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
In-Reply-To: <5614FF90.7090908@redhat.com>
References: <5614F7CC.6040808@redhat.com> <1852667655.37767349.1444216050936.JavaMail.zimbra@redhat.com> <5614FF90.7090908@redhat.com>
Message-ID: <1173238997.38020056.1444231271758.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Raoul Scarazzini"
> To: "Marius Cornea"
> Cc: rdo-list at redhat.com
> Sent: Wednesday, October 7, 2015 1:18:40 PM
> Subject: Re: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
>
> Hi Marius,
> thank you for the answer. This is not so clear to me. What do you mean
> with "takes the sessions"? If I understand correctly I need to store
> somewhere a delta, because without it how can I know which server is
> increasing?

I was just suggesting a way to see to which of the backend nodes haproxy is directing the traffic. Please see the attachment.

> [snip]

-------------- next part --------------
A non-text attachment was scrubbed...
Name: haproxy.mysql.png Type: image/png Size: 22416 bytes Desc: not available URL: From mcornea at redhat.com Wed Oct 7 16:53:47 2015 From: mcornea at redhat.com (Marius Cornea) Date: Wed, 7 Oct 2015 12:53:47 -0400 (EDT) Subject: [Rdo-list] environment setup error [ baremetal introspection ] In-Reply-To: References: Message-ID: <1546216166.38138472.1444236827725.JavaMail.zimbra@redhat.com> Hi, This issue is tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1268992 Thanks, Marius ----- Original Message ----- > From: "AliReza Taleghani" > To: rdo-list at redhat.com > Sent: Wednesday, October 7, 2015 4:32:54 PM > Subject: Re: [Rdo-list] environment setup error [ baremetal introspection ] > > Hi all, the problem seem to be related on the sqlite file permission! > > /var/lib/ironic-inspector/inspector.sqlite was owned by root! > I moved it into new name and restart service! the new file created and owned > by: > -rw-r--r--. 1 ironic-inspector ironic-inspector 14336 Oct 7 17:35 > /var/lib/ironic-inspector/inspector.sqlite > > On Wed, Oct 7, 2015 at 3:03 PM AliReza Taleghani < shayne.alone at gmail.com > > wrote: > > > > I miss this point that: > Provisioning interface of under-cloud is in the same network as bare metal > servers nic0 > iLo interface of under-cloud is in the same and separated network as bare > metal servers iLo > > # descriptions in undercloud.conf point into that inspection ip-range show be > in the same network address as DHCP. > is there relation with pm_addr field on nodes.json which i imported and use > ilo static ip address and this field there? > > > > On Wed, Oct 7, 2015 at 2:51 PM AliReza Taleghani < shayne.alone at gmail.com > > wrote: > > > > Hi guys; > I had install under-cloud, build images , upload images and import nodes as > well... > > know , as the documents mentioned, it's time to introspection imported > nodes.but I get an error as I try to do as logged here: > [stack at undercloud ~]$ openstack baremetal introspection bulk start --debug > introspection error log: http://paste.ubuntu.com/12703322/ > > #My imported nodes are as following: > node [instackenv.json] info: http://paste.ubuntu.com/12703306/ > > #And under-cloud deployment config is as following: > undercloud config and ifconfig : http://paste.ubuntu.com/12703312/ > > > Brief: > under-cloud instance has 3 netwrok interface: > > * eno16777728 Public Address > * eno33554960 Provisioning Interface > * eno50332184 iLo Network > > > > is there any problem with my network configuration? or better! is the > introspection error in relation with my network topology? > > i mean if introspection is going to use ipmi to detect baremetal nodes specs > and I am using a separated interface for iLo! dose it get into problem? > > > > > thanks > -- > Sincerely, > Ali R. Taleghani > -- > Sincerely, > Ali R. Taleghani > -- > Sincerely, > Ali R. Taleghani > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From weiler at soe.ucsc.edu Wed Oct 7 16:57:12 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Wed, 7 Oct 2015 09:57:12 -0700 Subject: [Rdo-list] Jumbo MTU to instances in Kilo? 
In-Reply-To: <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> Message-ID: <56154EE8.6000908@soe.ucsc.edu> Yeah, I made the changes and then recreated all the networks. For some reason br-int and the individual virtual instance interfaces on the compute node still show 1500 byte frames. Has anyone else configured jumbo frames in a Kilo environment? Or maybe I'm just an outlier... ;) -erich On 10/07/2015 01:46 AM, Pedro Navarro Perez wrote: > Hi Erich, > > did you recreate the neutron networks after the configuration changes? > > Pedro Navarro P?rez > OpenStack product specialist > Red Hat Iberia > Passeig de Gr?cia 120, > 08008 Barcelona > Spain > M +34 639 642 379 > E pnavarro at redhat.com > > ----- Original Message ----- > From: "Erich Weiler" > To: rdo-list at redhat.com > Sent: Wednesday, 7 October, 2015 2:34:28 AM > Subject: [Rdo-list] Jumbo MTU to instances in Kilo? > > Hi Y'all, > > I know someone must have figured this one out, but I can't seem to get > 9000 byte MTUs working. I have it set in plugin.ini, etc, my nodes have > MTU=9000 on their interfaces, so does the network node. dnsmasq also is > configured to set MTU=9000 on instances, which works. But I still can't > ping with large packets to my instance: > > [weiler at stacker ~]$ ping 10.50.100.2 > PING 10.50.100.2 (10.50.100.2) 56(84) bytes of data. > 64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms > 64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms > 64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms > > That works fine. This however doesn't work: > > [root at stacker ~]# ping -M do -s 8000 10.50.100.2 > PING 10.50.100.2 (10.50.100.2) 8000(8028) bytes of data. > From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500) > ping: local error: Message too long, mtu=1500 > ping: local error: Message too long, mtu=1500 > ping: local error: Message too long, mtu=1500 > ping: local error: Message too long, mtu=1500 > > It looks like somehow the br-int interface for OVS isn't set at 9000, > but I can't figure out how to do that... 
> > Here's ifconfig on my compute node: > > br-enp3s0f0: flags=4163 mtu 9000 > inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 > ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) > RX packets 2401432 bytes 359276713 (342.6 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 30 bytes 1572 (1.5 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > br-int: flags=4163 mtu 1500 > inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid 0x20 > ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) > RX packets 69 bytes 6866 (6.7 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 8 bytes 648 (648.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp3s0f0: flags=4419 mtu 9000 > inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 > ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) > RX packets 130174458 bytes 15334807929 (14.2 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 22919305 bytes 5859090420 (5.4 GiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp3s0f0.50: flags=4163 mtu 9000 > inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 > inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 > ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) > RX packets 38429352 bytes 5152853436 (4.7 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 419842 bytes 101161981 (96.4 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 22141566 bytes 1185622090 (1.1 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 22141566 bytes 1185622090 (1.1 GiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qbr247da3ed-a4: flags=4163 mtu 1500 > inet6 fe80::5c8f:c0ff:fe79:bc11 prefixlen 64 scopeid 0x20 > ether b6:1f:54:3f:3d:48 txqueuelen 0 (Ethernet) > RX packets 16 bytes 1472 (1.4 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 8 bytes 648 (648.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qbrf42ea01f-fe: flags=4163 mtu 1500 > inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid 0x20 > ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) > RX packets 15 bytes 1456 (1.4 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 8 bytes 648 (648.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvb247da3ed-a4: flags=4419 mtu 1500 > inet6 fe80::b41f:54ff:fe3f:3d48 prefixlen 64 scopeid 0x20 > ether b6:1f:54:3f:3d:48 txqueuelen 1000 (Ethernet) > RX packets 247 bytes 28323 (27.6 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 233 bytes 25355 (24.7 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvbf42ea01f-fe: flags=4419 mtu 1500 > inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid 0x20 > ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) > RX packets 377 bytes 57664 (56.3 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 333 bytes 38765 (37.8 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvo247da3ed-a4: flags=4419 mtu 1500 > inet6 fe80::dcfa:f1ff:fe03:ee88 prefixlen 64 scopeid 0x20 > ether de:fa:f1:03:ee:88 txqueuelen 1000 (Ethernet) > RX packets 233 bytes 25355 (24.7 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 247 bytes 28323 (27.6 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvof42ea01f-fe: flags=4419 mtu 1500 > inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid 0x20 > ether f2:3e:35:fe:0e:52 
txqueuelen 1000 (Ethernet) > RX packets 333 bytes 38765 (37.8 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 377 bytes 57664 (56.3 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > tap247da3ed-a4: flags=4163 mtu 1500 > inet6 fe80::fc16:3eff:fede:5eea prefixlen 64 scopeid 0x20 > ether fe:16:3e:de:5e:ea txqueuelen 500 (Ethernet) > RX packets 219 bytes 24239 (23.6 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 224 bytes 26661 (26.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > virbr0: flags=4099 mtu 1500 > inet 192.168.122.1 netmask 255.255.255.0 broadcast > 192.168.122.255 > ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) > RX packets 0 bytes 0 (0.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 0 bytes 0 (0.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > This is on RHEL 7.1. Any obvious way I can get all the intermediate > bridges to MTU=9000? I've RTFM'd and googled to no avail... > > Here's the ovs-vsctl outout: > > [root at node-136 ~]# ovs-vsctl show > 6f5a5f00-59e2-4420-aeaf-7ad464ead232 > Bridge br-int > fail_mode: secure > Port br-int > Interface br-int > type: internal > Port "qvo247da3ed-a4" > tag: 1 > Interface "qvo247da3ed-a4" > Port "int-br-eth1" > Interface "int-br-eth1" > Port "int-br-enp3s0f0" > Interface "int-br-enp3s0f0" > type: patch > options: {peer="phy-br-enp3s0f0"} > Bridge "br-enp3s0f0" > Port "enp3s0f0" > Interface "enp3s0f0" > Port "br-enp3s0f0" > Interface "br-enp3s0f0" > type: internal > Port "phy-br-enp3s0f0" > Interface "phy-br-enp3s0f0" > type: patch > options: {peer="int-br-enp3s0f0"} > ovs_version: "2.3.1" > > Many thanks if anyone has any information on this topic! Or can point > me to some documentation I missed... > > Thanks, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From trown at redhat.com Wed Oct 7 18:34:34 2015 From: trown at redhat.com (John Trowbridge) Date: Wed, 7 Oct 2015 14:34:34 -0400 Subject: [Rdo-list] [meeting] RDO packaging meeting (2015-10-07) Message-ID: <561565BA.5050200@redhat.com> ======================================== #rdo: RDO packaging meeting (2015-10-07) ======================================== Meeting started by trown at 15:02:04 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-10-07/rdo.2015-10-07-15.02.log.html . Meeting summary --------------- * LINK: https://etherpad.openstack.org/p/RDO-Packaging (trown, 15:03:20) * META: rename the meeting to "RDO meeting" (trown, 15:04:41) * ACTION: apevec to rename wherever needed (reminder, agenda) meeting title to "RDO meeting" (trown, 15:07:32) * ACTION: apevec add etherpad with meeting agenda to invite (jschlueter, 15:08:55) * RDO tooling: feature requests and priorties (mostly rdopkg, but also status of upstream Packaging RPM project) (trown, 15:09:09) * LINK: https://github.com/redhat-openstack/rdopkg/issues (trown, 15:11:01) * ACTION: apevec to schedule rdopkg triage with jruzicka and number80 (apevec, 15:13:09) * ACTION: hguemar send proposal draft (number80, 15:18:02) * New package Request (trown, 15:19:07) * LINK: https://fedoraproject.org/wiki/PackageDB_admin_requests. 
(trown, 15:20:44) * Packages needs version bump (trown, 15:21:18) * ACTION: number80 to follow up with python-jmespath maintainers (trown, 15:25:13) * RDO Doc Hack Day (trown, 15:25:42) * RDO Doc Hack Day will coincide with the Oct. 12-13 RDO Test Day (trown, 15:29:26) * RDO Test Day Oct. 12-13 (trown, 15:33:00) * ACTION: jschlueter putting together website page pull request for test day 2 (jschlueter, 15:33:52) * RDO-Manager basic CI is passing (non-HA no network isolation) (trown, 15:35:42) * LINK: https://ci.centos.org/view/rdo/ (trown, 15:36:01) * ACTION: trown to send announcement to rdo-list for Oct 12-13 test day with RDO-Manager information (trown, 15:41:49) * ACTION: trown to send PR to website for Test Day page with "basic" and "stretch" scenarios (trown, 15:44:14) * LINK: https://github.com/redhat-openstack/tempest/blob/rebased-upstream/README.rpm (apevec, 15:45:18) * ACTION: jschlueter followup with dkranz to get tempest test info for testday updated (jschlueter, 15:47:45) * Liberty RC2 (trown, 15:48:40) * LINK: RDO Liberty Progress https://trello.com/c/GPqDlVLs/63-liberty-rc-rpms (chandankumar, 15:49:57) * chair rotation for next meeting (trown, 15:55:55) * LINK: https://github.com/redhat-openstack/website/pull/114 initial pull request for testday 2 (jschlueter, 15:57:37) * ACTION: chandankumar to chair next meeting (trown, 15:57:48) * open floor (trown, 15:59:43) Meeting ended at 16:02:02 UTC. Action Items ------------ * apevec to rename wherever needed (reminder, agenda) meeting title to "RDO meeting" * apevec add etherpad with meeting agenda to invite * apevec to schedule rdopkg triage with jruzicka and number80 * hguemar send proposal draft * number80 to follow up with python-jmespath maintainers * jschlueter putting together website page pull request for test day 2 * trown to send announcement to rdo-list for Oct 12-13 test day with RDO-Manager information * trown to send PR to website for Test Day page with "basic" and "stretch" scenarios * jschlueter followup with dkranz to get tempest test info for testday updated * chandankumar to chair next meeting Action Items, by person ----------------------- * apevec * apevec to rename wherever needed (reminder, agenda) meeting title to "RDO meeting" * apevec add etherpad with meeting agenda to invite * apevec to schedule rdopkg triage with jruzicka and number80 * chandankumar * chandankumar to chair next meeting * jruzicka * apevec to schedule rdopkg triage with jruzicka and number80 * jschlueter * jschlueter putting together website page pull request for test day 2 * jschlueter followup with dkranz to get tempest test info for testday updated * number80 * apevec to schedule rdopkg triage with jruzicka and number80 * number80 to follow up with python-jmespath maintainers * trown * trown to send announcement to rdo-list for Oct 12-13 test day with RDO-Manager information * trown to send PR to website for Test Day page with "basic" and "stretch" scenarios * **UNASSIGNED** * hguemar send proposal draft People Present (lines said) --------------------------- * apevec (101) * trown (70) * number80 (33) * dmsimard (23) * jruzicka (21) * jschlueter (17) * chandankumar (11) * sasha2 (4) * bkosick (4) * eggmaster (3) * zodbot (3) * ibravo (2) * social (2) * dyasny (1) * alphacc (1) * xaeth (1) * blinky_ghost_ (1) * jpena (1) * aortega (1) Generated by `MeetBot`_ 0.1.4 .. 
_`MeetBot`: http://wiki.debian.org/MeetBot From chkumar246 at gmail.com Wed Oct 7 16:37:32 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 7 Oct 2015 22:07:32 +0530 Subject: [Rdo-list] Bug statistics for 2015-10-07 Message-ID: # RDO Bugs on 2015-10-07 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 281 - Fixed (MODIFIED, POST, ON_QA): 179 ## Number of open bugs by component diskimage-builder [ 4] +++ distribution [ 12] +++++++++ dnsmasq [ 1] instack [ 4] +++ instack-undercloud [ 27] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 11] ++++++++ openstack-cinder [ 14] ++++++++++ openstack-foreman-inst... [ 3] ++ openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 1] openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] +++++ openstack-neutron [ 6] ++++ openstack-nova [ 17] ++++++++++++ openstack-packstack [ 53] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] ++++++++ openstack-selinux [ 13] +++++++++ openstack-swift [ 2] + openstack-tripleo [ 24] ++++++++++++++++++ openstack-tripleo-heat... [ 4] +++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 3] ++ openvswitch [ 1] Package Review [ 1] python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] +++ python-oslo-config [ 1] rdo-manager [ 24] ++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (281 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (12 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-10-07 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last 
change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1264072 ] http://bugzilla.redhat.com/1264072 (NEW) Component: distribution Last change: 2015-10-02 Summary: app-catalog-ui new package [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (27 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. 
[1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exits with error "ImportError: No module named six." [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1269002 ] http://bugzilla.redhat.com/1269002 (NEW) Component: instack-undercloud Last change: 2015-10-05 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers. [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configured properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-images exits with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images. [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment.
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (11 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1265721 ] http://bugzilla.redhat.com/1265721 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (NEW) Component: openstack-ceilometer Last change: 2015-09-23 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1265818 ] http://bugzilla.redhat.com/1265818 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-28 Summary: ceilometer polling agent does not start [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 
7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes 
to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (1 bug) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists ### openstack-neutron (6 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks ### openstack-nova (17 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though it's disabled. [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesn't write to log [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid openstack logging to deleted files [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allows get object after date expires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages ### openstack-packstack (53 bugs) [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any
tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1269158 ] http://bugzilla.redhat.com/1269158 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. 
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. [1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack should use pymysql database driver since Liberty [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1269255 ] http://bugzilla.redhat.com/1269255 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Failed to start RabbitMQ broker. [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. 
Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. 
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on ### openstack-puppet-modules (11 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. [1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm ### openstack-selinux (13 bugs) [1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be 
started in WSGI under httpd, blocked by selinux [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-02 Summary: Nova rootwrap-daemon requires a selinux exception ### openstack-swift (2 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (24 bugs) [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1223672 ] 
http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Node registration fails silently if instackenv.json is badly formatted [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI ### openstack-tripleo-heat-templates (4 bugs) [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-09-24 Summary: TripleO should use pymysql database driver since Liberty [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: 
openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state ### openstack-utils (3 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (1 bug) [1243550 ] http://bugzilla.redhat.com/1243550 (ASSIGNED) Component: Package Review Last change: 2015-09-22 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming ### python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-09-17 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) 
Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (24 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1268990 ] http://bugzilla.redhat.com/1268990 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs: Build images fails without: export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the output of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root partition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, ON_QA and has been fixed.
You can help out by testing the fix to make sure it works as intended. (179 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (1 bug) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) ### openstack-glance (4 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency [1023614 ] 
http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (13 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd
[1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED)
    Component: openstack-neutron   Last change: 2014-04-16   Summary: neutron-ml2 package requires python-pyudev
[1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED)
    Component: openstack-neutron   Last change: 2014-07-17   Summary: neutron-dhcp-agent fails to start without openstack-neutron-openvswitch installed
[1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED)
    Component: openstack-neutron   Last change: 2015-04-10   Summary: Packstack installation failed with Neutron-server Could not start Service
[1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA)
    Component: openstack-neutron   Last change: 2014-11-25   Summary: fresh neutron install fails due unknown database column 'id'

### openstack-nova (5 bugs)

[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA)
    Component: openstack-nova   Last change: 2014-06-03   Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail
[1189347 ] http://bugzilla.redhat.com/1189347 (POST)
    Component: openstack-nova   Last change: 2015-05-04   Summary: openstack-nova-* systemd unit files need NotifyAccess=all
[1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA)
    Component: openstack-nova   Last change: 2015-05-05   Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options
[1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED)
    Component: openstack-nova   Last change: 2015-04-14   Summary: openstack-nova-compute fails to start because python-psutil is missing after installing with packstack
[958411 ] http://bugzilla.redhat.com/958411 (ON_QA)
    Component: openstack-nova   Last change: 2015-01-07   Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence.

### openstack-packstack (59 bugs)

[1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: openstack-dashboard django dependency conflict stops packstack execution
[1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Openstack Installer: packstack does not create tables in Heat db.
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error
[1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-08-05   Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/manifests/192.168.122.82_api_nova.pp:41:3
[976394 ] http://bugzilla.redhat.com/976394 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-10-07   Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local)
[1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack prescript fails if NetworkManager is disabled, but still installed
[1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Packstack neutron plugin does not check if Nova is disabled
[964005 ] http://bugzilla.redhat.com/964005 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user
[1063980 ] http://bugzilla.redhat.com/1063980 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Change packstack to use openstack-puppet-modules
[1153128 ] http://bugzilla.redhat.com/1153128 (POST)
    Component: openstack-packstack   Last change: 2015-07-29   Summary: Cannot start nova-network on juno - Centos7
[1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher
[1205912 ] http://bugzilla.redhat.com/1205912 (POST)
    Component: openstack-packstack   Last change: 2015-07-27   Summary: allow to specify admin name and email
[1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack package should depend on yum-utils
[1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Configure neutron correctly to be able to notify nova about port changes
[1088964 ] http://bugzilla.redhat.com/1088964 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Havana Fedora 19, packstack fails w/ mysql error
[958587 ] http://bugzilla.redhat.com/958587 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack install succeeds even when puppet completely fails
[1101665 ] http://bugzilla.redhat.com/1101665 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: el7 Icehouse: Nagios installation fails
[1148949 ] http://bugzilla.redhat.com/1148949 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway.
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Horizon SSL is disabled by Nagios configuration via packstack
[1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in
[1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Disabling glance deployment does not work if you don't disable demo provisioning
[979041 ] http://bugzilla.redhat.com/979041 (ON_QA)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules
[1151892 ] http://bugzilla.redhat.com/1151892 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack icehouse doesn't install anything because of repo
[1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack doesn't configure rabbitmq to allow non-localhost connections to 'guest' user
[1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED)
    Component: openstack-packstack   Last change: 2014-08-18   Summary: pakcstack: mysql fails to restart on CentOS6.5
[957006 ] http://bugzilla.redhat.com/957006 (ON_QA)
    Component: openstack-packstack   Last change: 2015-01-07   Summary: packstack reinstall fails trying to start nagios
[995570 ] http://bugzilla.redhat.com/995570 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: RFE: support setting up apache to serve keystone requests
[1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1
[990642 ] http://bugzilla.redhat.com/990642 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: rdo release RPM not installed on all fedora hosts
[1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Packstack configures nova/neutron for qpid username/password when none is required
[991801 ] http://bugzilla.redhat.com/991801 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Warning message for installing RDO kernel needs to be adjusted
[1249482 ] http://bugzilla.redhat.com/1249482 (POST)
    Component: openstack-packstack   Last change: 2015-08-05   Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"?
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED)
    Component: openstack-packstack   Last change: 2014-04-08   Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
[1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7)
[1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED)
    Component: openstack-packstack   Last change: 2014-02-05   Summary: packstack generates invalid configuration when using GRE tunnels
[1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack reports installation completed successfully but nothing installed
[1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5
[1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack fails on centos6 with missing systemctl
[1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp
[1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value
[1028690 ] http://bugzilla.redhat.com/1028690 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack requires 2 runs to install ceilometer
[1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack fails if iptables.service is not available
[1018900 ] http://bugzilla.redhat.com/1018900 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Fedora20: packstack gives traceback when SElinux permissive
[1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED)
    Component: openstack-packstack   Last change: 2014-04-23   Summary: packstack configures br-ex to use gateway ip
[1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: ERROR : Error during puppet run : Error: /Stage[main]/Nova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required
[1080369 ] http://bugzilla.redhat.com/1080369 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
[1082729 ] http://bugzilla.redhat.com/1082729 (POST)
    Component: openstack-packstack   Last change: 2015-02-27   Summary: [RFE] allow for Keystone/LDAP configuration at deployment time
[956939 ] http://bugzilla.redhat.com/956939 (ON_QA)
    Component: openstack-packstack   Last change: 2015-01-07   Summary: packstack install fails if ntp server does not respond
[1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Packstack creates duplicate cirros images in glance
[1265661 ] http://bugzilla.redhat.com/1265661 (POST)
    Component: openstack-packstack   Last change: 2015-09-23   Summary: Packstack does not install Sahara services (RDO Liberty)
[1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-07-21   Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7
[974971 ] http://bugzilla.redhat.com/974971 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: please give greater control over use of EPEL
[1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: RabbitMQ fails to start if configured with ssl
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED)
    Component: openstack-packstack   Last change: 2013-10-23   Summary: Allow overlapping ips by default
[1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: Dashboard port firewall rule is not permanent
[1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED)
    Component: openstack-packstack   Last change: 2014-06-17   Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface
[1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: qpid should enable SSL
[1175450 ] http://bugzilla.redhat.com/1175450 (POST)
    Component: openstack-packstack   Last change: 2015-06-04   Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32

### openstack-puppet-modules (18 bugs)

[1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: cinder modules require glance installed
[1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-02   Summary: prescript puppet - missing dependency package iptables-services
[1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2014-09-05   Summary: Packstack execution fails with "Could not set 'present' on ensure"
[1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-03-19   Summary: problems with puppet-keystone LDAP support
[1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: explicit check for pymongo is incorrect
[1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: horizon log errors
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: Horizon help url in RDO points to the RHOS documentation
[1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-07-02   Summary: SSHD configuration breaks GSSAPI
[1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: The private network created by packstack for demo tenant is wrongly marked as external
[1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: prescript.pp does not ensure iptables-services package installation
[1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: swift.pp: Could not find command 'restorecon'
[1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: add aviator
[1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: packstack chokes on ironic - centos7 + juno
[1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: prescript.pp fails with '/sbin/service iptables start' returning 6
[1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: netns.py syntax error
[1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA)
    Component: openstack-puppet-modules   Last change: 2015-06-04   Summary: Unable to attach cinder volume to instance
[1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2015-09-01   Summary: RDO liberty packstack --allinone fails on demo provision of glance
[1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED)
    Component: openstack-puppet-modules   Last change: 2014-08-01   Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig/network-scripts/ifcfg-br-{int,tun}

### openstack-sahara (1 bug)

[1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED)
    Component: openstack-sahara   Last change: 2015-10-02   Summary: rootwrap filter not included in Sahara RPM

### openstack-selinux (12 bugs)

[1144539 ] http://bugzilla.redhat.com/1144539 (POST)
    Component: openstack-selinux   Last change: 2014-10-29   Summary: selinux preventing Horizon access (IceHouse, CentOS 7)
[1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA)
    Component: openstack-selinux   Last change: 2015-06-23   Summary: tempest.scenario.test_server_basic_ops.TestServerBasicOps fails to launch instance w/ selinux enforcing
[1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED)
    Component: openstack-selinux   Last change: 2015-01-22   Summary: Keystone cannot send notifications
[1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED)
    Component: openstack-selinux   Last change: 2014-05-15   Summary: neutron L3 agent RPC errors
[1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED)
    Component: openstack-selinux   Last change: 2014-06-27   Summary: Neutron is unable to create directory in /tmp
[1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED)
    Component: openstack-selinux   Last change: 2014-06-24   Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances,
[1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED)
    Component: openstack-selinux   Last change: 2014-06-24   Summary: openstack-selinux blocks communication from dashboard to identity service
[1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED)
    Component: openstack-selinux   Last change: 2015-04-06   Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors
[1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED)
    Component: openstack-selinux   Last change: 2015-03-10   Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?"
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED)
    Component: openstack-selinux   Last change: 2014-04-18   Summary: Wrong SELinux policies set for neutron-dhcp-agent
[1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA)
    Component: openstack-selinux   Last change: 2015-01-11   Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later
[1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED)
    Component: openstack-selinux   Last change: 2014-10-08   Summary: nova-api service denied tmpfs access

### openstack-swift (1 bug)

[997983 ] http://bugzilla.redhat.com/997983 (MODIFIED)
    Component: openstack-swift   Last change: 2015-01-07   Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages

### openstack-tripleo-heat-templates (1 bug)

[1235508 ] http://bugzilla.redhat.com/1235508 (POST)
    Component: openstack-tripleo-heat-templates   Last change: 2015-09-29   Summary: Package update does not take puppet managed packages into account

### openstack-trove (1 bug)

[1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA)
    Component: openstack-trove   Last change: 2015-08-19   Summary: Trove has missing dependencies

### openstack-tuskar (1 bug)

[1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA)
    Component: openstack-tuskar   Last change: 2015-07-06   Summary: MySQL Column is Too Small for Heat Template

### openstack-tuskar-ui (3 bugs)

[1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED)
    Component: openstack-tuskar-ui   Last change: 2015-06-04   Summary: Registering nodes with the IPMI driver always fails
[1203859 ] http://bugzilla.redhat.com/1203859 (POST)
    Component: openstack-tuskar-ui   Last change: 2015-06-04   Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py
[1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED)
    Component: openstack-tuskar-ui   Last change: 2015-06-04   Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path

### openstack-utils (2 bugs)

[1214044 ] http://bugzilla.redhat.com/1214044 (POST)
    Component: openstack-utils   Last change: 2015-06-04   Summary: update openstack-status for rdo-manager
[1213150 ] http://bugzilla.redhat.com/1213150 (POST)
    Component: openstack-utils   Last change: 2015-06-04   Summary: openstack-status as admin falsely shows zero instances

### python-cinderclient (1 bug)

[1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED)
    Component: python-cinderclient   Last change: 2014-01-13   Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run

### python-django-horizon (3 bugs)

[1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA)
    Component: python-django-horizon   Last change: 2015-05-08   Summary: Wrong permissions for directory /usr/share/openstack-dashboard/static/dashboard/
[1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED)
    Component: python-django-horizon   Last change: 2015-04-14   Summary: Need to add alias in openstack-dashboard.conf to show CSS content
[1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA)
    Component: python-django-horizon   Last change: 2015-06-24   Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one

### python-glanceclient (2 bugs)

[1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA)
    Component: python-glanceclient   Last change: 2015-04-03   Summary: Missing requires of python-warlock
[1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA)
    Component: python-glanceclient   Last change: 2015-04-03   Summary: Missing requires of python-jsonpatch

### python-heatclient (3 bugs)

[1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED)
    Component: python-heatclient   Last change: 2015-02-01   Summary: python-heatclient needs a dependency on python-pbr
[1087089 ] http://bugzilla.redhat.com/1087089 (POST)
    Component: python-heatclient   Last change: 2015-02-01   Summary: python-heatclient 0.2.9 requires packaging in RDO
[1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED)
    Component: python-heatclient   Last change: 2015-02-01   Summary: heat.bash_completion not installed

### python-keystoneclient (3 bugs)

[973263 ] http://bugzilla.redhat.com/973263 (POST)
    Component: python-keystoneclient   Last change: 2015-06-04   Summary: user-get fails when using IDs which are not UUIDs
[1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED)
    Component: python-keystoneclient   Last change: 2015-06-04   Summary: keystone missing tab completion
[971746 ] http://bugzilla.redhat.com/971746 (MODIFIED)
    Component: python-keystoneclient   Last change: 2015-06-04   Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO]

### python-neutronclient (3 bugs)

[1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED)
    Component: python-neutronclient   Last change: 2014-02-12   Summary: [RFE] python-neutronclient new version request
[1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA)
    Component: python-neutronclient   Last change: 2014-03-26   Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info()
[1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED)
    Component: python-neutronclient   Last change: 2014-06-24   Summary: Neutronclient should not obsolete quantumclient

### python-novaclient (1 bug)

[947535 ] http://bugzilla.redhat.com/947535 (MODIFIED)
    Component: python-novaclient   Last change: 2015-06-04   Summary: nova commands fail with gnomekeyring IOError

### python-openstackclient (1 bug)

[1171191 ] http://bugzilla.redhat.com/1171191 (POST)
    Component: python-openstackclient   Last change: 2015-03-02   Summary: Rebase python-openstackclient to version 1.0.0

### python-oslo-config (1 bug)

[1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA)
    Component: python-oslo-config   Last change: 2015-06-04   Summary: oslo.config >=1.2.1 is required for trove-manage

### python-pecan (1 bug)

[1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED)
    Component: python-pecan   Last change: 2015-10-05   Summary: Neutron missing pecan dependency

### python-swiftclient (1 bug)

[1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED)
    Component: python-swiftclient   Last change: 2014-09-16   Summary: Swift pseudo-folder cannot be interacted with after creation

### python-tuskarclient (2 bugs)

[1209395 ] http://bugzilla.redhat.com/1209395 (POST)
    Component: python-tuskarclient   Last change: 2015-06-04   Summary: `tuskar help` is missing a description next to plan-templates
[1209431 ] http://bugzilla.redhat.com/1209431 (POST)
    Component: python-tuskarclient   Last change: 2015-06-18   Summary: creating a tuskar plan with the exact name gives the user a traceback

### rdo-manager (6 bugs)

[1212351 ] http://bugzilla.redhat.com/1212351 (POST)
    Component: rdo-manager   Last change: 2015-06-18   Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command
[1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED)
    Component: rdo-manager   Last change: 2015-04-15   Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails
[1268992 ] http://bugzilla.redhat.com/1268992 (POST)
    Component: rdo-manager   Last change: 2015-10-07   Summary: [RDO-Manager][Liberty]: openstack baremetal introspection bulk start causes "Internal server error" (introspection fails).
[1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED)
    Component: rdo-manager   Last change: 2015-05-25   Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable
[1251267 ] http://bugzilla.redhat.com/1251267 (POST)
    Component: rdo-manager   Last change: 2015-08-12   Summary: Overcloud deployment fails for unspecified reason
[1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED)
    Component: rdo-manager   Last change: 2015-05-29   Summary: rdo-manager: fail to discover nodes with "instack-ironic-deployment --discover-nodes": ERROR: Data pre-processing failed

### rdo-manager-cli (8 bugs)

[1212367 ] http://bugzilla.redhat.com/1212367 (POST)
    Component: rdo-manager-cli   Last change: 2015-06-16   Summary: Ensure proper nodes states after enroll and before deployment
[1233429 ] http://bugzilla.redhat.com/1233429 (POST)
    Component: rdo-manager-cli   Last change: 2015-06-20   Summary: Lack of consistency in specifying plan argument for openstack overcloud commands
[1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED)
    Component: rdo-manager-cli   Last change: 2015-08-03   Summary: Node show of unified CLI has bad formatting
[1232838 ] http://bugzilla.redhat.com/1232838 (POST)
    Component: rdo-manager-cli   Last change: 2015-09-04   Summary: OSC plugin isn't saving plan configuration values
[1229912 ] http://bugzilla.redhat.com/1229912 (POST)
    Component: rdo-manager-cli   Last change: 2015-06-10   Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once).
[1219053 ] http://bugzilla.redhat.com/1219053 (POST)
    Component: rdo-manager-cli   Last change: 2015-06-18   Summary: "list" command doesn't display nodes in some cases
[1211190 ] http://bugzilla.redhat.com/1211190 (POST)
    Component: rdo-manager-cli   Last change: 2015-06-04   Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI
[1230265 ] http://bugzilla.redhat.com/1230265 (POST)
    Component: rdo-manager-cli   Last change: 2015-06-26   Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated.

### rdopkg (1 bug)

[1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA)
    Component: rdopkg   Last change: 2015-08-06   Summary: python-manilaclient is missing from kilo RDO repository

Thanks,
Chandan Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dms at redhat.com  Wed Oct 7 21:59:54 2015
From: dms at redhat.com (David Moreau Simard)
Date: Wed, 7 Oct 2015 17:59:54 -0400
Subject: [Rdo-list] delorean.repo vs delorean-deps.repo
In-Reply-To: <1512777106.65406325.1444212750519.JavaMail.zimbra@redhat.com>
References: <1512777106.65406325.1444212750519.JavaMail.zimbra@redhat.com>
Message-ID:

So, following a lengthy discussion on IRC and on gerrit [1], we realized it
wasn't as easy as it seemed at first glance. It did raise some interesting
questions, though.

There are really three repositories for liberty right now:
- delorean (from delorean.repo)
- delorean-liberty-testing (from delorean-deps.repo)
- delorean-common-testing (from delorean-deps.repo)

For the packages in [delorean] to work properly, they need the dependencies
found in [delorean-liberty-testing] and [delorean-common-testing].
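(As a sketch, the split being discussed looks roughly like this; the baseurls
are illustrative placeholders in the style of the trunk.rdoproject.org and
CBS URLs quoted elsewhere in this thread, not the actual file contents:)

  # delorean.repo -- regenerated by Delorean for every build
  [delorean]
  name=delorean
  baseurl=http://trunk.rdoproject.org/liberty/centos7/<build-id>/
  enabled=1
  gpgcheck=0

  # delorean-deps.repo -- hand-maintained, holds the two dependency repos
  [delorean-liberty-testing]
  name=delorean-liberty-testing
  baseurl=<CBS liberty-testing repo URL>
  enabled=1
  gpgcheck=0

  [delorean-common-testing]
  name=delorean-common-testing
  baseurl=<CBS common-testing repo URL>
  enabled=1
  gpgcheck=0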
The naive idea I submitted was "Why not just bundle the two -testing repos
inside delorean.repo?". Yes, this could work for the moment - but let's
pretend that we have CI pass over a set of packages and then promote a
delorean repository to "current-passed-ci". Some time passes by and the
packages in either delorean-liberty-testing or delorean-common-testing
change and actually break the build that we tagged as passing CI.

Since not all of the packages are in any given repository and each repository
can change independently of the others, we are unable to guarantee that a
specific repository that passes today will pass tomorrow.

So, what do we do ?
- Pull in every liberty-testing and common-testing package into every
  delorean repository ? Sounds expensive but is possibly what Ubuntu does [2]
- Also promote a set of -deps repos when we promote a delorean repo ? How ?

I see three objectives in trying to improve the current situation:
- Streamline/improve the system and user experience by having one repository
  to add, not two
- Improve stability by making sure builds that passed CI will remain stable
- Ensure builds and their results are reproducible every time

Any ideas ?

[1]: https://review.gerrithub.io/249207
[2]: http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/trusty-updates/liberty/main/binary-amd64/Packages

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Wed, Oct 7, 2015 at 6:12 AM, Javier Pena wrote:
>
>
> ----- Original Message -----
>> > I was wondering why these two files were split up if one can't be used
>> > without the other ?
>> > Can we bundle delorean-deps.repo inside delorean.repo ? There'd be
>> > three repositories in the file.
>>
>> delorean-deps.repo is a single file which can be changed when we move
>> repos e.g. cbs.centos.org/repos/ will be blocked soon and we'll need
>> to switch to the mirror on buildlogs.c.o and eventually to the release
>> repos on mirror.c.o.
>> delorean.repo is a static file generated by Delorean when it runs so
>> it can't be changed as easily.
>>
>> > It'd be simpler for both users and systems to consume this one
>> > repository file with everything we need in it.
>>
>> Good point, maybe we could figure out something using server-side include:
>> https://github.com/redhat-openstack/delorean-instance/issues/14
>>
>
> It may be easier if we can just patch delorean to include the contents of
> delorean-deps.repo into delorean.repo. I have proposed this in
> https://review.gerrithub.io/249207
>
> Cheers,
> Javier
>
>> Cheers,
>> Alan
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>

From mcornea at redhat.com  Wed Oct 7 22:51:23 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Wed, 7 Oct 2015 18:51:23 -0400 (EDT)
Subject: [Rdo-list] Nodes introspection
In-Reply-To: <1440797309.38330665.1444257351568.JavaMail.zimbra@redhat.com>
Message-ID: <1536062897.38333007.1444258283111.JavaMail.zimbra@redhat.com>

Hi everyone,

A couple of questions regarding introspection (following the upstream docs):

1. Is Swift still used for storing nodes' introspection details? I see no
Swift object resulting after running introspection.

2. During introspection I can see error messages on the VMs' console (running
a virt environment), but it's hard to record them since the VM quickly turns
off. What's the proper way to debug this?

3. The profile matching docs [1] reference ironic-discoverd, which has now
become ironic-inspector. How can we make this work? s/discoverd/inspector/ ?

Thanks,
Marius

[1] http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html

From weiler at soe.ucsc.edu  Wed Oct 7 23:35:49 2015
From: weiler at soe.ucsc.edu (Erich Weiler)
Date: Wed, 7 Oct 2015 16:35:49 -0700
Subject: [Rdo-list] Jumbo MTU to instances in Kilo?
In-Reply-To: <56154EE8.6000908@soe.ucsc.edu>
References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu>
Message-ID: <5615AC55.2040701@soe.ucsc.edu>

Actually I think I'm closer - on the compute nodes, I set this in nova.conf:

network_device_mtu=9000

even though there was a big note above it that said not to use it because
this option was deprecated. But after setting that option and restarting nova
and openvswitch, br-int, my tap device and my qvb device all got set to
MTU=9000. So I'm closer! But still one item is blocking me. I show this
tracepath from my controller node direct to the VM (which is on a compute
node on the local network):

# tracepath 10.50.100.4
 1?: [LOCALHOST]     pmtu 9000
 1:  10.50.100.4     0.682ms
 1:  10.50.100.4     0.241ms
 2:  10.50.100.4     0.297ms pmtu 1500
 2:  10.50.100.4     1.664ms reached

10.50.100.4 is the VM. It looks like the path is jumbo clean up until that
third hop. But the thing is, I don't know what the third hop is. ;)

On my compute node I still see some stuff with MTU=1500, but I'm not sure if
one of those is blocking me:

# ifconfig
br-enp3s0f0: flags=4163  mtu 9000
    inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20
    ether 0c:c4:7a:58:42:3e  txqueuelen 0  (Ethernet)
    RX packets 2401498  bytes 359284253 (342.6 MiB)    TX packets 30  bytes 1572 (1.5 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

br-int: flags=4163  mtu 9000
    inet6 fe80::64dc:94ff:fe35:db4c  prefixlen 64  scopeid 0x20
    ether 66:dc:94:35:db:4c  txqueuelen 0  (Ethernet)
    RX packets 133  bytes 12934 (12.6 KiB)    TX packets 8  bytes 648 (648.0 B)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

enp3s0f0: flags=4419  mtu 9000
    inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20
    ether 0c:c4:7a:58:42:3e  txqueuelen 1000  (Ethernet)
    RX packets 165957142  bytes 20333410092 (18.9 GiB)    TX packets 23299881  bytes 5950708819 (5.5 GiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

enp3s0f0.50: flags=4163  mtu 9000
    inet 10.50.1.236  netmask 255.255.0.0  broadcast 10.50.255.255
    inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20
    ether 0c:c4:7a:58:42:3e  txqueuelen 0  (Ethernet)
    RX packets 6014767  bytes 813880745 (776.1 MiB)    TX packets 79301  bytes 19052451 (18.1 MiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
    inet 127.0.0.1  netmask 255.0.0.0
    inet6 ::1  prefixlen 128  scopeid 0x10
    loop  txqueuelen 0  (Local Loopback)
    RX packets 22462729  bytes 1202484822 (1.1 GiB)    TX packets 22462729  bytes 1202484822 (1.1 GiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qbr922bd9f5-bb: flags=4163  mtu 9000
    inet6 fe80::4c1a:55ff:feba:14c3  prefixlen 64  scopeid 0x20
    ether 56:a6:a6:db:83:c4  txqueuelen 0  (Ethernet)
    RX packets 16  bytes 1520 (1.4 KiB)    TX packets 8  bytes 648 (648.0 B)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qbrf42ea01f-fe: flags=4163  mtu 1500
    inet6 fe80::f484:f1ff:fe53:fb2e  prefixlen 64  scopeid 0x20
    ether c2:a6:d8:25:63:ea  txqueuelen 0  (Ethernet)
    RX packets 15  bytes 1456 (1.4 KiB)    TX packets 8  bytes 648 (648.0 B)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qvb922bd9f5-bb: flags=4419  mtu 9000
    inet6 fe80::54a6:a6ff:fedb:83c4  prefixlen 64  scopeid 0x20
    ether 56:a6:a6:db:83:c4  txqueuelen 1000  (Ethernet)
    RX packets 86  bytes 9610 (9.3 KiB)    TX packets 133  bytes 12767 (12.4 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qvbf42ea01f-fe: flags=4419  mtu 1500
    inet6 fe80::c0a6:d8ff:fe25:63ea  prefixlen 64  scopeid 0x20
    ether c2:a6:d8:25:63:ea  txqueuelen 1000  (Ethernet)
    RX packets 377  bytes 57664 (56.3 KiB)    TX packets 333  bytes 38765 (37.8 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qvo922bd9f5-bb: flags=4419  mtu 9000
    inet6 fe80::b44a:bff:fe72:aaea  prefixlen 64  scopeid 0x20
    ether b6:4a:0b:72:aa:ea  txqueuelen 1000  (Ethernet)
    RX packets 133  bytes 12767 (12.4 KiB)    TX packets 86  bytes 9610 (9.3 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qvof42ea01f-fe: flags=4419  mtu 1500
    inet6 fe80::f03e:35ff:fefe:e52  prefixlen 64  scopeid 0x20
    ether f2:3e:35:fe:0e:52  txqueuelen 1000  (Ethernet)
    RX packets 333  bytes 38765 (37.8 KiB)    TX packets 377  bytes 57664 (56.3 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

tap922bd9f5-bb: flags=4163  mtu 9000
    inet6 fe80::fc16:3eff:fefa:9945  prefixlen 64  scopeid 0x20
    ether fe:16:3e:fa:99:45  txqueuelen 500  (Ethernet)
    RX packets 118  bytes 11561 (11.2 KiB)    TX packets 95  bytes 10316 (10.0 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

virbr0: flags=4099  mtu 1500
    inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
    ether 52:54:00:c4:75:9f  txqueuelen 0  (Ethernet)
    RX packets 0  bytes 0 (0.0 B)    TX packets 0  bytes 0 (0.0 B)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

My network node has all interfaces set to MTU=9000. I thought maybe the
bottleneck might be there but I don't think it is.
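(A sketch of how that mystery hop can be pinned down, assuming the tenant
router is a Neutron network namespace on the network node, which is what it
turns out to be later in this thread; the qrouter UUID below is just an
example:)

  # list the namespaces on the network node
  ip netns
  # every device line printed by ip link includes its mtu, so this shows
  # the MTU of each interface inside the suspect router namespace
  ip netns exec qrouter-<uuid> ip link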
Here's ifconfig from my network node:

# ifconfig
lo: flags=73  mtu 65536
    inet 127.0.0.1  netmask 255.0.0.0
    inet6 ::1  prefixlen 128  scopeid 0x10
    loop  txqueuelen 0  (Local Loopback)
    RX packets 2042  bytes 238727 (233.1 KiB)    TX packets 2042  bytes 238727 (233.1 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

p1p2: flags=4163  mtu 9000
    inet6 fe80::207:43ff:fe10:deb8  prefixlen 64  scopeid 0x20
    ether 00:07:43:10:de:b8  txqueuelen 1000  (Ethernet)
    RX packets 2156053308  bytes 325330839639 (302.9 GiB)    TX packets 223004  bytes 24769304 (23.6 MiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
    device interrupt 72

p2p1: flags=4163  mtu 9000
    inet 10.50.1.51  netmask 255.255.0.0  broadcast 10.50.255.255
    inet6 fe80::260:ddff:fe44:2aea  prefixlen 64  scopeid 0x20
    ether 00:60:dd:44:2a:ea  txqueuelen 1000  (Ethernet)
    RX packets 49352916  bytes 3501547231 (3.2 GiB)    TX packets 18876911  bytes 3768900461 (3.5 GiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

p2p2: flags=4163  mtu 9000
    inet6 fe80::260:ddff:fe44:2aeb  prefixlen 64  scopeid 0x20
    ether 00:60:dd:44:2a:eb  txqueuelen 1000  (Ethernet)
    RX packets 2491224974  bytes 348058319500 (324.1 GiB)    TX packets 1597  bytes 204525 (199.7 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

Any way I can figure out what the third hop is from my tracepath?

Thanks as always for the sage advice!

-erich

On 10/07/2015 09:57 AM, Erich Weiler wrote:
> Yeah, I made the changes and then recreated all the networks. For some
> reason br-int and the individual virtual instance interfaces on the
> compute node still show 1500 byte frames.
>
> Has anyone else configured jumbo frames in a Kilo environment? Or maybe
> I'm just an outlier... ;)
>
> -erich
>
> On 10/07/2015 01:46 AM, Pedro Navarro Perez wrote:
>> Hi Erich,
>>
>> did you recreate the neutron networks after the configuration changes?
>>
>> Pedro Navarro Pérez
>> OpenStack product specialist
>> Red Hat Iberia
>> Passeig de Gràcia 120,
>> 08008 Barcelona
>> Spain
>> M +34 639 642 379
>> E pnavarro at redhat.com
>>
>> ----- Original Message -----
>> From: "Erich Weiler"
>> To: rdo-list at redhat.com
>> Sent: Wednesday, 7 October, 2015 2:34:28 AM
>> Subject: [Rdo-list] Jumbo MTU to instances in Kilo?
>>
>> Hi Y'all,
>>
>> I know someone must have figured this one out, but I can't seem to get
>> 9000 byte MTUs working. I have it set in plugin.ini, etc, my nodes have
>> MTU=9000 on their interfaces, so does the network node. dnsmasq also is
>> configured to set MTU=9000 on instances, which works. But I still can't
>> ping with large packets to my instance:
>>
>> [weiler at stacker ~]$ ping 10.50.100.2
>> PING 10.50.100.2 (10.50.100.2) 56(84) bytes of data.
>> 64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms
>> 64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms
>> 64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms
>>
>> That works fine. This however doesn't work:
>>
>> [root at stacker ~]# ping -M do -s 8000 10.50.100.2
>> PING 10.50.100.2 (10.50.100.2) 8000(8028) bytes of data.
>> From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
>> ping: local error: Message too long, mtu=1500
>> ping: local error: Message too long, mtu=1500
>> ping: local error: Message too long, mtu=1500
>> ping: local error: Message too long, mtu=1500
>>
>> It looks like somehow the br-int interface for OVS isn't set at 9000,
>> but I can't figure out how to do that...
>>
>> Here's ifconfig on my compute node:
>>
>> br-enp3s0f0: flags=4163  mtu 9000
>>     inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20
>>     ether 0c:c4:7a:58:42:3e  txqueuelen 0  (Ethernet)
>>     RX packets 2401432  bytes 359276713 (342.6 MiB)    TX packets 30  bytes 1572 (1.5 KiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> br-int: flags=4163  mtu 1500
>>     inet6 fe80::64dc:94ff:fe35:db4c  prefixlen 64  scopeid 0x20
>>     ether 66:dc:94:35:db:4c  txqueuelen 0  (Ethernet)
>>     RX packets 69  bytes 6866 (6.7 KiB)    TX packets 8  bytes 648 (648.0 B)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> enp3s0f0: flags=4419  mtu 9000
>>     inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20
>>     ether 0c:c4:7a:58:42:3e  txqueuelen 1000  (Ethernet)
>>     RX packets 130174458  bytes 15334807929 (14.2 GiB)    TX packets 22919305  bytes 5859090420 (5.4 GiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> enp3s0f0.50: flags=4163  mtu 9000
>>     inet 10.50.1.236  netmask 255.255.0.0  broadcast 10.50.255.255
>>     inet6 fe80::ec4:7aff:fe58:423e  prefixlen 64  scopeid 0x20
>>     ether 0c:c4:7a:58:42:3e  txqueuelen 0  (Ethernet)
>>     RX packets 38429352  bytes 5152853436 (4.7 GiB)    TX packets 419842  bytes 101161981 (96.4 MiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> lo: flags=73  mtu 65536
>>     inet 127.0.0.1  netmask 255.0.0.0
>>     inet6 ::1  prefixlen 128  scopeid 0x10
>>     loop  txqueuelen 0  (Local Loopback)
>>     RX packets 22141566  bytes 1185622090 (1.1 GiB)    TX packets 22141566  bytes 1185622090 (1.1 GiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> qbr247da3ed-a4: flags=4163  mtu 1500
>>     inet6 fe80::5c8f:c0ff:fe79:bc11  prefixlen 64  scopeid 0x20
>>     ether b6:1f:54:3f:3d:48  txqueuelen 0  (Ethernet)
>>     RX packets 16  bytes 1472 (1.4 KiB)    TX packets 8  bytes 648 (648.0 B)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> qbrf42ea01f-fe: flags=4163  mtu 1500
>>     inet6 fe80::f484:f1ff:fe53:fb2e  prefixlen 64  scopeid 0x20
>>     ether c2:a6:d8:25:63:ea  txqueuelen 0  (Ethernet)
>>     RX packets 15  bytes 1456 (1.4 KiB)    TX packets 8  bytes 648 (648.0 B)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> qvb247da3ed-a4: flags=4419  mtu 1500
>>     inet6 fe80::b41f:54ff:fe3f:3d48  prefixlen 64  scopeid 0x20
>>     ether b6:1f:54:3f:3d:48  txqueuelen 1000  (Ethernet)
>>     RX packets 247  bytes 28323 (27.6 KiB)    TX packets 233  bytes 25355 (24.7 KiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> qvbf42ea01f-fe: flags=4419  mtu 1500
>>     inet6 fe80::c0a6:d8ff:fe25:63ea  prefixlen 64  scopeid 0x20
>>     ether c2:a6:d8:25:63:ea  txqueuelen 1000  (Ethernet)
>>     RX packets 377  bytes 57664 (56.3 KiB)    TX packets 333  bytes 38765 (37.8 KiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> qvo247da3ed-a4: flags=4419  mtu 1500
>>     inet6 fe80::dcfa:f1ff:fe03:ee88  prefixlen 64  scopeid 0x20
>>     ether de:fa:f1:03:ee:88  txqueuelen 1000  (Ethernet)
>>     RX packets 233  bytes 25355 (24.7 KiB)    TX packets 247  bytes 28323 (27.6 KiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> qvof42ea01f-fe: flags=4419  mtu 1500
>>     inet6 fe80::f03e:35ff:fefe:e52  prefixlen 64  scopeid 0x20
>>     ether f2:3e:35:fe:0e:52  txqueuelen 1000  (Ethernet)
>>     RX packets 333  bytes 38765 (37.8 KiB)    TX packets 377  bytes 57664 (56.3 KiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> tap247da3ed-a4: flags=4163  mtu 1500
>>     inet6 fe80::fc16:3eff:fede:5eea  prefixlen 64  scopeid 0x20
>>     ether fe:16:3e:de:5e:ea  txqueuelen 500  (Ethernet)
>>     RX packets 219  bytes 24239 (23.6 KiB)    TX packets 224  bytes 26661 (26.0 KiB)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> virbr0: flags=4099  mtu 1500
>>     inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
>>     ether 52:54:00:c4:75:9f  txqueuelen 0  (Ethernet)
>>     RX packets 0  bytes 0 (0.0 B)    TX packets 0  bytes 0 (0.0 B)
>>     RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0
>>
>> This is on RHEL 7.1. Any obvious way I can get all the intermediate
>> bridges to MTU=9000? I've RTFM'd and googled to no avail...
>>
>> Here's the ovs-vsctl output:
>>
>> [root at node-136 ~]# ovs-vsctl show
>> 6f5a5f00-59e2-4420-aeaf-7ad464ead232
>>     Bridge br-int
>>         fail_mode: secure
>>         Port br-int
>>             Interface br-int
>>                 type: internal
>>         Port "qvo247da3ed-a4"
>>             tag: 1
>>             Interface "qvo247da3ed-a4"
>>         Port "int-br-eth1"
>>             Interface "int-br-eth1"
>>         Port "int-br-enp3s0f0"
>>             Interface "int-br-enp3s0f0"
>>                 type: patch
>>                 options: {peer="phy-br-enp3s0f0"}
>>     Bridge "br-enp3s0f0"
>>         Port "enp3s0f0"
>>             Interface "enp3s0f0"
>>         Port "br-enp3s0f0"
>>             Interface "br-enp3s0f0"
>>                 type: internal
>>         Port "phy-br-enp3s0f0"
>>             Interface "phy-br-enp3s0f0"
>>                 type: patch
>>                 options: {peer="int-br-enp3s0f0"}
>>     ovs_version: "2.3.1"
>>
>> Many thanks if anyone has any information on this topic! Or can point
>> me to some documentation I missed...
>>
>> Thanks,
>> erich
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>

From weiler at soe.ucsc.edu  Wed Oct 7 23:52:37 2015
From: weiler at soe.ucsc.edu (Erich Weiler)
Date: Wed, 7 Oct 2015 16:52:37 -0700
Subject: [Rdo-list] Jumbo MTU to instances in Kilo?
In-Reply-To: <5615AC55.2040701@soe.ucsc.edu>
References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu>
Message-ID: <5615B045.1070503@soe.ucsc.edu>

Actually I was wrong, it WAS on the network node. The virtual router
interfaces were not set to MTU=9000.
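(A sketch of a possible permanent fix, assuming Kilo's neutron still honors
the same deprecated network_device_mtu option that nova does, so that new
qg-/qr- devices come up at 9000; the manual workaround follows below:)

  # /etc/neutron/neutron.conf on the network node (also read by the agents)
  [DEFAULT]
  network_device_mtu = 9000

  # then restart the agents so router/DHCP devices are recreated at 9000
  systemctl restart neutron-l3-agent neutron-dhcp-agent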
On network node:

[root at os-net-01 ~]# ip netns
qdhcp-c395cff9-af7b-4456-91e3-3c55e6c2c5f5
qrouter-0b52e3a6-135c-4481-b286-7c96229f6555
[root at os-net-01 ~]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig
lo: flags=73  mtu 65536
    inet 127.0.0.1  netmask 255.0.0.0
    inet6 ::1  prefixlen 128  scopeid 0x10
    loop  txqueuelen 0  (Local Loopback)
    RX packets 0  bytes 0 (0.0 B)    TX packets 0  bytes 0 (0.0 B)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qg-fa1e2a28-25: flags=4163  mtu 1500
    inet 10.50.100.1  netmask 255.255.0.0  broadcast 10.50.255.255
    inet6 fe80::f816:3eff:fe6a:608b  prefixlen 64  scopeid 0x20
    ether fa:16:3e:6a:60:8b  txqueuelen 0  (Ethernet)
    RX packets 34071065  bytes 5046408745 (4.6 GiB)    TX packets 442  bytes 51915 (50.6 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qr-51904c89-b8: flags=4163  mtu 1500
    inet 10.100.0.1  netmask 255.255.0.0  broadcast 10.100.255.255
    inet6 fe80::f816:3eff:fe37:eca6  prefixlen 64  scopeid 0x20
    ether fa:16:3e:37:ec:a6  txqueuelen 0  (Ethernet)
    RX packets 702  bytes 75369 (73.6 KiB)    TX packets 814  bytes 92259 (90.0 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

I can fix it manually:

[root at os-net-01 neutron]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qg-fa1e2a28-25 mtu 9000
[root at os-net-01 neutron]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qr-51904c89-b8 mtu 9000
[root at os-net-01 neutron]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig
lo: flags=73  mtu 65536
    inet 127.0.0.1  netmask 255.0.0.0
    inet6 ::1  prefixlen 128  scopeid 0x10
    loop  txqueuelen 0  (Local Loopback)
    RX packets 0  bytes 0 (0.0 B)    TX packets 0  bytes 0 (0.0 B)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qg-fa1e2a28-25: flags=4163  mtu 9000
    inet 10.50.100.1  netmask 255.255.0.0  broadcast 10.50.255.255
    inet6 fe80::f816:3eff:fe6a:608b  prefixlen 64  scopeid 0x20
    ether fa:16:3e:6a:60:8b  txqueuelen 0  (Ethernet)
    RX packets 34086053  bytes 5048637833 (4.7 GiB)    TX packets 442  bytes 51915 (50.6 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

qr-51904c89-b8: flags=4163  mtu 9000
    inet 10.100.0.1  netmask 255.255.0.0  broadcast 10.100.255.255
    inet6 fe80::f816:3eff:fe37:eca6  prefixlen 64  scopeid 0x20
    ether fa:16:3e:37:ec:a6  txqueuelen 0  (Ethernet)
    RX packets 702  bytes 75369 (73.6 KiB)    TX packets 814  bytes 92259 (90.0 KiB)
    RX/TX errors 0  dropped 0  overruns 0  frame 0  carrier 0  collisions 0

And then I have a jumbo clean path everywhere! All is good then. But... how
do I set this in a config file or something so I don't have to do it
manually? I found this bug report:

https://bugs.launchpad.net/neutron/+bug/1311097

Anyone know if that bug is still out there? Or how can I set the virtual
router interfaces' MTU by default when I create the router?

cheers,
erich

On 10/07/2015 04:35 PM, Erich Weiler wrote:
> Actually I think I'm closer - on the compute nodes, I set this in
> nova.conf:
>
> network_device_mtu=9000
>
> even though there was a big note above it that said not to use it
> because this option was deprecated. But after setting that option, and
> restarting nova and openvswitch, br-int, my tap device and my qvb device
> all got set to MTU=9000. So I'm closer! But still one item is blocking
> me.
> I show this tracepath from my controller node direct to the VM
> (which is on a compute node on the local network):
>
> [the tracepath, compute-node ifconfig and network-node ifconfig output
> from the message above, together with the earlier quoted messages in
> this thread, quoted verbatim here, trimmed]
>>>
>>> Here's the ovs-vsctl output:
>>>
>>> [root at node-136 ~]# ovs-vsctl show
>>> 6f5a5f00-59e2-4420-aeaf-7ad464ead232
>>>     Bridge br-int
>>>         fail_mode: secure
>>>         Port br-int
>>>             Interface br-int
>>>                 type: internal
>>>         Port "qvo247da3ed-a4"
>>>             tag: 1
>>>             Interface "qvo247da3ed-a4"
>>>         Port "int-br-eth1"
>>>             Interface "int-br-eth1"
>>>         Port "int-br-enp3s0f0"
>>>             Interface "int-br-enp3s0f0"
>>>                 type: patch
>>>                 options: {peer="phy-br-enp3s0f0"}
>>>     Bridge "br-enp3s0f0"
>>>         Port "enp3s0f0"
>>>             Interface "enp3s0f0"
>>>         Port "br-enp3s0f0"
>>>             Interface "br-enp3s0f0"
>>>                 type: internal
>>>         Port "phy-br-enp3s0f0"
>>>             Interface "phy-br-enp3s0f0"
>>>                 type: patch
>>>                 options: {peer="int-br-enp3s0f0"}
>>>     ovs_version: "2.3.1"
>>>
>>> Many thanks if anyone has any information on this topic! Or can point
>>> me to some documentation I missed...
>>>
>>> Thanks,
>>> erich
>>>
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>

From dtantsur at redhat.com  Thu Oct  8 08:18:45 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Thu, 8 Oct 2015 10:18:45 +0200
Subject: [Rdo-list] Nodes introspection
In-Reply-To: <1536062897.38333007.1444258283111.JavaMail.zimbra@redhat.com>
References: <1536062897.38333007.1444258283111.JavaMail.zimbra@redhat.com>
Message-ID: <561626E5.7070300@redhat.com>

On 10/08/2015 12:51 AM, Marius Cornea wrote:
> Hi everyone,
>
> A couple of questions regarding introspection (following the upstream docs):
>
> 1. Is Swift still used for storing nodes introspection details? I see
> no Swift object resulting after running introspection.

I think it's temporarily disabled due to the transition discoverd ->
inspector, and will be reenabled as soon as we fix the high priority
issues.

>
> 2. During introspection I can see error messages on the VMs console
> (running a virt environment) but it's hard to record them since the VM
> quickly turns off. What's the proper way to debug this?

It is ironic-python-agent, right? If so, I've noted that it always
fails to connect to inspector. Then it is restarted by systemd, and the
second attempt succeeds. We'll have a procedure in place to collect logs
from the ramdisk, but it's not wired in yet either. Anyway, if the
introspection works at all, you should not be too concerned for now :)
For the console messages themselves, one workaround is sketched below.

>
> 3. The profile matching docs[1] reference ironic-discoverd which has
> now become ironic-inspector. How can we make this work?
> s/discoverd/inspector/ ?

Profile matching is in an especially uncertain state right now. Thanks
for the reminder; we should start sorting it out as soon as
introspection is part of the gate again.
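The console workaround, as a rough sketch: on a virt environment the
nodes are just libvirt domains on the testenv host, so you can attach to
a node's serial console and tee it to a file while introspection runs.
The domain name "baremetal_0" below is a placeholder, and I'm assuming
the ramdisk actually logs to the serial console:

    # on the virt host: find the domain backing the node being introspected
    virsh list --all
    # attach before the node powers on and keep a copy of everything printed
    virsh console baremetal_0 2>&1 | tee /tmp/baremetal_0-console.log

The log then survives even though the VM turns off quickly.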
> > Thanks, > Marius > > [1] http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From javier.pena at redhat.com Thu Oct 8 09:09:10 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 8 Oct 2015 05:09:10 -0400 (EDT) Subject: [Rdo-list] delorean.repo vs delorean-deps.repo In-Reply-To: References: <1512777106.65406325.1444212750519.JavaMail.zimbra@redhat.com> Message-ID: <1347286070.67237283.1444295350433.JavaMail.zimbra@redhat.com> ----- Original Message ----- > So, following a lengthy discussion on IRC and on gerrit [1], we > realized it wasn't as easy as it seemed at first glance. > It did raise some interesting questions, though. > > There are really three repositories for liberty right now: > - delorean (from delorean.repo) > - delorean-liberty-testing (from delorean-deps.repo) > - delorean-common-testing (from delorean-deps.repo) > > For the packages in [delorean] to work properly, it needs the > dependencies found in [delorean-liberty-testing] and > [delorean-common-testing]. > The naive idea I submitted was "Why not just bundled the two -testing > repos inside delorean.repo ?" . > > Yes, this could work momentarily - but let's pretend that we have CI > pass over a set of packages and then promote a delorean repository to > "current-passed-ci". > Some time passes by and the packages in either > delorean-liberty-testing or delorean-common-testing change and > actually breaks the build that we tagged as passing CI. > > Since all the packages are not in every given repository and each > repository can change independently of the others, we are unable to > guarantee that a specific repository that passes today will pass > tomorrow. > So, what do we do ? > - Pull in every liberty-testing and common-testing packages to every > delorean repository ? Sounds expensive but is possibly what Ubuntu > does [2] > - Also promote a set of -deps repo when we promote a delorean repo ? How ? > Would we want to keep all passed-ci repos over time? It should not be too hard to make the promote script fetch all packages in delorean-liberty-testing and delorean-common-testing during promotion and rebuild the repo with them. However, it could increase storage usage (~330 MB at the moment). Cheers, Javier > I see three objectives in trying to improve the current situation: > - Streamline/improve system and user experience by having one > repository to add, not two > - Improve stability by making sure builds that passed will remain stable > - Ensure builds and their results are reproducible every time > > Any ideas ? > > [1]: https://review.gerrithub.io/249207 > [2]: > http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/trusty-updates/liberty/main/binary-amd64/Packages > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Oct 7, 2015 at 6:12 AM, Javier Pena wrote: > > > > > > ----- Original Message ----- > >> > I was wondering why these two files were split up if one can't be used > >> > without the other ? > >> > Can we bundle delorean-deps.repo inside delorean.repo ? There'd be > >> > three repositories in the file. > >> > >> delorean-deps.repo is a single file which can be changed when we move > >> repos e.g. 
cbs.centos.org/repos/ will be blocked soon and we'll need > >> to switch to the mirror on buildlogs.c.o and eventually to the release > >> repos on mirror.c.o. > >> delorean.repo is a static file generated by Delorean when it runs so > >> it can't be changed as easily. > >> > >> > It'd be simpler for both users and systems to consume this one > >> > repository file with everything we need in it. > >> > >> Good point, maybe we could figure out something using server-side include: > >> https://github.com/redhat-openstack/delorean-instance/issues/14 > >> > > > > It may be easier if we can just patch delorean to include the contents of > > delorean-deps.repo into delorean.repo. I have proposed this in > > https://review.gerrithub.io/249207 > > > > Cheers, > > Javier > > > >> Cheers, > >> Alan > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From apevec at gmail.com Thu Oct 8 09:40:31 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 8 Oct 2015 11:40:31 +0200 Subject: [Rdo-list] delorean.repo vs delorean-deps.repo In-Reply-To: <1347286070.67237283.1444295350433.JavaMail.zimbra@redhat.com> References: <1512777106.65406325.1444212750519.JavaMail.zimbra@redhat.com> <1347286070.67237283.1444295350433.JavaMail.zimbra@redhat.com> Message-ID: > Would we want to keep all passed-ci repos over time? It should not be too hard to make the promote script fetch all packages in delorean-liberty-testing and delorean-common-testing during promotion and rebuild the repo with them. However, it could increase storage usage (~330 MB at the moment). This might be shaved a bit if we'd use hardlinking but I would not invest too much effort into this, using -testing repos is temporary during developement. Proposed production CI flow would be: 1. builds are tagged into CBS -testing tag from rdoupdate yml in gerrit 2. this triggers cbs repos regen which would trigger promotion job on ci.centos.org 3. on pass builds are tagged into -release tag and announcement auto-generated using openstack/reno and description in rdoupdate yaml 4. release is signed and published to mirror.centos.org This CI should ensure that deps is reliable and can be treated same as base OS. Cheers, Alan From amedeo.salvati at fastweb.it Thu Oct 8 10:02:35 2015 From: amedeo.salvati at fastweb.it (Salvati Amedeo) Date: Thu, 8 Oct 2015 10:02:35 +0000 Subject: [Rdo-list] R: Jumbo MTU to instances in Kilo? 
In-Reply-To: <5615B045.1070503@soe.ucsc.edu> References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu> <5615B045.1070503@soe.ucsc.edu> Message-ID: <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc> Eric, also, to set jumbo frames on your env, you have to set mtu from VM to controller: # echo "dhcp-option-force=26,8900" > /etc/neutron/dnsmasq-neutron.conf # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf # openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent veth_mtu 8900 # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT network_device_mtu 9000 # openstack-config --set /etc/nova/nova.conf DEFAULT network_device_mtu 9000 <--- this on every nova-compute take a look at l3_agent.ini file, without network_device_mtu every new router will use default mtu at 1500 # ip netns exec qrouter-26f64a08-52ab-4643-b903-9aea6eae047a /bin/bash # ip a | grep mtu 1: lo: mtu 65536 qdisc noqueue state UNKNOWN 69: ha-89546945-ab: mtu 9000 qdisc noqueue state UNKNOWN 74: qr-f207f652-da: mtu 9000 qdisc noqueue state UNKNOWN 81: qg-ab978cd0-ad: mtu 9000 qdisc noqueue state UNKNOWN HTH Amedeo -----Messaggio originale----- Da: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] Per conto di Erich Weiler Inviato: gioved? 8 ottobre 2015 01:53 A: Pedro Navarro Perez Cc: rdo-list at redhat.com Oggetto: Re: [Rdo-list] Jumbo MTU to instances in Kilo? Actually I was wrong, it WAS on the network node. The virtual router interfaces were not set to MTU=9000. On network node: [root at os-net-01 ~]# ip netns qdhcp-c395cff9-af7b-4456-91e3-3c55e6c2c5f5 qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 i[root at os-net-01 ~]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 0 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 qg-fa1e2a28-25: flags=4163 mtu 1500 inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) RX packets 34071065 bytes 5046408745 (4.6 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 442 bytes 51915 (50.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 qr-51904c89-b8: flags=4163 mtu 1500 inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) RX packets 702 bytes 75369 (73.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 814 bytes 92259 (90.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 I can fix it manually: [root at os-net-01 neutron]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qg-fa1e2a28-25 mtu 9000 [root at os-net-01 neutron]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qr-51904c89-b8 mtu 9000 [root at os-net-01 neutron]# ip netns exec qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 0 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 
B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 qg-fa1e2a28-25: flags=4163 mtu 9000 inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) RX packets 34086053 bytes 5048637833 (4.7 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 442 bytes 51915 (50.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 qr-51904c89-b8: flags=4163 mtu 9000 inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) RX packets 702 bytes 75369 (73.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 814 bytes 92259 (90.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 And then I have a jumbo clean path everywhere! All is good then. But... How to set this in a config file or something so I don't have to do it manually? I found this bug report: https://bugs.launchpad.net/neutron/+bug/1311097 Anyone know if that bug is still out there? Or how can I set virtual router interfaces MTU by default when I create the router? cheers, erich On 10/07/2015 04:35 PM, Erich Weiler wrote: > Actually I think I'm closer - on the compute nodes, I set this in > nova.conf: > > network_device_mtu=9000 > > even though there was a big note above it that said not to use it > because this option was deprecated. But after setting that option, > and restarting nova and openvswitch, br-int, my tap device and my qvb > device all got set to MTU=9000. So I'm closer! But still one item is > blocking me. I show this tracepath from my controller node direct to > the VM (which is on a compute node on the local network): > > # tracepath 10.50.100.4 > 1?: [LOCALHOST] pmtu 9000 > 1: 10.50.100.4 0.682ms > 1: 10.50.100.4 0.241ms > 2: 10.50.100.4 0.297ms pmtu > 1500 > 2: 10.50.100.4 1.664ms reached > > 10.50.100.4 is the VM. It looks like the path is jumbo clean up until > that third hop. But the thing is, I don't know what the third hop is. 
> ;) > > On my compute node I still see some stuff with MTU=1500, but I'm not > sure if one of those is blocking me: > > # ifconfig > br-enp3s0f0: flags=4163 mtu 9000 > inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 > ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) > RX packets 2401498 bytes 359284253 (342.6 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 30 bytes 1572 (1.5 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > br-int: flags=4163 mtu 9000 > inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid 0x20 > ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) > RX packets 133 bytes 12934 (12.6 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 8 bytes 648 (648.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp3s0f0: flags=4419 mtu 9000 > inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 > ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) > RX packets 165957142 bytes 20333410092 (18.9 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 23299881 bytes 5950708819 (5.5 GiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp3s0f0.50: flags=4163 mtu 9000 > inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 > inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 > ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) > RX packets 6014767 bytes 813880745 (776.1 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 79301 bytes 19052451 (18.1 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 22462729 bytes 1202484822 (1.1 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 22462729 bytes 1202484822 (1.1 GiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qbr922bd9f5-bb: flags=4163 mtu 9000 > inet6 fe80::4c1a:55ff:feba:14c3 prefixlen 64 scopeid 0x20 > ether 56:a6:a6:db:83:c4 txqueuelen 0 (Ethernet) > RX packets 16 bytes 1520 (1.4 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 8 bytes 648 (648.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qbrf42ea01f-fe: flags=4163 mtu 1500 > inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid 0x20 > ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) > RX packets 15 bytes 1456 (1.4 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 8 bytes 648 (648.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvb922bd9f5-bb: flags=4419 > mtu > 9000 > inet6 fe80::54a6:a6ff:fedb:83c4 prefixlen 64 scopeid 0x20 > ether 56:a6:a6:db:83:c4 txqueuelen 1000 (Ethernet) > RX packets 86 bytes 9610 (9.3 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 133 bytes 12767 (12.4 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvbf42ea01f-fe: flags=4419 > mtu > 1500 > inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid 0x20 > ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) > RX packets 377 bytes 57664 (56.3 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 333 bytes 38765 (37.8 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvo922bd9f5-bb: flags=4419 > mtu > 9000 > inet6 fe80::b44a:bff:fe72:aaea prefixlen 64 scopeid 0x20 > ether b6:4a:0b:72:aa:ea txqueuelen 1000 (Ethernet) > RX packets 133 bytes 12767 (12.4 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 86 bytes 9610 (9.3 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qvof42ea01f-fe: flags=4419 
> mtu > 1500 > inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid 0x20 > ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) > RX packets 333 bytes 38765 (37.8 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 377 bytes 57664 (56.3 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > tap922bd9f5-bb: flags=4163 mtu 9000 > inet6 fe80::fc16:3eff:fefa:9945 prefixlen 64 scopeid 0x20 > ether fe:16:3e:fa:99:45 txqueuelen 500 (Ethernet) > RX packets 118 bytes 11561 (11.2 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 95 bytes 10316 (10.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > virbr0: flags=4099 mtu 1500 > inet 192.168.122.1 netmask 255.255.255.0 broadcast > 192.168.122.255 > ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) > RX packets 0 bytes 0 (0.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 0 bytes 0 (0.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > My network node has all interfaces set to MTU=9000. I thought maybe the > bottleneck might be there but I don't think it is. Here's ifconfig > from my network node: > > # ifconfig > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 2042 bytes 238727 (233.1 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2042 bytes 238727 (233.1 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > p1p2: flags=4163 mtu 9000 > inet6 fe80::207:43ff:fe10:deb8 prefixlen 64 scopeid 0x20 > ether 00:07:43:10:de:b8 txqueuelen 1000 (Ethernet) > RX packets 2156053308 bytes 325330839639 (302.9 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 223004 bytes 24769304 (23.6 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > device interrupt 72 > > p2p1: flags=4163 mtu 9000 > inet 10.50.1.51 netmask 255.255.0.0 broadcast 10.50.255.255 > inet6 fe80::260:ddff:fe44:2aea prefixlen 64 scopeid 0x20 > ether 00:60:dd:44:2a:ea txqueuelen 1000 (Ethernet) > RX packets 49352916 bytes 3501547231 (3.2 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 18876911 bytes 3768900461 (3.5 GiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > p2p2: flags=4163 mtu 9000 > inet6 fe80::260:ddff:fe44:2aeb prefixlen 64 scopeid 0x20 > ether 00:60:dd:44:2a:eb txqueuelen 1000 (Ethernet) > RX packets 2491224974 bytes 348058319500 (324.1 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 1597 bytes 204525 (199.7 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > Any way I can figure out what the third hop is from my tracepath? > > Thanks as always for the sage advice! > > -erich > > On 10/07/2015 09:57 AM, Erich Weiler wrote: >> Yeah, I made the changes and then recreated all the networks. For >> some reason br-int and the individual virtual instance interfaces on >> the compute node still show 1500 byte frames. >> >> Has anyone else configured jumbo frames in a Kilo environment? Or >> maybe I'm just an outlier... ;) >> >> -erich >> >> On 10/07/2015 01:46 AM, Pedro Navarro Perez wrote: >>> Hi Erich, >>> >>> did you recreate the neutron networks after the configuration changes? 
>>> >>> Pedro Navarro P?rez >>> OpenStack product specialist >>> Red Hat Iberia >>> Passeig de Gr?cia 120, >>> 08008 Barcelona >>> Spain >>> M +34 639 642 379 >>> E pnavarro at redhat.com >>> >>> ----- Original Message ----- >>> From: "Erich Weiler" >>> To: rdo-list at redhat.com >>> Sent: Wednesday, 7 October, 2015 2:34:28 AM >>> Subject: [Rdo-list] Jumbo MTU to instances in Kilo? >>> >>> Hi Y'all, >>> >>> I know someone must have figured this one out, but I can't seem to >>> get >>> 9000 byte MTUs working. I have it set in plugin.ini, etc, my nodes >>> have >>> MTU=9000 on their interfaces, so does the network node. dnsmasq >>> also is configured to set MTU=9000 on instances, which works. But I >>> still can't ping with large packets to my instance: >>> >>> [weiler at stacker ~]$ ping 10.50.100.2 PING 10.50.100.2 (10.50.100.2) >>> 56(84) bytes of data. >>> 64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms >>> 64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms >>> 64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms >>> >>> That works fine. This however doesn't work: >>> >>> [root at stacker ~]# ping -M do -s 8000 10.50.100.2 PING 10.50.100.2 >>> (10.50.100.2) 8000(8028) bytes of data. >>> From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500) >>> ping: local error: Message too long, mtu=1500 >>> ping: local error: Message too long, mtu=1500 >>> ping: local error: Message too long, mtu=1500 >>> ping: local error: Message too long, mtu=1500 >>> >>> It looks like somehow the br-int interface for OVS isn't set at >>> 9000, but I can't figure out how to do that... >>> >>> Here's ifconfig on my compute node: >>> >>> br-enp3s0f0: flags=4163 mtu 9000 >>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>> 0x20 >>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>> RX packets 2401432 bytes 359276713 (342.6 MiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 30 bytes 1572 (1.5 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> br-int: flags=4163 mtu 1500 >>> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid >>> 0x20 >>> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >>> RX packets 69 bytes 6866 (6.7 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 8 bytes 648 (648.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> enp3s0f0: flags=4419 mtu 9000 >>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>> 0x20 >>> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >>> RX packets 130174458 bytes 15334807929 (14.2 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 22919305 bytes 5859090420 (5.4 GiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> enp3s0f0.50: flags=4163 mtu 9000 >>> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>> 0x20 >>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>> RX packets 38429352 bytes 5152853436 (4.7 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 419842 bytes 101161981 (96.4 MiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> lo: flags=73 mtu 65536 >>> inet 127.0.0.1 netmask 255.0.0.0 >>> inet6 ::1 prefixlen 128 scopeid 0x10 >>> loop txqueuelen 0 (Local Loopback) >>> RX packets 22141566 bytes 1185622090 (1.1 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 22141566 bytes 1185622090 (1.1 GiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qbr247da3ed-a4: flags=4163 mtu 1500 >>> inet6 
fe80::5c8f:c0ff:fe79:bc11 prefixlen 64 scopeid >>> 0x20 >>> ether b6:1f:54:3f:3d:48 txqueuelen 0 (Ethernet) >>> RX packets 16 bytes 1472 (1.4 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 8 bytes 648 (648.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qbrf42ea01f-fe: flags=4163 mtu 1500 >>> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid >>> 0x20 >>> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >>> RX packets 15 bytes 1456 (1.4 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 8 bytes 648 (648.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvb247da3ed-a4: flags=4419 >>> mtu 1500 >>> inet6 fe80::b41f:54ff:fe3f:3d48 prefixlen 64 scopeid >>> 0x20 >>> ether b6:1f:54:3f:3d:48 txqueuelen 1000 (Ethernet) >>> RX packets 247 bytes 28323 (27.6 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 233 bytes 25355 (24.7 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvbf42ea01f-fe: flags=4419 >>> mtu 1500 >>> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid >>> 0x20 >>> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >>> RX packets 377 bytes 57664 (56.3 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 333 bytes 38765 (37.8 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvo247da3ed-a4: flags=4419 >>> mtu 1500 >>> inet6 fe80::dcfa:f1ff:fe03:ee88 prefixlen 64 scopeid >>> 0x20 >>> ether de:fa:f1:03:ee:88 txqueuelen 1000 (Ethernet) >>> RX packets 233 bytes 25355 (24.7 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 247 bytes 28323 (27.6 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvof42ea01f-fe: flags=4419 >>> mtu 1500 >>> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid >>> 0x20 >>> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >>> RX packets 333 bytes 38765 (37.8 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 377 bytes 57664 (56.3 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> tap247da3ed-a4: flags=4163 mtu 1500 >>> inet6 fe80::fc16:3eff:fede:5eea prefixlen 64 scopeid >>> 0x20 >>> ether fe:16:3e:de:5e:ea txqueuelen 500 (Ethernet) >>> RX packets 219 bytes 24239 (23.6 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 224 bytes 26661 (26.0 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> virbr0: flags=4099 mtu 1500 >>> inet 192.168.122.1 netmask 255.255.255.0 broadcast >>> 192.168.122.255 >>> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >>> RX packets 0 bytes 0 (0.0 B) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 0 bytes 0 (0.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> This is on RHEL 7.1. Any obvious way I can get all the intermediate >>> bridges to MTU=9000? I've RTFM'd and googled to no avail... 
>>> >>> Here's the ovs-vsctl outout: >>> >>> [root at node-136 ~]# ovs-vsctl show >>> 6f5a5f00-59e2-4420-aeaf-7ad464ead232 >>> Bridge br-int >>> fail_mode: secure >>> Port br-int >>> Interface br-int >>> type: internal >>> Port "qvo247da3ed-a4" >>> tag: 1 >>> Interface "qvo247da3ed-a4" >>> Port "int-br-eth1" >>> Interface "int-br-eth1" >>> Port "int-br-enp3s0f0" >>> Interface "int-br-enp3s0f0" >>> type: patch >>> options: {peer="phy-br-enp3s0f0"} >>> Bridge "br-enp3s0f0" >>> Port "enp3s0f0" >>> Interface "enp3s0f0" >>> Port "br-enp3s0f0" >>> Interface "br-enp3s0f0" >>> type: internal >>> Port "phy-br-enp3s0f0" >>> Interface "phy-br-enp3s0f0" >>> type: patch >>> options: {peer="int-br-enp3s0f0"} >>> ovs_version: "2.3.1" >>> >>> Many thanks if anyone has any information on this topic! Or can >>> point me to some documentation I missed... >>> >>> Thanks, >>> erich >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Thu Oct 8 11:19:50 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 8 Oct 2015 13:19:50 +0200 Subject: [Rdo-list] delorean.repo vs delorean-deps.repo In-Reply-To: References: <1512777106.65406325.1444212750519.JavaMail.zimbra@redhat.com> <1347286070.67237283.1444295350433.JavaMail.zimbra@redhat.com> Message-ID: ... > 4. release is signed and published to mirror.centos.org > > This CI should ensure that deps is reliable and can be treated same as base OS. And after chatting w/ KB on #centos-devel we have a solution for archiving complete set of deps including CentOS base for CI passes: repos on mirror.centos.org keep all the old builds, not just latest, so job would need to save only repodata for all repos on mirror.c.o to get exact set of packages which passed CI. Cheers, Alan From trown at redhat.com Thu Oct 8 11:29:09 2015 From: trown at redhat.com (John Trowbridge) Date: Thu, 8 Oct 2015 07:29:09 -0400 Subject: [Rdo-list] Nodes introspection In-Reply-To: <561626E5.7070300@redhat.com> References: <1536062897.38333007.1444258283111.JavaMail.zimbra@redhat.com> <561626E5.7070300@redhat.com> Message-ID: <56165385.7080709@redhat.com> On 10/08/2015 04:18 AM, Dmitry Tantsur wrote: > On 10/08/2015 12:51 AM, Marius Cornea wrote: >> Hi everyone, >> >> A couple of questions regarding introspection (following the upstream >> docs): >> >> 1. Is Swift still used for storing nodes introspection details? I see >> no Swift object resulting after running introspection. > > I think it's temporary disabled due to transition discoverd -> > inspector, and will be reenabled as soon as we fix the high priority > issues. > The short answer is that there is now a store_data option in the [processing] section that controls this behavior. It defaults to none, but can be set to 'swift'. Right now, we are not collecting very much, so there is not really much point in defaulting it to 'swift', but I plan on follow-up patches to the puppet support for inspector [1][2] to add the ability to specify extra collectors (logging, python-hardware, etc.) along with defaulting to storing data in swift. 
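In the meantime, if you want the data stored now, something along these
lines on the undercloud should do it. This is only a sketch, not a
documented procedure, and I'm assuming the RDO packaging here (config in
/etc/ironic-inspector/inspector.conf, service named
openstack-ironic-inspector):

    # set store_data = swift in the [processing] section
    sudo openstack-config --set /etc/ironic-inspector/inspector.conf \
        processing store_data swift
    # restart the service so the new option takes effect
    sudo systemctl restart openstack-ironic-inspector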
I would really prefer if the one remaining critical fix [3] is the last
patch to the inspector bash element, as trying to develop both in
parallel has been a bit cumbersome.

>>
>> 2. During introspection I can see error messages on the VMs
>> console (running a virt environment) but it's hard to record them since
>> the VM quickly turns off. What's the proper way to debug this?
>
> It is ironic-python-agent, right? If so, I've noted that it always
> fails to connect to inspector. Then it is restarted by systemd, and the
> second attempt succeeds. We'll have a procedure in place to collect logs
> from the ramdisk, but it's not wired in yet either. Anyway, if the
> introspection works at all, you should not be too concerned for now :)
>
>>
>> 3. The profile matching docs[1] reference ironic-discoverd which has
>> now become ironic-inspector. How can we make this work?
>> s/discoverd/inspector/ ?
>
> Profile matching is in an especially uncertain state right now. Thanks
> for the reminder; we should start sorting it out as soon as
> introspection is part of the gate again.

Ya, we are not even collecting the data that was previously used for
profile matching right now. Getting working RDO CI has taken priority
for me over this.

[1] https://review.openstack.org/#/c/223690/
[2] https://review.openstack.org/#/c/228190/
[3] https://review.openstack.org/#/c/231656/

From rasca at redhat.com  Thu Oct  8 14:15:59 2015
From: rasca at redhat.com (Raoul Scarazzini)
Date: Thu, 8 Oct 2015 16:15:59 +0200
Subject: [Rdo-list] How to know which galera server haproxy is pointing
	in an overcloud HA env
In-Reply-To: <1173238997.38020056.1444231271758.JavaMail.zimbra@redhat.com>
References: <5614F7CC.6040808@redhat.com>
	<1852667655.37767349.1444216050936.JavaMail.zimbra@redhat.com>
	<5614FF90.7090908@redhat.com>
	<1173238997.38020056.1444231271758.JavaMail.zimbra@redhat.com>
Message-ID: <56167A9F.2080705@redhat.com>

On 7/10/2015 17:21:11, Marius Cornea wrote:
> I was just suggesting a way to see to which of the backend nodes
> haproxy is directing the traffic. Please see the attachment.

Thanks again Marius,
from your point of view, does this script make sense?

#!/bin/bash

# haproxy bind address
VIP=$1

# Associative array for controller -> bytes list
declare -A controllers

function get_stats {
    # 2nd field -> controller name | 10th field -> bytes out
    stats=$(echo "show stat" | socat /var/run/haproxy stdio | grep mysql,overcloud | cut -f2,10 -d,)
}

get_stats

# Put the first byte values in the array
for line in $stats
do
    controller=$(echo $line | cut -f1 -d,)
    controllers[$controller]=$(echo $line | cut -f2 -d,)
done

# Do something (nothing) on the VIP's db, just to generate some traffic
mysql -u nonexistant -h $VIP &> /dev/null

get_stats

# Compare the stats; the one that differs is the master
for controller in ${!controllers[@]}
do
    value2=$(echo "$stats" | grep $controller | cut -f2 -d,)
    [ ${controllers[$controller]} -ne $value2 ] && echo "$controller is MASTER" || echo "$controller is slave"
done

I know it's ugly, but since we don't have any other method to get that
information I don't see any other solution. Of course it can be adapted
to get values from http instead of the socket (which by default is not
enabled).

What do you think?
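An even shorter variant along the same lines could skip the double
sampling. This is just a sketch, assuming the active/backup haproxy
layout (only one backend holds connections at a time) and backends named
like overcloud-controller-*: scur, the current-session counter in the
5th field of the CSV stats, should be non-zero only on the node actually
taking the traffic:

    echo "show stat" | socat /var/run/haproxy stdio \
        | awk -F, '/^mysql,overcloud/ && $5 > 0 {print $2 " is MASTER"}'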
Thanks a lot, -- Raoul Scarazzini rasca at redhat.com From ibravo at ltgfederal.com Thu Oct 8 15:29:51 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Thu, 8 Oct 2015 11:29:51 -0400 Subject: [Rdo-list] Check progress of overcloud deployment Message-ID: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com> Hi, What is the best way to track the progress of the overcloud installation via RDO Manager? I was looking at the heat logs on /var/log/heat but they were too verbose. Ideas? Thanks, IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sasha at redhat.com Thu Oct 8 15:35:35 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Thu, 8 Oct 2015 11:35:35 -0400 (EDT) Subject: [Rdo-list] Check progress of overcloud deployment In-Reply-To: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com> References: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com> Message-ID: <596182160.53491775.1444318535350.JavaMail.zimbra@redhat.com> Hi, How about: heat resource-list -n 5 overcloud|grep -v COMPLETE Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Ignacio Bravo" > To: "rdo-list" > Sent: Thursday, October 8, 2015 11:29:51 AM > Subject: [Rdo-list] Check progress of overcloud deployment > > Hi, > > What is the best way to track the progress of the overcloud installation via > RDO Manager? > I was looking at the heat logs on /var/log/heat but they were too verbose. > Ideas? > > Thanks, > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mcornea at redhat.com Thu Oct 8 15:51:27 2015 From: mcornea at redhat.com (Marius Cornea) Date: Thu, 8 Oct 2015 11:51:27 -0400 (EDT) Subject: [Rdo-list] Check progress of overcloud deployment In-Reply-To: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com> References: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com> Message-ID: <1731802919.38827290.1444319487443.JavaMail.zimbra@redhat.com> Hi, I do watch -n1 'heat stack-list -n | grep PROGRESS' ----- Original Message ----- > From: "Ignacio Bravo" > To: "rdo-list" > Sent: Thursday, October 8, 2015 5:29:51 PM > Subject: [Rdo-list] Check progress of overcloud deployment > > Hi, > > What is the best way to track the progress of the overcloud installation via > RDO Manager? > I was looking at the heat logs on /var/log/heat but they were too verbose. > Ideas? > > Thanks, > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From weiler at soe.ucsc.edu Thu Oct 8 16:22:56 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 8 Oct 2015 09:22:56 -0700 Subject: [Rdo-list] R: Jumbo MTU to instances in Kilo? 
In-Reply-To: <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc>
References: <56146894.2020107@soe.ucsc.edu>
	<172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com>
	<56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu>
	<5615B045.1070503@soe.ucsc.edu>
	<751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc>
Message-ID: <56169860.2000404@soe.ucsc.edu>

Thanks Amedeo,

The bit about the config item in the l3_agent.ini file is new to me - I
couldn't find that in the documentation, or even as a comment in the
file as a config option. If it is a config item as you point out, maybe
it should have a commented section in l3_agent.ini?

Thanks for the insight!

cheers,
erich

On 10/08/2015 03:02 AM, Salvati Amedeo wrote:
> Eric,
>
> also, to set jumbo frames on your env, you have to set the MTU all the
> way from the VM to the controller:
>
> # echo "dhcp-option-force=26,8900" > /etc/neutron/dnsmasq-neutron.conf
> # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
> # openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent veth_mtu 8900
> # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT network_device_mtu 9000
> # openstack-config --set /etc/nova/nova.conf DEFAULT network_device_mtu 9000   <--- this on every nova-compute
>
> take a look at the l3_agent.ini file: without network_device_mtu, every
> new router will use the default MTU of 1500
>
> # ip netns exec qrouter-26f64a08-52ab-4643-b903-9aea6eae047a /bin/bash
> # ip a | grep mtu
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> 69: ha-89546945-ab: mtu 9000 qdisc noqueue state UNKNOWN
> 74: qr-f207f652-da: mtu 9000 qdisc noqueue state UNKNOWN
> 81: qg-ab978cd0-ad: mtu 9000 qdisc noqueue state UNKNOWN
>
> HTH
> Amedeo
>
> -----Original Message-----
> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On behalf of Erich Weiler
> Sent: Thursday, 8 October 2015 01:53
> To: Pedro Navarro Perez
> Cc: rdo-list at redhat.com
> Subject: Re: [Rdo-list] Jumbo MTU to instances in Kilo?
>
> Actually I was wrong, it WAS on the network node. The virtual router
> interfaces were not set to MTU=9000.
On network node: > > [root at os-net-01 ~]# ip netns > qdhcp-c395cff9-af7b-4456-91e3-3c55e6c2c5f5 > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 > > i[root at os-net-01 ~]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 0 bytes 0 (0.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 0 bytes 0 (0.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qg-fa1e2a28-25: flags=4163 mtu 1500 > inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 > inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 > ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) > RX packets 34071065 bytes 5046408745 (4.6 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 442 bytes 51915 (50.6 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qr-51904c89-b8: flags=4163 mtu 1500 > inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 > inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 > ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) > RX packets 702 bytes 75369 (73.6 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 814 bytes 92259 (90.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > I can fix it manually: > > [root at os-net-01 neutron]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qg-fa1e2a28-25 mtu > 9000 > [root at os-net-01 neutron]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qr-51904c89-b8 mtu > 9000 > [root at os-net-01 neutron]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 0 bytes 0 (0.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 0 bytes 0 (0.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qg-fa1e2a28-25: flags=4163 mtu 9000 > inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 > inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 > ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) > RX packets 34086053 bytes 5048637833 (4.7 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 442 bytes 51915 (50.6 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qr-51904c89-b8: flags=4163 mtu 9000 > inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 > inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 > ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) > RX packets 702 bytes 75369 (73.6 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 814 bytes 92259 (90.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > And then I have a jumbo clean path everywhere! All is good then. > But... How to set this in a config file or something so I don't have to do it manually? > > I found this bug report: > > https://bugs.launchpad.net/neutron/+bug/1311097 > > Anyone know if that bug is still out there? Or how can I set virtual router interfaces MTU by default when I create the router? > > cheers, > erich > > On 10/07/2015 04:35 PM, Erich Weiler wrote: >> Actually I think I'm closer - on the compute nodes, I set this in >> nova.conf: >> >> network_device_mtu=9000 >> >> even though there was a big note above it that said not to use it >> because this option was deprecated. 
But after setting that option, >> and restarting nova and openvswitch, br-int, my tap device and my qvb >> device all got set to MTU=9000. So I'm closer! But still one item is >> blocking me. I show this tracepath from my controller node direct to >> the VM (which is on a compute node on the local network): >> >> # tracepath 10.50.100.4 >> 1?: [LOCALHOST] pmtu 9000 >> 1: 10.50.100.4 0.682ms >> 1: 10.50.100.4 0.241ms >> 2: 10.50.100.4 0.297ms pmtu >> 1500 >> 2: 10.50.100.4 1.664ms reached >> >> 10.50.100.4 is the VM. It looks like the path is jumbo clean up until >> that third hop. But the thing is, I don't know what the third hop is. >> ;) >> >> On my compute node I still see some stuff with MTU=1500, but I'm not >> sure if one of those is blocking me: >> >> # ifconfig >> br-enp3s0f0: flags=4163 mtu 9000 >> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >> RX packets 2401498 bytes 359284253 (342.6 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 30 bytes 1572 (1.5 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> br-int: flags=4163 mtu 9000 >> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid 0x20 >> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >> RX packets 133 bytes 12934 (12.6 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 8 bytes 648 (648.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> enp3s0f0: flags=4419 mtu 9000 >> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >> RX packets 165957142 bytes 20333410092 (18.9 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 23299881 bytes 5950708819 (5.5 GiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> enp3s0f0.50: flags=4163 mtu 9000 >> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >> RX packets 6014767 bytes 813880745 (776.1 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 79301 bytes 19052451 (18.1 MiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 22462729 bytes 1202484822 (1.1 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 22462729 bytes 1202484822 (1.1 GiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qbr922bd9f5-bb: flags=4163 mtu 9000 >> inet6 fe80::4c1a:55ff:feba:14c3 prefixlen 64 scopeid 0x20 >> ether 56:a6:a6:db:83:c4 txqueuelen 0 (Ethernet) >> RX packets 16 bytes 1520 (1.4 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 8 bytes 648 (648.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qbrf42ea01f-fe: flags=4163 mtu 1500 >> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid 0x20 >> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >> RX packets 15 bytes 1456 (1.4 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 8 bytes 648 (648.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qvb922bd9f5-bb: flags=4419 >> mtu >> 9000 >> inet6 fe80::54a6:a6ff:fedb:83c4 prefixlen 64 scopeid 0x20 >> ether 56:a6:a6:db:83:c4 txqueuelen 1000 (Ethernet) >> RX packets 86 bytes 9610 (9.3 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 133 bytes 12767 (12.4 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 
collisions 0 >> >> qvbf42ea01f-fe: flags=4419 >> mtu >> 1500 >> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid 0x20 >> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >> RX packets 377 bytes 57664 (56.3 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 333 bytes 38765 (37.8 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qvo922bd9f5-bb: flags=4419 >> mtu >> 9000 >> inet6 fe80::b44a:bff:fe72:aaea prefixlen 64 scopeid 0x20 >> ether b6:4a:0b:72:aa:ea txqueuelen 1000 (Ethernet) >> RX packets 133 bytes 12767 (12.4 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 86 bytes 9610 (9.3 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qvof42ea01f-fe: flags=4419 >> mtu >> 1500 >> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid 0x20 >> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >> RX packets 333 bytes 38765 (37.8 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 377 bytes 57664 (56.3 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> tap922bd9f5-bb: flags=4163 mtu 9000 >> inet6 fe80::fc16:3eff:fefa:9945 prefixlen 64 scopeid 0x20 >> ether fe:16:3e:fa:99:45 txqueuelen 500 (Ethernet) >> RX packets 118 bytes 11561 (11.2 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 95 bytes 10316 (10.0 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> virbr0: flags=4099 mtu 1500 >> inet 192.168.122.1 netmask 255.255.255.0 broadcast >> 192.168.122.255 >> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >> RX packets 0 bytes 0 (0.0 B) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 0 bytes 0 (0.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> My network node has all interfaces set to MTU=9000. I thought maybe the >> bottleneck might be there but I don't think it is. Here's ifconfig >> from my network node: >> >> # ifconfig >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 2042 bytes 238727 (233.1 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 2042 bytes 238727 (233.1 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> p1p2: flags=4163 mtu 9000 >> inet6 fe80::207:43ff:fe10:deb8 prefixlen 64 scopeid 0x20 >> ether 00:07:43:10:de:b8 txqueuelen 1000 (Ethernet) >> RX packets 2156053308 bytes 325330839639 (302.9 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 223004 bytes 24769304 (23.6 MiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> device interrupt 72 >> >> p2p1: flags=4163 mtu 9000 >> inet 10.50.1.51 netmask 255.255.0.0 broadcast 10.50.255.255 >> inet6 fe80::260:ddff:fe44:2aea prefixlen 64 scopeid 0x20 >> ether 00:60:dd:44:2a:ea txqueuelen 1000 (Ethernet) >> RX packets 49352916 bytes 3501547231 (3.2 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 18876911 bytes 3768900461 (3.5 GiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> p2p2: flags=4163 mtu 9000 >> inet6 fe80::260:ddff:fe44:2aeb prefixlen 64 scopeid 0x20 >> ether 00:60:dd:44:2a:eb txqueuelen 1000 (Ethernet) >> RX packets 2491224974 bytes 348058319500 (324.1 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 1597 bytes 204525 (199.7 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> Any way I can figure out what the third hop is from my tracepath? >> >> Thanks as always for the sage advice! 
>> >> -erich >> >> On 10/07/2015 09:57 AM, Erich Weiler wrote: >>> Yeah, I made the changes and then recreated all the networks. For >>> some reason br-int and the individual virtual instance interfaces on >>> the compute node still show 1500 byte frames. >>> >>> Has anyone else configured jumbo frames in a Kilo environment? Or >>> maybe I'm just an outlier... ;) >>> >>> -erich >>> >>> On 10/07/2015 01:46 AM, Pedro Navarro Perez wrote: >>>> Hi Erich, >>>> >>>> did you recreate the neutron networks after the configuration changes? >>>> >>>> Pedro Navarro P?rez >>>> OpenStack product specialist >>>> Red Hat Iberia >>>> Passeig de Gr?cia 120, >>>> 08008 Barcelona >>>> Spain >>>> M +34 639 642 379 >>>> E pnavarro at redhat.com >>>> >>>> ----- Original Message ----- >>>> From: "Erich Weiler" >>>> To: rdo-list at redhat.com >>>> Sent: Wednesday, 7 October, 2015 2:34:28 AM >>>> Subject: [Rdo-list] Jumbo MTU to instances in Kilo? >>>> >>>> Hi Y'all, >>>> >>>> I know someone must have figured this one out, but I can't seem to >>>> get >>>> 9000 byte MTUs working. I have it set in plugin.ini, etc, my nodes >>>> have >>>> MTU=9000 on their interfaces, so does the network node. dnsmasq >>>> also is configured to set MTU=9000 on instances, which works. But I >>>> still can't ping with large packets to my instance: >>>> >>>> [weiler at stacker ~]$ ping 10.50.100.2 PING 10.50.100.2 (10.50.100.2) >>>> 56(84) bytes of data. >>>> 64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms >>>> 64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms >>>> 64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms >>>> >>>> That works fine. This however doesn't work: >>>> >>>> [root at stacker ~]# ping -M do -s 8000 10.50.100.2 PING 10.50.100.2 >>>> (10.50.100.2) 8000(8028) bytes of data. >>>> From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500) >>>> ping: local error: Message too long, mtu=1500 >>>> ping: local error: Message too long, mtu=1500 >>>> ping: local error: Message too long, mtu=1500 >>>> ping: local error: Message too long, mtu=1500 >>>> >>>> It looks like somehow the br-int interface for OVS isn't set at >>>> 9000, but I can't figure out how to do that... 
>>>> >>>> Here's ifconfig on my compute node: >>>> >>>> br-enp3s0f0: flags=4163 mtu 9000 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>> 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>> RX packets 2401432 bytes 359276713 (342.6 MiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 30 bytes 1572 (1.5 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> br-int: flags=4163 mtu 1500 >>>> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid >>>> 0x20 >>>> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >>>> RX packets 69 bytes 6866 (6.7 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> enp3s0f0: flags=4419 mtu 9000 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>> 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >>>> RX packets 130174458 bytes 15334807929 (14.2 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 22919305 bytes 5859090420 (5.4 GiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> enp3s0f0.50: flags=4163 mtu 9000 >>>> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>> 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>> RX packets 38429352 bytes 5152853436 (4.7 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 419842 bytes 101161981 (96.4 MiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> lo: flags=73 mtu 65536 >>>> inet 127.0.0.1 netmask 255.0.0.0 >>>> inet6 ::1 prefixlen 128 scopeid 0x10 >>>> loop txqueuelen 0 (Local Loopback) >>>> RX packets 22141566 bytes 1185622090 (1.1 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 22141566 bytes 1185622090 (1.1 GiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qbr247da3ed-a4: flags=4163 mtu 1500 >>>> inet6 fe80::5c8f:c0ff:fe79:bc11 prefixlen 64 scopeid >>>> 0x20 >>>> ether b6:1f:54:3f:3d:48 txqueuelen 0 (Ethernet) >>>> RX packets 16 bytes 1472 (1.4 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qbrf42ea01f-fe: flags=4163 mtu 1500 >>>> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid >>>> 0x20 >>>> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >>>> RX packets 15 bytes 1456 (1.4 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvb247da3ed-a4: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::b41f:54ff:fe3f:3d48 prefixlen 64 scopeid >>>> 0x20 >>>> ether b6:1f:54:3f:3d:48 txqueuelen 1000 (Ethernet) >>>> RX packets 247 bytes 28323 (27.6 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 233 bytes 25355 (24.7 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvbf42ea01f-fe: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid >>>> 0x20 >>>> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >>>> RX packets 377 bytes 57664 (56.3 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 333 bytes 38765 (37.8 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvo247da3ed-a4: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::dcfa:f1ff:fe03:ee88 prefixlen 64 scopeid >>>> 0x20 >>>> ether de:fa:f1:03:ee:88 txqueuelen 1000 (Ethernet) >>>> RX 
packets 233 bytes 25355 (24.7 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 247 bytes 28323 (27.6 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvof42ea01f-fe: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid >>>> 0x20 >>>> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >>>> RX packets 333 bytes 38765 (37.8 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 377 bytes 57664 (56.3 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> tap247da3ed-a4: flags=4163 mtu 1500 >>>> inet6 fe80::fc16:3eff:fede:5eea prefixlen 64 scopeid >>>> 0x20 >>>> ether fe:16:3e:de:5e:ea txqueuelen 500 (Ethernet) >>>> RX packets 219 bytes 24239 (23.6 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 224 bytes 26661 (26.0 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> virbr0: flags=4099 mtu 1500 >>>> inet 192.168.122.1 netmask 255.255.255.0 broadcast >>>> 192.168.122.255 >>>> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >>>> RX packets 0 bytes 0 (0.0 B) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 0 bytes 0 (0.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> This is on RHEL 7.1. Any obvious way I can get all the intermediate >>>> bridges to MTU=9000? I've RTFM'd and googled to no avail... >>>> >>>> Here's the ovs-vsctl outout: >>>> >>>> [root at node-136 ~]# ovs-vsctl show >>>> 6f5a5f00-59e2-4420-aeaf-7ad464ead232 >>>> Bridge br-int >>>> fail_mode: secure >>>> Port br-int >>>> Interface br-int >>>> type: internal >>>> Port "qvo247da3ed-a4" >>>> tag: 1 >>>> Interface "qvo247da3ed-a4" >>>> Port "int-br-eth1" >>>> Interface "int-br-eth1" >>>> Port "int-br-enp3s0f0" >>>> Interface "int-br-enp3s0f0" >>>> type: patch >>>> options: {peer="phy-br-enp3s0f0"} >>>> Bridge "br-enp3s0f0" >>>> Port "enp3s0f0" >>>> Interface "enp3s0f0" >>>> Port "br-enp3s0f0" >>>> Interface "br-enp3s0f0" >>>> type: internal >>>> Port "phy-br-enp3s0f0" >>>> Interface "phy-br-enp3s0f0" >>>> type: patch >>>> options: {peer="int-br-enp3s0f0"} >>>> ovs_version: "2.3.1" >>>> >>>> Many thanks if anyone has any information on this topic! Or can >>>> point me to some documentation I missed... 
>>>>
>>>> Thanks,
>>>> erich
>>>>
>>>> _______________________________________________
>>>> Rdo-list mailing list
>>>> Rdo-list at redhat.com
>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>
>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From mcornea at redhat.com Thu Oct 8 16:27:29 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Thu, 8 Oct 2015 12:27:29 -0400 (EDT)
Subject: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
In-Reply-To: <56167A9F.2080705@redhat.com>
References: <5614F7CC.6040808@redhat.com> <1852667655.37767349.1444216050936.JavaMail.zimbra@redhat.com> <5614FF90.7090908@redhat.com> <1173238997.38020056.1444231271758.JavaMail.zimbra@redhat.com> <56167A9F.2080705@redhat.com>
Message-ID: <1382555050.38849837.1444321649862.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Raoul Scarazzini"
> To: "Marius Cornea"
> Cc: rdo-list at redhat.com
> Sent: Thursday, October 8, 2015 4:15:59 PM
> Subject: Re: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
>
> Il giorno 7/10/2015 17:21:11, Marius Cornea ha scritto:
> > I was just suggesting a way to see to which of the backend nodes
> haproxy is directing the traffic. Please see the attachment.
>
> Thanks again Marius,
> from your point of view, does this script make sense?
>
> #!/bin/bash
>
> # haproxy bind address
> VIP=$1
>
> # Associative array for controller -> bytes list
> declare -A controllers
>
> function get_stats {
> # 2nd field -> controller name | 9th field -> bytes in | 10th field -> bytes out
> stats=$(echo "show stat" | socat /var/run/haproxy stdio | grep mysql,overcloud | cut -f2,10 -d,)
> }
>
> get_stats
>
> # Put the first byte values in the array
> for line in $stats
> do
> controller=$(echo $line | cut -f1 -d,)
> controllers[$controller]=$(echo $line|cut -f2 -d,)
> done
>
> # Do something (nothing) on the VIP's db
> mysql -u nonexistant -h $VIP &> /dev/null
>
> get_stats
>
> # Compare the stats: the one that differs is the master
> for controller in ${!controllers[@]}
> do
> value2=$(echo "$stats"|grep $controller|cut -f2 -d,)
> [ ${controllers[$controller]} -ne $value2 ] && echo "$controller is MASTER" || echo "$controller is slave"
> done
>
> I know it's ugly, but since we don't have any other method to get that
> information I don't see any other solution. Of course it can be adapted
> to get values from http instead of the socket (that by default is not
> enabled).
>
> What do you think?

Looks good to me, I gave it a try on my system and it did the job.

> Thanks a lot,
>
> --
> Raoul Scarazzini
> rasca at redhat.com
>
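A quicker spot check, without the write-probe trick above, is to read the
status column straight out of the same CSV stats. A rough, untested sketch,
assuming the stock haproxy CSV layout (field 2 = server name, field 18 =
status):

echo "show stat" | socat /var/run/haproxy stdio | awk -F, '$1 ~ /mysql/ {print $2, $18}'

Note this only reports what haproxy believes about each backend; for an
active/backup galera frontend the byte-counter comparison above is still
the more direct proof of where the traffic actually lands.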
From shardy at redhat.com Thu Oct 8 17:16:32 2015
From: shardy at redhat.com (Steven Hardy)
Date: Thu, 8 Oct 2015 18:16:32 +0100
Subject: [Rdo-list] Check progress of overcloud deployment
In-Reply-To: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com>
References: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com>
Message-ID: <20151008171631.GC5452@t430slt.redhat.com>

On Thu, Oct 08, 2015 at 11:29:51AM -0400, Ignacio Bravo wrote:
> Hi,
> What is the best way to track the progress of the overcloud installation
> via RDO Manager?
> I was looking at the heat logs on /var/log/heat but they were too verbose.
> Ideas?

I prefer to open a new shell and do:

heat resource-list -n5 overcloud | grep IN_PROGRESS

Basically the same as the suggestion already made by Sasha.

You can then see each stage of the deployment happen, including nested
resources.

Alternatively for less detail, just do:

heat resource-list overcloud | grep IN_PROGRESS

Or, for more detail, you can look at the events:

heat event-list -n5 overcloud (note, this is fairly inefficient at
present so takes a while to run).

It'd be good if we built something into the CLI which could automatically
show progress like this; I believe this is something that's already been
discussed and may even be in progress.

Steve

From dsneddon at redhat.com Thu Oct 8 17:21:51 2015
From: dsneddon at redhat.com (Dan Sneddon)
Date: Thu, 08 Oct 2015 10:21:51 -0700
Subject: [Rdo-list] Check progress of overcloud deployment
In-Reply-To: <20151008171631.GC5452@t430slt.redhat.com>
References: <79988324-3BDE-4C5D-BD66-AD33EFC25D84@ltgfederal.com> <20151008171631.GC5452@t430slt.redhat.com>
Message-ID: <5616A62F.4020003@redhat.com>

On 10/08/2015 10:16 AM, Steven Hardy wrote:
> On Thu, Oct 08, 2015 at 11:29:51AM -0400, Ignacio Bravo wrote:
>> Hi,
>> What is the best way to track the progress of the overcloud installation
>> via RDO Manager?
>> I was looking at the heat logs on /var/log/heat but they were too verbose.
>> Ideas?
>
> I prefer to open a new shell and do:
>
> heat resource-list -n5 overcloud | grep IN_PROGRESS
>
> Basically the same as the suggestion already made by Sasha.
>
> You can then see each stage of the deployment happen, including nested
> resources.
>
> Alternatively for less detail, just do:
>
> heat resource-list overcloud | grep IN_PROGRESS
>
> Or, for more detail, you can look at the events:
>
> heat event-list -n5 overcloud (note, this is fairly inefficient at
> present so takes a while to run).
>
> It'd be good if we built something into the CLI which could automatically
> show progress like this; I believe this is something that's already been
> discussed and may even be in progress.
>
> Steve
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

A brief twist on Steven's command-line: I like to see all states that
aren't complete. This shows the resources that go into failed state (if
any) immediately, as well as the CREATE_INIT stage that precedes
IN_PROGRESS:

heat resource-list -n5 overcloud | grep -v COMPLETE

Since this shows more states, there is a little bit more volatility to
the status display. It's a matter of personal preference how much
real-time info you want displayed.
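Either variant is easy to keep refreshing on its own; a minimal sketch
with watch(1), nothing RDO-specific about it:

watch -n 10 "heat resource-list -n5 overcloud | grep -v COMPLETE"

or, where watch isn't handy, a plain shell loop:

while true; do
    clear
    heat resource-list -n5 overcloud | grep -v COMPLETE
    sleep 10
done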
--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter

From amedeo.salvati at fastweb.it Fri Oct 9 12:24:39 2015
From: amedeo.salvati at fastweb.it (Salvati Amedeo)
Date: Fri, 9 Oct 2015 12:24:39 +0000
Subject: [Rdo-list] R: R: Jumbo MTU to instances in Kilo?
In-Reply-To: <56169860.2000404@soe.ucsc.edu>
References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu> <5615B045.1070503@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc> <56169860.2000404@soe.ucsc.edu>
Message-ID: <751EEF2DDA813143A5F20A73F5B78F382D60AF@ex022ims.fastwebit.ofc>

Erich you are welcome in the club :D

One side note: as we have rhosp and not rdo, we asked rh to document this and they wrote a solution on their kb:

https://access.redhat.com/solutions/1417133

Regards,
Amedeo

-----Messaggio originale-----
Da: Erich Weiler [mailto:weiler at soe.ucsc.edu]
Inviato: giovedì 8 ottobre 2015 18:23
A: Salvati Amedeo; Pedro Navarro Perez
Cc: rdo-list at redhat.com
Oggetto: Re: R: [Rdo-list] Jumbo MTU to instances in Kilo?

Thanks Amedeo,

The bit about the config item in the l3_agent.ini file is new to me - I
couldn't find that in the documentation, or even as a comment in the file
as a config option. If it is a config item as you point out, maybe it
should have a commented section in l3_agent.ini?

Thanks for the insight!

cheers,
erich

On 10/08/2015 03:02 AM, Salvati Amedeo wrote:
> Eric,
>
> also, to set jumbo frames on your env, you have to set mtu from VM to controller:
>
> # echo "dhcp-option-force=26,8900" > /etc/neutron/dnsmasq-neutron.conf
> # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
> # openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent veth_mtu 8900
> # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT network_device_mtu 9000
> # openstack-config --set /etc/nova/nova.conf DEFAULT network_device_mtu 9000 <--- this on every nova-compute
>
> take a look at l3_agent.ini file, without network_device_mtu every new
> router will use default mtu at 1500
>
> # ip netns exec qrouter-26f64a08-52ab-4643-b903-9aea6eae047a /bin/bash
> # ip a | grep mtu
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> 69: ha-89546945-ab: mtu 9000 qdisc
> noqueue state UNKNOWN
> 74: qr-f207f652-da: mtu 9000 qdisc
> noqueue state UNKNOWN
> 81: qg-ab978cd0-ad: mtu 9000 qdisc
> noqueue state UNKNOWN
>
> HTH
> Amedeo
>
> -----Messaggio originale-----
> Da: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com]
> Per conto di Erich Weiler
> Inviato: giovedì 8 ottobre 2015 01:53
> A: Pedro Navarro Perez
> Cc: rdo-list at redhat.com
> Oggetto: Re: [Rdo-list] Jumbo MTU to instances in Kilo?
>
> Actually I was wrong, it WAS on the network node. The virtual router interfaces were not set to MTU=9000.
On network node: > > [root at os-net-01 ~]# ip netns > qdhcp-c395cff9-af7b-4456-91e3-3c55e6c2c5f5 > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 > > i[root at os-net-01 ~]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 0 bytes 0 (0.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 0 bytes 0 (0.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qg-fa1e2a28-25: flags=4163 mtu 1500 > inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 > inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 > ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) > RX packets 34071065 bytes 5046408745 (4.6 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 442 bytes 51915 (50.6 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qr-51904c89-b8: flags=4163 mtu 1500 > inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 > inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 > ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) > RX packets 702 bytes 75369 (73.6 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 814 bytes 92259 (90.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > I can fix it manually: > > [root at os-net-01 neutron]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qg-fa1e2a28-25 > mtu > 9000 > [root at os-net-01 neutron]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qr-51904c89-b8 > mtu > 9000 > [root at os-net-01 neutron]# ip netns exec > qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 0 bytes 0 (0.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 0 bytes 0 (0.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qg-fa1e2a28-25: flags=4163 mtu 9000 > inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 > inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 > ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) > RX packets 34086053 bytes 5048637833 (4.7 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 442 bytes 51915 (50.6 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > qr-51904c89-b8: flags=4163 mtu 9000 > inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 > inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 > ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) > RX packets 702 bytes 75369 (73.6 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 814 bytes 92259 (90.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > And then I have a jumbo clean path everywhere! All is good then. > But... How to set this in a config file or something so I don't have to do it manually? > > I found this bug report: > > https://bugs.launchpad.net/neutron/+bug/1311097 > > Anyone know if that bug is still out there? Or how can I set virtual router interfaces MTU by default when I create the router? > > cheers, > erich > > On 10/07/2015 04:35 PM, Erich Weiler wrote: >> Actually I think I'm closer - on the compute nodes, I set this in >> nova.conf: >> >> network_device_mtu=9000 >> >> even though there was a big note above it that said not to use it >> because this option was deprecated. 
But after setting that option, >> and restarting nova and openvswitch, br-int, my tap device and my qvb >> device all got set to MTU=9000. So I'm closer! But still one item >> is blocking me. I show this tracepath from my controller node direct >> to the VM (which is on a compute node on the local network): >> >> # tracepath 10.50.100.4 >> 1?: [LOCALHOST] pmtu 9000 >> 1: 10.50.100.4 0.682ms >> 1: 10.50.100.4 0.241ms >> 2: 10.50.100.4 0.297ms pmtu >> 1500 >> 2: 10.50.100.4 1.664ms reached >> >> 10.50.100.4 is the VM. It looks like the path is jumbo clean up >> until that third hop. But the thing is, I don't know what the third hop is. >> ;) >> >> On my compute node I still see some stuff with MTU=1500, but I'm not >> sure if one of those is blocking me: >> >> # ifconfig >> br-enp3s0f0: flags=4163 mtu 9000 >> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >> RX packets 2401498 bytes 359284253 (342.6 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 30 bytes 1572 (1.5 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> br-int: flags=4163 mtu 9000 >> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid 0x20 >> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >> RX packets 133 bytes 12934 (12.6 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 8 bytes 648 (648.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> enp3s0f0: flags=4419 mtu 9000 >> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >> RX packets 165957142 bytes 20333410092 (18.9 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 23299881 bytes 5950708819 (5.5 GiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> enp3s0f0.50: flags=4163 mtu 9000 >> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >> RX packets 6014767 bytes 813880745 (776.1 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 79301 bytes 19052451 (18.1 MiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 22462729 bytes 1202484822 (1.1 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 22462729 bytes 1202484822 (1.1 GiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qbr922bd9f5-bb: flags=4163 mtu 9000 >> inet6 fe80::4c1a:55ff:feba:14c3 prefixlen 64 scopeid 0x20 >> ether 56:a6:a6:db:83:c4 txqueuelen 0 (Ethernet) >> RX packets 16 bytes 1520 (1.4 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 8 bytes 648 (648.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qbrf42ea01f-fe: flags=4163 mtu 1500 >> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid 0x20 >> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >> RX packets 15 bytes 1456 (1.4 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 8 bytes 648 (648.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qvb922bd9f5-bb: flags=4419 >> mtu >> 9000 >> inet6 fe80::54a6:a6ff:fedb:83c4 prefixlen 64 scopeid 0x20 >> ether 56:a6:a6:db:83:c4 txqueuelen 1000 (Ethernet) >> RX packets 86 bytes 9610 (9.3 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 133 bytes 12767 (12.4 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 
collisions 0 >> >> qvbf42ea01f-fe: flags=4419 >> mtu >> 1500 >> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid 0x20 >> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >> RX packets 377 bytes 57664 (56.3 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 333 bytes 38765 (37.8 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qvo922bd9f5-bb: flags=4419 >> mtu >> 9000 >> inet6 fe80::b44a:bff:fe72:aaea prefixlen 64 scopeid 0x20 >> ether b6:4a:0b:72:aa:ea txqueuelen 1000 (Ethernet) >> RX packets 133 bytes 12767 (12.4 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 86 bytes 9610 (9.3 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qvof42ea01f-fe: flags=4419 >> mtu >> 1500 >> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid 0x20 >> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >> RX packets 333 bytes 38765 (37.8 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 377 bytes 57664 (56.3 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> tap922bd9f5-bb: flags=4163 mtu 9000 >> inet6 fe80::fc16:3eff:fefa:9945 prefixlen 64 scopeid 0x20 >> ether fe:16:3e:fa:99:45 txqueuelen 500 (Ethernet) >> RX packets 118 bytes 11561 (11.2 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 95 bytes 10316 (10.0 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> virbr0: flags=4099 mtu 1500 >> inet 192.168.122.1 netmask 255.255.255.0 broadcast >> 192.168.122.255 >> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >> RX packets 0 bytes 0 (0.0 B) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 0 bytes 0 (0.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> My network node has all interfaces set to MTU=9000. I thought maybe the >> bottleneck might be there but I don't think it is. Here's ifconfig >> from my network node: >> >> # ifconfig >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 2042 bytes 238727 (233.1 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 2042 bytes 238727 (233.1 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> p1p2: flags=4163 mtu 9000 >> inet6 fe80::207:43ff:fe10:deb8 prefixlen 64 scopeid 0x20 >> ether 00:07:43:10:de:b8 txqueuelen 1000 (Ethernet) >> RX packets 2156053308 bytes 325330839639 (302.9 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 223004 bytes 24769304 (23.6 MiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> device interrupt 72 >> >> p2p1: flags=4163 mtu 9000 >> inet 10.50.1.51 netmask 255.255.0.0 broadcast 10.50.255.255 >> inet6 fe80::260:ddff:fe44:2aea prefixlen 64 scopeid 0x20 >> ether 00:60:dd:44:2a:ea txqueuelen 1000 (Ethernet) >> RX packets 49352916 bytes 3501547231 (3.2 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 18876911 bytes 3768900461 (3.5 GiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> p2p2: flags=4163 mtu 9000 >> inet6 fe80::260:ddff:fe44:2aeb prefixlen 64 scopeid 0x20 >> ether 00:60:dd:44:2a:eb txqueuelen 1000 (Ethernet) >> RX packets 2491224974 bytes 348058319500 (324.1 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 1597 bytes 204525 (199.7 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> Any way I can figure out what the third hop is from my tracepath? >> >> Thanks as always for the sage advice! 
>> >> -erich >> >> On 10/07/2015 09:57 AM, Erich Weiler wrote: >>> Yeah, I made the changes and then recreated all the networks. For >>> some reason br-int and the individual virtual instance interfaces on >>> the compute node still show 1500 byte frames. >>> >>> Has anyone else configured jumbo frames in a Kilo environment? Or >>> maybe I'm just an outlier... ;) >>> >>> -erich >>> >>> On 10/07/2015 01:46 AM, Pedro Navarro Perez wrote: >>>> Hi Erich, >>>> >>>> did you recreate the neutron networks after the configuration changes? >>>> >>>> Pedro Navarro P?rez >>>> OpenStack product specialist >>>> Red Hat Iberia >>>> Passeig de Gr?cia 120, >>>> 08008 Barcelona >>>> Spain >>>> M +34 639 642 379 >>>> E pnavarro at redhat.com >>>> >>>> ----- Original Message ----- >>>> From: "Erich Weiler" >>>> To: rdo-list at redhat.com >>>> Sent: Wednesday, 7 October, 2015 2:34:28 AM >>>> Subject: [Rdo-list] Jumbo MTU to instances in Kilo? >>>> >>>> Hi Y'all, >>>> >>>> I know someone must have figured this one out, but I can't seem to >>>> get >>>> 9000 byte MTUs working. I have it set in plugin.ini, etc, my nodes >>>> have >>>> MTU=9000 on their interfaces, so does the network node. dnsmasq >>>> also is configured to set MTU=9000 on instances, which works. But >>>> I still can't ping with large packets to my instance: >>>> >>>> [weiler at stacker ~]$ ping 10.50.100.2 PING 10.50.100.2 (10.50.100.2) >>>> 56(84) bytes of data. >>>> 64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms >>>> 64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms >>>> 64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms >>>> >>>> That works fine. This however doesn't work: >>>> >>>> [root at stacker ~]# ping -M do -s 8000 10.50.100.2 PING 10.50.100.2 >>>> (10.50.100.2) 8000(8028) bytes of data. >>>> From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500) >>>> ping: local error: Message too long, mtu=1500 >>>> ping: local error: Message too long, mtu=1500 >>>> ping: local error: Message too long, mtu=1500 >>>> ping: local error: Message too long, mtu=1500 >>>> >>>> It looks like somehow the br-int interface for OVS isn't set at >>>> 9000, but I can't figure out how to do that... 
>>>> >>>> Here's ifconfig on my compute node: >>>> >>>> br-enp3s0f0: flags=4163 mtu 9000 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>> 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>> RX packets 2401432 bytes 359276713 (342.6 MiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 30 bytes 1572 (1.5 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> br-int: flags=4163 mtu 1500 >>>> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid >>>> 0x20 >>>> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >>>> RX packets 69 bytes 6866 (6.7 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> enp3s0f0: flags=4419 mtu 9000 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>> 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >>>> RX packets 130174458 bytes 15334807929 (14.2 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 22919305 bytes 5859090420 (5.4 GiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> enp3s0f0.50: flags=4163 mtu 9000 >>>> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>> 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>> RX packets 38429352 bytes 5152853436 (4.7 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 419842 bytes 101161981 (96.4 MiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> lo: flags=73 mtu 65536 >>>> inet 127.0.0.1 netmask 255.0.0.0 >>>> inet6 ::1 prefixlen 128 scopeid 0x10 >>>> loop txqueuelen 0 (Local Loopback) >>>> RX packets 22141566 bytes 1185622090 (1.1 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 22141566 bytes 1185622090 (1.1 GiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> qbr247da3ed-a4: flags=4163 mtu 1500 >>>> inet6 fe80::5c8f:c0ff:fe79:bc11 prefixlen 64 scopeid >>>> 0x20 >>>> ether b6:1f:54:3f:3d:48 txqueuelen 0 (Ethernet) >>>> RX packets 16 bytes 1472 (1.4 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> qbrf42ea01f-fe: flags=4163 mtu 1500 >>>> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid >>>> 0x20 >>>> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >>>> RX packets 15 bytes 1456 (1.4 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> qvb247da3ed-a4: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::b41f:54ff:fe3f:3d48 prefixlen 64 scopeid >>>> 0x20 >>>> ether b6:1f:54:3f:3d:48 txqueuelen 1000 (Ethernet) >>>> RX packets 247 bytes 28323 (27.6 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 233 bytes 25355 (24.7 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> qvbf42ea01f-fe: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid >>>> 0x20 >>>> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >>>> RX packets 377 bytes 57664 (56.3 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 333 bytes 38765 (37.8 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> qvo247da3ed-a4: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::dcfa:f1ff:fe03:ee88 prefixlen 64 scopeid >>>> 0x20 >>>> ether 
de:fa:f1:03:ee:88 txqueuelen 1000 (Ethernet) >>>> RX packets 233 bytes 25355 (24.7 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 247 bytes 28323 (27.6 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> qvof42ea01f-fe: flags=4419 >>>> mtu 1500 >>>> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid >>>> 0x20 >>>> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >>>> RX packets 333 bytes 38765 (37.8 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 377 bytes 57664 (56.3 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> tap247da3ed-a4: flags=4163 mtu 1500 >>>> inet6 fe80::fc16:3eff:fede:5eea prefixlen 64 scopeid >>>> 0x20 >>>> ether fe:16:3e:de:5e:ea txqueuelen 500 (Ethernet) >>>> RX packets 219 bytes 24239 (23.6 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 224 bytes 26661 (26.0 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> virbr0: flags=4099 mtu 1500 >>>> inet 192.168.122.1 netmask 255.255.255.0 broadcast >>>> 192.168.122.255 >>>> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >>>> RX packets 0 bytes 0 (0.0 B) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 0 bytes 0 (0.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>> 0 >>>> >>>> This is on RHEL 7.1. Any obvious way I can get all the >>>> intermediate bridges to MTU=9000? I've RTFM'd and googled to no avail... >>>> >>>> Here's the ovs-vsctl outout: >>>> >>>> [root at node-136 ~]# ovs-vsctl show >>>> 6f5a5f00-59e2-4420-aeaf-7ad464ead232 >>>> Bridge br-int >>>> fail_mode: secure >>>> Port br-int >>>> Interface br-int >>>> type: internal >>>> Port "qvo247da3ed-a4" >>>> tag: 1 >>>> Interface "qvo247da3ed-a4" >>>> Port "int-br-eth1" >>>> Interface "int-br-eth1" >>>> Port "int-br-enp3s0f0" >>>> Interface "int-br-enp3s0f0" >>>> type: patch >>>> options: {peer="phy-br-enp3s0f0"} >>>> Bridge "br-enp3s0f0" >>>> Port "enp3s0f0" >>>> Interface "enp3s0f0" >>>> Port "br-enp3s0f0" >>>> Interface "br-enp3s0f0" >>>> type: internal >>>> Port "phy-br-enp3s0f0" >>>> Interface "phy-br-enp3s0f0" >>>> type: patch >>>> options: {peer="int-br-enp3s0f0"} >>>> ovs_version: "2.3.1" >>>> >>>> Many thanks if anyone has any information on this topic! Or can >>>> point me to some documentation I missed... >>>> >>>> Thanks, >>>> erich >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ihrachys at redhat.com Fri Oct 9 12:33:25 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 9 Oct 2015 14:33:25 +0200 Subject: [Rdo-list] Jumbo MTU to instances in Kilo? 
In-Reply-To: <56169860.2000404@soe.ucsc.edu>
References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu> <5615B045.1070503@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc> <56169860.2000404@soe.ucsc.edu>
Message-ID: <713CF641-FBC2-4E3C-AE3A-4B3973314471@redhat.com>

> On 08 Oct 2015, at 18:22, Erich Weiler wrote:
>
> Thanks Amedeo,
>
> The bit about the config item in the l3_agent.ini file is new to me - I couldn't find that in the documentation, or even as a comment in the file as a config option. If it is a config item as you point out, maybe it should have a commented section in l3_agent.ini?
>
> Thanks for the insight!

Thanks folks for pointing out the missing option in the config files! I reported the issue:

https://bugs.launchpad.net/neutron/+bug/1504527

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From dsneddon at redhat.com Fri Oct 9 17:39:20 2015
From: dsneddon at redhat.com (Dan Sneddon)
Date: Fri, 09 Oct 2015 10:39:20 -0700
Subject: [Rdo-list] R: R: Jumbo MTU to instances in Kilo?
In-Reply-To: <751EEF2DDA813143A5F20A73F5B78F382D60AF@ex022ims.fastwebit.ofc>
References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu> <5615B045.1070503@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc> <56169860.2000404@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D60AF@ex022ims.fastwebit.ofc>
Message-ID: <5617FBC8.8020702@redhat.com>

Amedeo,

Thanks for pointing this out. Although the KB article now includes this
setting, some of our other documentation doesn't; I'll make sure it
gets added.

I'm curious whether anyone has tested out the new MTU-related options
that were added to Kilo:

advertise_mtu
path_mtu
segment_mtu
physnet_mtus

I haven't gotten a chance to test and document these new options
myself. They serve to simplify configuration a bit, but also the new
physnet_mtus option allows you to set a different MTU per interface:

Example:
physnet_mtus = physnet1:1550, physnet2:1500
Or, to set MTU for physnet2 and leave physnet1 as default:
physnet_mtus = physnet2:1550

Lastly, has anyone ever run into problems when running (MTU - 50 bytes)
as the veth_mtu with VXLAN? I see documentation all over recommending
(MTU - 100 bytes), but I don't see why VXLAN should take that many
extra bytes. I've done extensive testing at VM MTU 8950 over a 9000 MTU
link, and never run into an issue. Is this just cargo-culting, or is
there a reason to give VXLAN additional headroom in some scenarios?
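For reference, my untested reading of the spec is that these get wired
up roughly as follows; the file locations and section names here are my
assumption from the spec text, not something verified on a deployment:

# /etc/neutron/neutron.conf -- assumed location
[DEFAULT]
advertise_mtu = True

# /etc/neutron/plugin.ini (ML2) -- assumed location
[ml2]
path_mtu = 9000
segment_mtu = 9000
physnet_mtus = physnet1:9000, physnet2:1500

On the headroom question, the arithmetic comes out at 50 bytes over an
IPv4 underlay: 8 (VXLAN header) + 8 (UDP) + 20 (outer IPv4) + 14 (outer
Ethernet) = 50. An IPv6 underlay grows the outer IP header to 40 bytes,
70 total, which may be where the (MTU - 100) recommendations come from.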
--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter

On 10/09/2015 05:24 AM, Salvati Amedeo wrote:
> Erich you are welcome in the club :D
>
> One side note: as we have rhosp and not rdo, we asked rh to document this and they wrote a solution on their kb:
>
> https://access.redhat.com/solutions/1417133
>
> Regards,
> Amedeo
>
> -----Messaggio originale-----
> Da: Erich Weiler [mailto:weiler at soe.ucsc.edu]
> Inviato: giovedì 8 ottobre 2015 18:23
> A: Salvati Amedeo; Pedro Navarro Perez
> Cc: rdo-list at redhat.com
> Oggetto: Re: R: [Rdo-list] Jumbo MTU to instances in Kilo?
>
> Thanks Amedeo,
>
> The bit about the config item in the l3_agent.ini file is new to me - I couldn't find that in the documentation, or even as a comment in the file as a config option. If it is a config item as you point out, maybe it should have a commented section in l3_agent.ini?
>
> Thanks for the insight!
>
> cheers,
> erich
>
> On 10/08/2015 03:02 AM, Salvati Amedeo wrote:
>> Eric,
>>
>> also, to set jumbo frames on your env, you have to set mtu from VM to controller:
>>
>> # echo "dhcp-option-force=26,8900" > /etc/neutron/dnsmasq-neutron.conf
>> # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
>> # openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent veth_mtu 8900
>> # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT network_device_mtu 9000
>> # openstack-config --set /etc/nova/nova.conf DEFAULT network_device_mtu 9000 <--- this on every nova-compute
>>
>> take a look at l3_agent.ini file, without network_device_mtu every new
>> router will use default mtu at 1500
>>
>> # ip netns exec qrouter-26f64a08-52ab-4643-b903-9aea6eae047a /bin/bash
>> # ip a | grep mtu
>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
>> 69: ha-89546945-ab: mtu 9000 qdisc
>> noqueue state UNKNOWN
>> 74: qr-f207f652-da: mtu 9000 qdisc
>> noqueue state UNKNOWN
>> 81: qg-ab978cd0-ad: mtu 9000 qdisc
>> noqueue state UNKNOWN
>>
>> HTH
>> Amedeo
>>
>> -----Messaggio originale-----
>> Da: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com]
>> Per conto di Erich Weiler
>> Inviato: giovedì 8 ottobre 2015 01:53
>> A: Pedro Navarro Perez
>> Cc: rdo-list at redhat.com
>> Oggetto: Re: [Rdo-list] Jumbo MTU to instances in Kilo?
>>
>> Actually I was wrong, it WAS on the network node. The virtual router interfaces were not set to MTU=9000.
On network node: >> >> [root at os-net-01 ~]# ip netns >> qdhcp-c395cff9-af7b-4456-91e3-3c55e6c2c5f5 >> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 >> >> i[root at os-net-01 ~]# ip netns exec >> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 0 bytes 0 (0.0 B) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 0 bytes 0 (0.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qg-fa1e2a28-25: flags=4163 mtu 1500 >> inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 >> inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 >> ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) >> RX packets 34071065 bytes 5046408745 (4.6 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 442 bytes 51915 (50.6 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qr-51904c89-b8: flags=4163 mtu 1500 >> inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 >> inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 >> ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) >> RX packets 702 bytes 75369 (73.6 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 814 bytes 92259 (90.0 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> I can fix it manually: >> >> [root at os-net-01 neutron]# ip netns exec >> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qg-fa1e2a28-25 >> mtu >> 9000 >> [root at os-net-01 neutron]# ip netns exec >> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qr-51904c89-b8 >> mtu >> 9000 >> [root at os-net-01 neutron]# ip netns exec >> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 0 bytes 0 (0.0 B) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 0 bytes 0 (0.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qg-fa1e2a28-25: flags=4163 mtu 9000 >> inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 >> inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 >> ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) >> RX packets 34086053 bytes 5048637833 (4.7 GiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 442 bytes 51915 (50.6 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> qr-51904c89-b8: flags=4163 mtu 9000 >> inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 >> inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 >> ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) >> RX packets 702 bytes 75369 (73.6 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 814 bytes 92259 (90.0 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> And then I have a jumbo clean path everywhere! All is good then. >> But... How to set this in a config file or something so I don't have to do it manually? >> >> I found this bug report: >> >> https://bugs.launchpad.net/neutron/+bug/1311097 >> >> Anyone know if that bug is still out there? Or how can I set virtual router interfaces MTU by default when I create the router? 
>> >> cheers, >> erich >> >> On 10/07/2015 04:35 PM, Erich Weiler wrote: >>> Actually I think I'm closer - on the compute nodes, I set this in >>> nova.conf: >>> >>> network_device_mtu=9000 >>> >>> even though there was a big note above it that said not to use it >>> because this option was deprecated. But after setting that option, >>> and restarting nova and openvswitch, br-int, my tap device and my qvb >>> device all got set to MTU=9000. So I'm closer! But still one item >>> is blocking me. I show this tracepath from my controller node direct >>> to the VM (which is on a compute node on the local network): >>> >>> # tracepath 10.50.100.4 >>> 1?: [LOCALHOST] pmtu 9000 >>> 1: 10.50.100.4 0.682ms >>> 1: 10.50.100.4 0.241ms >>> 2: 10.50.100.4 0.297ms pmtu >>> 1500 >>> 2: 10.50.100.4 1.664ms reached >>> >>> 10.50.100.4 is the VM. It looks like the path is jumbo clean up >>> until that third hop. But the thing is, I don't know what the third hop is. >>> ;) >>> >>> On my compute node I still see some stuff with MTU=1500, but I'm not >>> sure if one of those is blocking me: >>> >>> # ifconfig >>> br-enp3s0f0: flags=4163 mtu 9000 >>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>> RX packets 2401498 bytes 359284253 (342.6 MiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 30 bytes 1572 (1.5 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> br-int: flags=4163 mtu 9000 >>> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid 0x20 >>> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >>> RX packets 133 bytes 12934 (12.6 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 8 bytes 648 (648.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> enp3s0f0: flags=4419 mtu 9000 >>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >>> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >>> RX packets 165957142 bytes 20333410092 (18.9 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 23299881 bytes 5950708819 (5.5 GiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> enp3s0f0.50: flags=4163 mtu 9000 >>> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>> RX packets 6014767 bytes 813880745 (776.1 MiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 79301 bytes 19052451 (18.1 MiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> lo: flags=73 mtu 65536 >>> inet 127.0.0.1 netmask 255.0.0.0 >>> inet6 ::1 prefixlen 128 scopeid 0x10 >>> loop txqueuelen 0 (Local Loopback) >>> RX packets 22462729 bytes 1202484822 (1.1 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 22462729 bytes 1202484822 (1.1 GiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qbr922bd9f5-bb: flags=4163 mtu 9000 >>> inet6 fe80::4c1a:55ff:feba:14c3 prefixlen 64 scopeid 0x20 >>> ether 56:a6:a6:db:83:c4 txqueuelen 0 (Ethernet) >>> RX packets 16 bytes 1520 (1.4 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 8 bytes 648 (648.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qbrf42ea01f-fe: flags=4163 mtu 1500 >>> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid 0x20 >>> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >>> RX packets 15 bytes 1456 (1.4 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 8 bytes 648 (648.0 B) >>> TX 
errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvb922bd9f5-bb: flags=4419 >>> mtu >>> 9000 >>> inet6 fe80::54a6:a6ff:fedb:83c4 prefixlen 64 scopeid 0x20 >>> ether 56:a6:a6:db:83:c4 txqueuelen 1000 (Ethernet) >>> RX packets 86 bytes 9610 (9.3 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 133 bytes 12767 (12.4 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvbf42ea01f-fe: flags=4419 >>> mtu >>> 1500 >>> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid 0x20 >>> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >>> RX packets 377 bytes 57664 (56.3 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 333 bytes 38765 (37.8 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvo922bd9f5-bb: flags=4419 >>> mtu >>> 9000 >>> inet6 fe80::b44a:bff:fe72:aaea prefixlen 64 scopeid 0x20 >>> ether b6:4a:0b:72:aa:ea txqueuelen 1000 (Ethernet) >>> RX packets 133 bytes 12767 (12.4 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 86 bytes 9610 (9.3 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qvof42ea01f-fe: flags=4419 >>> mtu >>> 1500 >>> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid 0x20 >>> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >>> RX packets 333 bytes 38765 (37.8 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 377 bytes 57664 (56.3 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> tap922bd9f5-bb: flags=4163 mtu 9000 >>> inet6 fe80::fc16:3eff:fefa:9945 prefixlen 64 scopeid 0x20 >>> ether fe:16:3e:fa:99:45 txqueuelen 500 (Ethernet) >>> RX packets 118 bytes 11561 (11.2 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 95 bytes 10316 (10.0 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> virbr0: flags=4099 mtu 1500 >>> inet 192.168.122.1 netmask 255.255.255.0 broadcast >>> 192.168.122.255 >>> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >>> RX packets 0 bytes 0 (0.0 B) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 0 bytes 0 (0.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> My network node has all interfaces set to MTU=9000. I thought maybe the >>> bottleneck might be there but I don't think it is. 
Here's ifconfig >>> from my network node: >>> >>> # ifconfig >>> lo: flags=73 mtu 65536 >>> inet 127.0.0.1 netmask 255.0.0.0 >>> inet6 ::1 prefixlen 128 scopeid 0x10 >>> loop txqueuelen 0 (Local Loopback) >>> RX packets 2042 bytes 238727 (233.1 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 2042 bytes 238727 (233.1 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> p1p2: flags=4163 mtu 9000 >>> inet6 fe80::207:43ff:fe10:deb8 prefixlen 64 scopeid 0x20 >>> ether 00:07:43:10:de:b8 txqueuelen 1000 (Ethernet) >>> RX packets 2156053308 bytes 325330839639 (302.9 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 223004 bytes 24769304 (23.6 MiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> device interrupt 72 >>> >>> p2p1: flags=4163 mtu 9000 >>> inet 10.50.1.51 netmask 255.255.0.0 broadcast 10.50.255.255 >>> inet6 fe80::260:ddff:fe44:2aea prefixlen 64 scopeid 0x20 >>> ether 00:60:dd:44:2a:ea txqueuelen 1000 (Ethernet) >>> RX packets 49352916 bytes 3501547231 (3.2 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 18876911 bytes 3768900461 (3.5 GiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> p2p2: flags=4163 mtu 9000 >>> inet6 fe80::260:ddff:fe44:2aeb prefixlen 64 scopeid 0x20 >>> ether 00:60:dd:44:2a:eb txqueuelen 1000 (Ethernet) >>> RX packets 2491224974 bytes 348058319500 (324.1 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 1597 bytes 204525 (199.7 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> Any way I can figure out what the third hop is from my tracepath? >>> >>> Thanks as always for the sage advice! >>> >>> -erich >>> >>> On 10/07/2015 09:57 AM, Erich Weiler wrote: >>>> Yeah, I made the changes and then recreated all the networks. For >>>> some reason br-int and the individual virtual instance interfaces on >>>> the compute node still show 1500 byte frames. >>>> >>>> Has anyone else configured jumbo frames in a Kilo environment? Or >>>> maybe I'm just an outlier... ;) >>>> >>>> -erich >>>> >>>> On 10/07/2015 01:46 AM, Pedro Navarro Perez wrote: >>>>> Hi Erich, >>>>> >>>>> did you recreate the neutron networks after the configuration changes? >>>>> >>>>> Pedro Navarro P?rez >>>>> OpenStack product specialist >>>>> Red Hat Iberia >>>>> Passeig de Gr?cia 120, >>>>> 08008 Barcelona >>>>> Spain >>>>> M +34 639 642 379 >>>>> E pnavarro at redhat.com >>>>> >>>>> ----- Original Message ----- >>>>> From: "Erich Weiler" >>>>> To: rdo-list at redhat.com >>>>> Sent: Wednesday, 7 October, 2015 2:34:28 AM >>>>> Subject: [Rdo-list] Jumbo MTU to instances in Kilo? >>>>> >>>>> Hi Y'all, >>>>> >>>>> I know someone must have figured this one out, but I can't seem to >>>>> get >>>>> 9000 byte MTUs working. I have it set in plugin.ini, etc, my nodes >>>>> have >>>>> MTU=9000 on their interfaces, so does the network node. dnsmasq >>>>> also is configured to set MTU=9000 on instances, which works. But >>>>> I still can't ping with large packets to my instance: >>>>> >>>>> [weiler at stacker ~]$ ping 10.50.100.2 PING 10.50.100.2 (10.50.100.2) >>>>> 56(84) bytes of data. >>>>> 64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms >>>>> 64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms >>>>> 64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms >>>>> >>>>> That works fine. This however doesn't work: >>>>> >>>>> [root at stacker ~]# ping -M do -s 8000 10.50.100.2 PING 10.50.100.2 >>>>> (10.50.100.2) 8000(8028) bytes of data. 
>>>>> From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500) >>>>> ping: local error: Message too long, mtu=1500 >>>>> ping: local error: Message too long, mtu=1500 >>>>> ping: local error: Message too long, mtu=1500 >>>>> ping: local error: Message too long, mtu=1500 >>>>> >>>>> It looks like somehow the br-int interface for OVS isn't set at >>>>> 9000, but I can't figure out how to do that... >>>>> >>>>> Here's ifconfig on my compute node: >>>>> >>>>> br-enp3s0f0: flags=4163 mtu 9000 >>>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>>> 0x20 >>>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>>> RX packets 2401432 bytes 359276713 (342.6 MiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 30 bytes 1572 (1.5 KiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> br-int: flags=4163 mtu 1500 >>>>> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid >>>>> 0x20 >>>>> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >>>>> RX packets 69 bytes 6866 (6.7 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 8 bytes 648 (648.0 B) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> enp3s0f0: flags=4419 mtu 9000 >>>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>>> 0x20 >>>>> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >>>>> RX packets 130174458 bytes 15334807929 (14.2 GiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 22919305 bytes 5859090420 (5.4 GiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> enp3s0f0.50: flags=4163 mtu 9000 >>>>> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >>>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>>> 0x20 >>>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>>> RX packets 38429352 bytes 5152853436 (4.7 GiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 419842 bytes 101161981 (96.4 MiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> lo: flags=73 mtu 65536 >>>>> inet 127.0.0.1 netmask 255.0.0.0 >>>>> inet6 ::1 prefixlen 128 scopeid 0x10 >>>>> loop txqueuelen 0 (Local Loopback) >>>>> RX packets 22141566 bytes 1185622090 (1.1 GiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 22141566 bytes 1185622090 (1.1 GiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> qbr247da3ed-a4: flags=4163 mtu 1500 >>>>> inet6 fe80::5c8f:c0ff:fe79:bc11 prefixlen 64 scopeid >>>>> 0x20 >>>>> ether b6:1f:54:3f:3d:48 txqueuelen 0 (Ethernet) >>>>> RX packets 16 bytes 1472 (1.4 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 8 bytes 648 (648.0 B) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> qbrf42ea01f-fe: flags=4163 mtu 1500 >>>>> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid >>>>> 0x20 >>>>> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >>>>> RX packets 15 bytes 1456 (1.4 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 8 bytes 648 (648.0 B) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> qvb247da3ed-a4: flags=4419 >>>>> mtu 1500 >>>>> inet6 fe80::b41f:54ff:fe3f:3d48 prefixlen 64 scopeid >>>>> 0x20 >>>>> ether b6:1f:54:3f:3d:48 txqueuelen 1000 (Ethernet) >>>>> RX packets 247 bytes 28323 (27.6 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 233 bytes 25355 (24.7 KiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> 
qvbf42ea01f-fe: flags=4419 >>>>> mtu 1500 >>>>> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid >>>>> 0x20 >>>>> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >>>>> RX packets 377 bytes 57664 (56.3 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 333 bytes 38765 (37.8 KiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> qvo247da3ed-a4: flags=4419 >>>>> mtu 1500 >>>>> inet6 fe80::dcfa:f1ff:fe03:ee88 prefixlen 64 scopeid >>>>> 0x20 >>>>> ether de:fa:f1:03:ee:88 txqueuelen 1000 (Ethernet) >>>>> RX packets 233 bytes 25355 (24.7 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 247 bytes 28323 (27.6 KiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> qvof42ea01f-fe: flags=4419 >>>>> mtu 1500 >>>>> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid >>>>> 0x20 >>>>> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >>>>> RX packets 333 bytes 38765 (37.8 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 377 bytes 57664 (56.3 KiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> tap247da3ed-a4: flags=4163 mtu 1500 >>>>> inet6 fe80::fc16:3eff:fede:5eea prefixlen 64 scopeid >>>>> 0x20 >>>>> ether fe:16:3e:de:5e:ea txqueuelen 500 (Ethernet) >>>>> RX packets 219 bytes 24239 (23.6 KiB) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 224 bytes 26661 (26.0 KiB) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> virbr0: flags=4099 mtu 1500 >>>>> inet 192.168.122.1 netmask 255.255.255.0 broadcast >>>>> 192.168.122.255 >>>>> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >>>>> RX packets 0 bytes 0 (0.0 B) >>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>> TX packets 0 bytes 0 (0.0 B) >>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>> 0 >>>>> >>>>> This is on RHEL 7.1. Any obvious way I can get all the >>>>> intermediate bridges to MTU=9000? I've RTFM'd and googled to no avail... >>>>> >>>>> Here's the ovs-vsctl outout: >>>>> >>>>> [root at node-136 ~]# ovs-vsctl show >>>>> 6f5a5f00-59e2-4420-aeaf-7ad464ead232 >>>>> Bridge br-int >>>>> fail_mode: secure >>>>> Port br-int >>>>> Interface br-int >>>>> type: internal >>>>> Port "qvo247da3ed-a4" >>>>> tag: 1 >>>>> Interface "qvo247da3ed-a4" >>>>> Port "int-br-eth1" >>>>> Interface "int-br-eth1" >>>>> Port "int-br-enp3s0f0" >>>>> Interface "int-br-enp3s0f0" >>>>> type: patch >>>>> options: {peer="phy-br-enp3s0f0"} >>>>> Bridge "br-enp3s0f0" >>>>> Port "enp3s0f0" >>>>> Interface "enp3s0f0" >>>>> Port "br-enp3s0f0" >>>>> Interface "br-enp3s0f0" >>>>> type: internal >>>>> Port "phy-br-enp3s0f0" >>>>> Interface "phy-br-enp3s0f0" >>>>> type: patch >>>>> options: {peer="int-br-enp3s0f0"} >>>>> ovs_version: "2.3.1" >>>>> >>>>> Many thanks if anyone has any information on this topic! Or can >>>>> point me to some documentation I missed... 
>>>>> >>>>> Thanks, >>>>> erich >>>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From dsneddon at redhat.com Fri Oct 9 17:54:40 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 09 Oct 2015 10:54:40 -0700 Subject: [Rdo-list] R: R: Jumbo MTU to instances in Kilo? In-Reply-To: <5617FBC8.8020702@redhat.com> References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu> <5615B045.1070503@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc> <56169860.2000404@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D60AF@ex022ims.fastwebit.ofc> <5617FBC8.8020702@redhat.com> Message-ID: <5617FF60.8030405@redhat.com> I forgot to link to the MTU Selection and Advertisement spec: http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html On 10/09/2015 10:39 AM, Dan Sneddon wrote: > Amedeo, > > Thanks for pointing this out. Although the KB article now includes this > setting, some of our other documentation doesn't include this setting. > I'll make sure it gets added. > > I'm curious whether anyone has tested out the new MTU-related options > that were added to Kilo: > > advertise_mtu > path_mtu > segment_mtu > physnet_mtus > > I haven't gotten a chance to test and document these new options > myself. They serve to simplify configuration a bit, but also the new > physnet_mtus option allows you to set a different MTU per interface: > > Example: > physnet_mtus = physnet1:1550, physnet2:1500 > Or, to set MTU for physnet2 and leave physnet1 as default: > physnet_mtus = physnet2:1550 > > Lastly, has anyone ever run into problems when running (MTU - 50 bytes) > as the veth_mtu with VXLAN? I see documentation all over recommending > (MTU - 100 bytes), but I don't see why VXLAN should take that many > extra bytes. I've done extensive testing at VM MTU 8950 over a 9000 MTU > link, and never run into an issue. Is this just cargo-culting, or is > there a reason to give VXLAN additional headroom in some scenarios? > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > On 10/09/2015 05:24 AM, Salvati Amedeo wrote: >> Erich you are welcome in the club :D >> >> One side note: as we have rhosp and not rdo, we asked to rh to document this and they wrote a solution on their kb: >> >> https://access.redhat.com/solutions/1417133 >> >> Regards, >> Amedeo >> >> -----Messaggio originale----- >> Da: Erich Weiler [mailto:weiler at soe.ucsc.edu] >> Inviato: gioved? 8 ottobre 2015 18:23 >> A: Salvati Amedeo; Pedro Navarro Perez >> Cc: rdo-list at redhat.com >> Oggetto: Re: R: [Rdo-list] Jumbo MTU to instances in Kilo? 
>> >> Thanks Amedeo, >> >> The bit about the config item in the l3_agent.ini file is new to me - I couldn't find that in the documentation, or even as a comment in the file as a config option. If it is a config item as you point out, maybe it should have a commented section in l3_agent.ini? >> >> Thanks for the insight! >> >> cheers, >> erich >> >> On 10/08/2015 03:02 AM, Salvati Amedeo wrote: >>> Eric, >>> >>> also, to set jumbo frames on your env, you have to set mtu from VM to controller: >>> >>> # echo "dhcp-option-force=26,8900" > /etc/neutron/dnsmasq-neutron.conf >>> # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf >>> # openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent veth_mtu 8900 >>> # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT network_device_mtu 9000 >>> # openstack-config --set /etc/nova/nova.conf DEFAULT network_device_mtu 9000 <--- this on every nova-compute >>> >>> take a look at l3_agent.ini file, without network_device_mtu every new >>> router will use default mtu at 1500 >>> >>> # ip netns exec qrouter-26f64a08-52ab-4643-b903-9aea6eae047a /bin/bash >>> # ip a | grep mtu >>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >>> 69: ha-89546945-ab: mtu 9000 qdisc >>> noqueue state UNKNOWN >>> 74: qr-f207f652-da: mtu 9000 qdisc >>> noqueue state UNKNOWN >>> 81: qg-ab978cd0-ad: mtu 9000 qdisc >>> noqueue state UNKNOWN >>> >>> HTH >>> Amedeo >>> >>> -----Messaggio originale----- >>> Da: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] >>> Per conto di Erich Weiler >>> Inviato: gioved? 8 ottobre 2015 01:53 >>> A: Pedro Navarro Perez >>> Cc: rdo-list at redhat.com >>> Oggetto: Re: [Rdo-list] Jumbo MTU to instances in Kilo? >>> >>> Actually I was wrong, it WAS on the network node. The virtual router interfaces were not set to MTU=9000. 
On network node: >>> >>> [root at os-net-01 ~]# ip netns >>> qdhcp-c395cff9-af7b-4456-91e3-3c55e6c2c5f5 >>> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 >>> >>> i[root at os-net-01 ~]# ip netns exec >>> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig >>> lo: flags=73 mtu 65536 >>> inet 127.0.0.1 netmask 255.0.0.0 >>> inet6 ::1 prefixlen 128 scopeid 0x10 >>> loop txqueuelen 0 (Local Loopback) >>> RX packets 0 bytes 0 (0.0 B) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 0 bytes 0 (0.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qg-fa1e2a28-25: flags=4163 mtu 1500 >>> inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 >>> inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 >>> ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) >>> RX packets 34071065 bytes 5046408745 (4.6 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 442 bytes 51915 (50.6 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qr-51904c89-b8: flags=4163 mtu 1500 >>> inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 >>> inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 >>> ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) >>> RX packets 702 bytes 75369 (73.6 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 814 bytes 92259 (90.0 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> I can fix it manually: >>> >>> [root at os-net-01 neutron]# ip netns exec >>> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qg-fa1e2a28-25 >>> mtu >>> 9000 >>> [root at os-net-01 neutron]# ip netns exec >>> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig qr-51904c89-b8 >>> mtu >>> 9000 >>> [root at os-net-01 neutron]# ip netns exec >>> qrouter-0b52e3a6-135c-4481-b286-7c96229f6555 ifconfig >>> lo: flags=73 mtu 65536 >>> inet 127.0.0.1 netmask 255.0.0.0 >>> inet6 ::1 prefixlen 128 scopeid 0x10 >>> loop txqueuelen 0 (Local Loopback) >>> RX packets 0 bytes 0 (0.0 B) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 0 bytes 0 (0.0 B) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qg-fa1e2a28-25: flags=4163 mtu 9000 >>> inet 10.50.100.1 netmask 255.255.0.0 broadcast 10.50.255.255 >>> inet6 fe80::f816:3eff:fe6a:608b prefixlen 64 scopeid 0x20 >>> ether fa:16:3e:6a:60:8b txqueuelen 0 (Ethernet) >>> RX packets 34086053 bytes 5048637833 (4.7 GiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 442 bytes 51915 (50.6 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> qr-51904c89-b8: flags=4163 mtu 9000 >>> inet 10.100.0.1 netmask 255.255.0.0 broadcast 10.100.255.255 >>> inet6 fe80::f816:3eff:fe37:eca6 prefixlen 64 scopeid 0x20 >>> ether fa:16:3e:37:ec:a6 txqueuelen 0 (Ethernet) >>> RX packets 702 bytes 75369 (73.6 KiB) >>> RX errors 0 dropped 0 overruns 0 frame 0 >>> TX packets 814 bytes 92259 (90.0 KiB) >>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>> >>> And then I have a jumbo clean path everywhere! All is good then. >>> But... How to set this in a config file or something so I don't have to do it manually? >>> >>> I found this bug report: >>> >>> https://bugs.launchpad.net/neutron/+bug/1311097 >>> >>> Anyone know if that bug is still out there? Or how can I set virtual router interfaces MTU by default when I create the router? 
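A persistent version of that manual fix is the l3_agent knob quoted earlier in this thread; a minimal sketch, assuming the stock RDO file layout and service name (it only applies to routers created after the agent restart, so existing qg-/qr- ports may still need the manual bump):

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT network_device_mtu 9000
systemctl restart neutron-l3-agent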
>>> >>> cheers, >>> erich >>> >>> On 10/07/2015 04:35 PM, Erich Weiler wrote: >>>> Actually I think I'm closer - on the compute nodes, I set this in >>>> nova.conf: >>>> >>>> network_device_mtu=9000 >>>> >>>> even though there was a big note above it that said not to use it >>>> because this option was deprecated. But after setting that option, >>>> and restarting nova and openvswitch, br-int, my tap device and my qvb >>>> device all got set to MTU=9000. So I'm closer! But still one item >>>> is blocking me. I show this tracepath from my controller node direct >>>> to the VM (which is on a compute node on the local network): >>>> >>>> # tracepath 10.50.100.4 >>>> 1?: [LOCALHOST] pmtu 9000 >>>> 1: 10.50.100.4 0.682ms >>>> 1: 10.50.100.4 0.241ms >>>> 2: 10.50.100.4 0.297ms pmtu >>>> 1500 >>>> 2: 10.50.100.4 1.664ms reached >>>> >>>> 10.50.100.4 is the VM. It looks like the path is jumbo clean up >>>> until that third hop. But the thing is, I don't know what the third hop is. >>>> ;) >>>> >>>> On my compute node I still see some stuff with MTU=1500, but I'm not >>>> sure if one of those is blocking me: >>>> >>>> # ifconfig >>>> br-enp3s0f0: flags=4163 mtu 9000 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>> RX packets 2401498 bytes 359284253 (342.6 MiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 30 bytes 1572 (1.5 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> br-int: flags=4163 mtu 9000 >>>> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid 0x20 >>>> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >>>> RX packets 133 bytes 12934 (12.6 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> enp3s0f0: flags=4419 mtu 9000 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >>>> RX packets 165957142 bytes 20333410092 (18.9 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 23299881 bytes 5950708819 (5.5 GiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> enp3s0f0.50: flags=4163 mtu 9000 >>>> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid 0x20 >>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>> RX packets 6014767 bytes 813880745 (776.1 MiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 79301 bytes 19052451 (18.1 MiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> lo: flags=73 mtu 65536 >>>> inet 127.0.0.1 netmask 255.0.0.0 >>>> inet6 ::1 prefixlen 128 scopeid 0x10 >>>> loop txqueuelen 0 (Local Loopback) >>>> RX packets 22462729 bytes 1202484822 (1.1 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 22462729 bytes 1202484822 (1.1 GiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qbr922bd9f5-bb: flags=4163 mtu 9000 >>>> inet6 fe80::4c1a:55ff:feba:14c3 prefixlen 64 scopeid 0x20 >>>> ether 56:a6:a6:db:83:c4 txqueuelen 0 (Ethernet) >>>> RX packets 16 bytes 1520 (1.4 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qbrf42ea01f-fe: flags=4163 mtu 1500 >>>> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid 0x20 >>>> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >>>> RX packets 15 bytes 1456 (1.4 KiB) >>>> 
RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 8 bytes 648 (648.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvb922bd9f5-bb: flags=4419 >>>> mtu >>>> 9000 >>>> inet6 fe80::54a6:a6ff:fedb:83c4 prefixlen 64 scopeid 0x20 >>>> ether 56:a6:a6:db:83:c4 txqueuelen 1000 (Ethernet) >>>> RX packets 86 bytes 9610 (9.3 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 133 bytes 12767 (12.4 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvbf42ea01f-fe: flags=4419 >>>> mtu >>>> 1500 >>>> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid 0x20 >>>> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >>>> RX packets 377 bytes 57664 (56.3 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 333 bytes 38765 (37.8 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvo922bd9f5-bb: flags=4419 >>>> mtu >>>> 9000 >>>> inet6 fe80::b44a:bff:fe72:aaea prefixlen 64 scopeid 0x20 >>>> ether b6:4a:0b:72:aa:ea txqueuelen 1000 (Ethernet) >>>> RX packets 133 bytes 12767 (12.4 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 86 bytes 9610 (9.3 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> qvof42ea01f-fe: flags=4419 >>>> mtu >>>> 1500 >>>> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid 0x20 >>>> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >>>> RX packets 333 bytes 38765 (37.8 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 377 bytes 57664 (56.3 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> tap922bd9f5-bb: flags=4163 mtu 9000 >>>> inet6 fe80::fc16:3eff:fefa:9945 prefixlen 64 scopeid 0x20 >>>> ether fe:16:3e:fa:99:45 txqueuelen 500 (Ethernet) >>>> RX packets 118 bytes 11561 (11.2 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 95 bytes 10316 (10.0 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> virbr0: flags=4099 mtu 1500 >>>> inet 192.168.122.1 netmask 255.255.255.0 broadcast >>>> 192.168.122.255 >>>> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >>>> RX packets 0 bytes 0 (0.0 B) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 0 bytes 0 (0.0 B) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> My network node has all interfaces set to MTU=9000. I thought maybe the >>>> bottleneck might be there but I don't think it is. 
Here's ifconfig >>>> from my network node: >>>> >>>> # ifconfig >>>> lo: flags=73 mtu 65536 >>>> inet 127.0.0.1 netmask 255.0.0.0 >>>> inet6 ::1 prefixlen 128 scopeid 0x10 >>>> loop txqueuelen 0 (Local Loopback) >>>> RX packets 2042 bytes 238727 (233.1 KiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 2042 bytes 238727 (233.1 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> p1p2: flags=4163 mtu 9000 >>>> inet6 fe80::207:43ff:fe10:deb8 prefixlen 64 scopeid 0x20 >>>> ether 00:07:43:10:de:b8 txqueuelen 1000 (Ethernet) >>>> RX packets 2156053308 bytes 325330839639 (302.9 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 223004 bytes 24769304 (23.6 MiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> device interrupt 72 >>>> >>>> p2p1: flags=4163 mtu 9000 >>>> inet 10.50.1.51 netmask 255.255.0.0 broadcast 10.50.255.255 >>>> inet6 fe80::260:ddff:fe44:2aea prefixlen 64 scopeid 0x20 >>>> ether 00:60:dd:44:2a:ea txqueuelen 1000 (Ethernet) >>>> RX packets 49352916 bytes 3501547231 (3.2 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 18876911 bytes 3768900461 (3.5 GiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> p2p2: flags=4163 mtu 9000 >>>> inet6 fe80::260:ddff:fe44:2aeb prefixlen 64 scopeid 0x20 >>>> ether 00:60:dd:44:2a:eb txqueuelen 1000 (Ethernet) >>>> RX packets 2491224974 bytes 348058319500 (324.1 GiB) >>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>> TX packets 1597 bytes 204525 (199.7 KiB) >>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >>>> >>>> Any way I can figure out what the third hop is from my tracepath? >>>> >>>> Thanks as always for the sage advice! >>>> >>>> -erich >>>> >>>> On 10/07/2015 09:57 AM, Erich Weiler wrote: >>>>> Yeah, I made the changes and then recreated all the networks. For >>>>> some reason br-int and the individual virtual instance interfaces on >>>>> the compute node still show 1500 byte frames. >>>>> >>>>> Has anyone else configured jumbo frames in a Kilo environment? Or >>>>> maybe I'm just an outlier... ;) >>>>> >>>>> -erich >>>>> >>>>> On 10/07/2015 01:46 AM, Pedro Navarro Perez wrote: >>>>>> Hi Erich, >>>>>> >>>>>> did you recreate the neutron networks after the configuration changes? >>>>>> >>>>>> Pedro Navarro P?rez >>>>>> OpenStack product specialist >>>>>> Red Hat Iberia >>>>>> Passeig de Gr?cia 120, >>>>>> 08008 Barcelona >>>>>> Spain >>>>>> M +34 639 642 379 >>>>>> E pnavarro at redhat.com >>>>>> >>>>>> ----- Original Message ----- >>>>>> From: "Erich Weiler" >>>>>> To: rdo-list at redhat.com >>>>>> Sent: Wednesday, 7 October, 2015 2:34:28 AM >>>>>> Subject: [Rdo-list] Jumbo MTU to instances in Kilo? >>>>>> >>>>>> Hi Y'all, >>>>>> >>>>>> I know someone must have figured this one out, but I can't seem to >>>>>> get >>>>>> 9000 byte MTUs working. I have it set in plugin.ini, etc, my nodes >>>>>> have >>>>>> MTU=9000 on their interfaces, so does the network node. dnsmasq >>>>>> also is configured to set MTU=9000 on instances, which works. But >>>>>> I still can't ping with large packets to my instance: >>>>>> >>>>>> [weiler at stacker ~]$ ping 10.50.100.2 PING 10.50.100.2 (10.50.100.2) >>>>>> 56(84) bytes of data. >>>>>> 64 bytes from 10.50.100.2: icmp_seq=1 ttl=63 time=2.95 ms >>>>>> 64 bytes from 10.50.100.2: icmp_seq=2 ttl=63 time=1.14 ms >>>>>> 64 bytes from 10.50.100.2: icmp_seq=3 ttl=63 time=0.661 ms >>>>>> >>>>>> That works fine. 
This however doesn't work: >>>>>> >>>>>> [root at stacker ~]# ping -M do -s 8000 10.50.100.2 PING 10.50.100.2 >>>>>> (10.50.100.2) 8000(8028) bytes of data. >>>>>> From 10.50.100.2 icmp_seq=1 Frag needed and DF set (mtu = 1500) >>>>>> ping: local error: Message too long, mtu=1500 >>>>>> ping: local error: Message too long, mtu=1500 >>>>>> ping: local error: Message too long, mtu=1500 >>>>>> ping: local error: Message too long, mtu=1500 >>>>>> >>>>>> It looks like somehow the br-int interface for OVS isn't set at >>>>>> 9000, but I can't figure out how to do that... >>>>>> >>>>>> Here's ifconfig on my compute node: >>>>>> >>>>>> br-enp3s0f0: flags=4163 mtu 9000 >>>>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>>>> RX packets 2401432 bytes 359276713 (342.6 MiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 30 bytes 1572 (1.5 KiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> br-int: flags=4163 mtu 1500 >>>>>> inet6 fe80::64dc:94ff:fe35:db4c prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether 66:dc:94:35:db:4c txqueuelen 0 (Ethernet) >>>>>> RX packets 69 bytes 6866 (6.7 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 8 bytes 648 (648.0 B) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> enp3s0f0: flags=4419 mtu 9000 >>>>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether 0c:c4:7a:58:42:3e txqueuelen 1000 (Ethernet) >>>>>> RX packets 130174458 bytes 15334807929 (14.2 GiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 22919305 bytes 5859090420 (5.4 GiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> enp3s0f0.50: flags=4163 mtu 9000 >>>>>> inet 10.50.1.236 netmask 255.255.0.0 broadcast 10.50.255.255 >>>>>> inet6 fe80::ec4:7aff:fe58:423e prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether 0c:c4:7a:58:42:3e txqueuelen 0 (Ethernet) >>>>>> RX packets 38429352 bytes 5152853436 (4.7 GiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 419842 bytes 101161981 (96.4 MiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> lo: flags=73 mtu 65536 >>>>>> inet 127.0.0.1 netmask 255.0.0.0 >>>>>> inet6 ::1 prefixlen 128 scopeid 0x10 >>>>>> loop txqueuelen 0 (Local Loopback) >>>>>> RX packets 22141566 bytes 1185622090 (1.1 GiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 22141566 bytes 1185622090 (1.1 GiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> qbr247da3ed-a4: flags=4163 mtu 1500 >>>>>> inet6 fe80::5c8f:c0ff:fe79:bc11 prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether b6:1f:54:3f:3d:48 txqueuelen 0 (Ethernet) >>>>>> RX packets 16 bytes 1472 (1.4 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 8 bytes 648 (648.0 B) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> qbrf42ea01f-fe: flags=4163 mtu 1500 >>>>>> inet6 fe80::f484:f1ff:fe53:fb2e prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether c2:a6:d8:25:63:ea txqueuelen 0 (Ethernet) >>>>>> RX packets 15 bytes 1456 (1.4 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 8 bytes 648 (648.0 B) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> qvb247da3ed-a4: flags=4419 >>>>>> mtu 1500 >>>>>> inet6 fe80::b41f:54ff:fe3f:3d48 prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether b6:1f:54:3f:3d:48 
txqueuelen 1000 (Ethernet) >>>>>> RX packets 247 bytes 28323 (27.6 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 233 bytes 25355 (24.7 KiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> qvbf42ea01f-fe: flags=4419 >>>>>> mtu 1500 >>>>>> inet6 fe80::c0a6:d8ff:fe25:63ea prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether c2:a6:d8:25:63:ea txqueuelen 1000 (Ethernet) >>>>>> RX packets 377 bytes 57664 (56.3 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 333 bytes 38765 (37.8 KiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> qvo247da3ed-a4: flags=4419 >>>>>> mtu 1500 >>>>>> inet6 fe80::dcfa:f1ff:fe03:ee88 prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether de:fa:f1:03:ee:88 txqueuelen 1000 (Ethernet) >>>>>> RX packets 233 bytes 25355 (24.7 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 247 bytes 28323 (27.6 KiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> qvof42ea01f-fe: flags=4419 >>>>>> mtu 1500 >>>>>> inet6 fe80::f03e:35ff:fefe:e52 prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether f2:3e:35:fe:0e:52 txqueuelen 1000 (Ethernet) >>>>>> RX packets 333 bytes 38765 (37.8 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 377 bytes 57664 (56.3 KiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> tap247da3ed-a4: flags=4163 mtu 1500 >>>>>> inet6 fe80::fc16:3eff:fede:5eea prefixlen 64 scopeid >>>>>> 0x20 >>>>>> ether fe:16:3e:de:5e:ea txqueuelen 500 (Ethernet) >>>>>> RX packets 219 bytes 24239 (23.6 KiB) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 224 bytes 26661 (26.0 KiB) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> virbr0: flags=4099 mtu 1500 >>>>>> inet 192.168.122.1 netmask 255.255.255.0 broadcast >>>>>> 192.168.122.255 >>>>>> ether 52:54:00:c4:75:9f txqueuelen 0 (Ethernet) >>>>>> RX packets 0 bytes 0 (0.0 B) >>>>>> RX errors 0 dropped 0 overruns 0 frame 0 >>>>>> TX packets 0 bytes 0 (0.0 B) >>>>>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions >>>>>> 0 >>>>>> >>>>>> This is on RHEL 7.1. Any obvious way I can get all the >>>>>> intermediate bridges to MTU=9000? I've RTFM'd and googled to no avail... >>>>>> >>>>>> Here's the ovs-vsctl outout: >>>>>> >>>>>> [root at node-136 ~]# ovs-vsctl show >>>>>> 6f5a5f00-59e2-4420-aeaf-7ad464ead232 >>>>>> Bridge br-int >>>>>> fail_mode: secure >>>>>> Port br-int >>>>>> Interface br-int >>>>>> type: internal >>>>>> Port "qvo247da3ed-a4" >>>>>> tag: 1 >>>>>> Interface "qvo247da3ed-a4" >>>>>> Port "int-br-eth1" >>>>>> Interface "int-br-eth1" >>>>>> Port "int-br-enp3s0f0" >>>>>> Interface "int-br-enp3s0f0" >>>>>> type: patch >>>>>> options: {peer="phy-br-enp3s0f0"} >>>>>> Bridge "br-enp3s0f0" >>>>>> Port "enp3s0f0" >>>>>> Interface "enp3s0f0" >>>>>> Port "br-enp3s0f0" >>>>>> Interface "br-enp3s0f0" >>>>>> type: internal >>>>>> Port "phy-br-enp3s0f0" >>>>>> Interface "phy-br-enp3s0f0" >>>>>> type: patch >>>>>> options: {peer="int-br-enp3s0f0"} >>>>>> ovs_version: "2.3.1" >>>>>> >>>>>> Many thanks if anyone has any information on this topic! Or can >>>>>> point me to some documentation I missed... 
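As a stopgap for the br-int question above, the remaining 1500-byte devices can be raised by hand; a sketch only, and not persistent, since the agents recreate these devices (the device names are the ones from the ifconfig output above):

ip link set dev br-int mtu 9000
ip link set dev qbr247da3ed-a4 mtu 9000
ip link set dev qvb247da3ed-a4 mtu 9000
ip link set dev qvo247da3ed-a4 mtu 9000
ip link set dev tap247da3ed-a4 mtu 9000

The veth_mtu and network_device_mtu settings discussed elsewhere in this thread are the persistent fix.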
>>>>>> >>>>>> Thanks, >>>>>> erich >>>>>> >>>>>> _______________________________________________ >>>>>> Rdo-list mailing list >>>>>> Rdo-list at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>> >>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From trown at redhat.com Fri Oct 9 17:54:57 2015 From: trown at redhat.com (John Trowbridge) Date: Fri, 9 Oct 2015 13:54:57 -0400 Subject: [Rdo-list] RDO Test Day Oct 12-13 Message-ID: <5617FF71.5050208@redhat.com> We will be doing a second RDO test day for RDO Liberty next Monday and Tuesday. (October 12-13) As with the first test day, we will use the beta.rdoproject.org page to coordinate efforts [1]. Unlike the first test day, RDO Manager will be available as an installer for testing. We will use a forked version of the upstream tripleo docs [2]. These have a couple of patches to make the docs Liberty specific, since upstream has already moved on to Mitaka. The RDO Manager scenarios to test will be updated on the website before Monday. [3] Thanks for helping us kick the tires one more time before we release RDO Liberty! --trown [1] http://beta.rdoproject.org/testday/rdo-test-day-liberty-02/ [2] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ [3] http://beta.rdoproject.org/testday/testedsetups-liberty-02/ From rbowen at redhat.com Fri Oct 9 19:39:25 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 9 Oct 2015 15:39:25 -0400 Subject: [Rdo-list] RDO Test Day Oct 12-13 In-Reply-To: <5617FF71.5050208@redhat.com> References: <5617FF71.5050208@redhat.com> Message-ID: <561817ED.6070901@redhat.com> On 10/09/2015 01:54 PM, John Trowbridge wrote: > We will be doing a second RDO test day for RDO Liberty next Monday and > Tuesday. (October 12-13) John, thanks so much for taking care of this. I ended up being out today due to illness. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From weiler at soe.ucsc.edu Fri Oct 9 19:46:55 2015 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Fri, 9 Oct 2015 12:46:55 -0700 Subject: [Rdo-list] R: R: Jumbo MTU to instances in Kilo? In-Reply-To: <5617FBC8.8020702@redhat.com> References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu> <5615B045.1070503@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc> <56169860.2000404@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D60AF@ex022ims.fastwebit.ofc> <5617FBC8.8020702@redhat.com> Message-ID: <561819AF.30607@soe.ucsc.edu> > Lastly, has anyone ever run into problems when running (MTU - 50 bytes) > as the veth_mtu with VXLAN? I see documentation all over recommending > (MTU - 100 bytes), but I don't see why VXLAN should take that many > extra bytes. I've done extensive testing at VM MTU 8950 over a 9000 MTU > link, and never run into an issue. 
Is this just cargo-culting, or is > there a reason to give VXLAN additional headroom in some scenarios? Along those same lines... I'm running tenant segregation via regular VLAN segregation (not VXLAN or GRE), using all interfaces set to MTU=9000 (the instance all the way up to the main network gateway, all router interfaces, etc). My physical switches have MTU=9260 on all ports however. It seems to work and perform OK, but is there a recommendation on giving regular VLANs a little VLAN-tag headroom as well? Like, should I be setting my instance MTU to 8950 or something? From dsneddon at redhat.com Fri Oct 9 20:21:37 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 09 Oct 2015 13:21:37 -0700 Subject: [Rdo-list] R: R: Jumbo MTU to instances in Kilo? In-Reply-To: <561819AF.30607@soe.ucsc.edu> References: <56146894.2020107@soe.ucsc.edu> <172621117.34425463.1444207611143.JavaMail.zimbra@redhat.com> <56154EE8.6000908@soe.ucsc.edu> <5615AC55.2040701@soe.ucsc.edu> <5615B045.1070503@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D5892@ex022ims.fastwebit.ofc> <56169860.2000404@soe.ucsc.edu> <751EEF2DDA813143A5F20A73F5B78F382D60AF@ex022ims.fastwebit.ofc> <5617FBC8.8020702@redhat.com> <561819AF.30607@soe.ucsc.edu> Message-ID: <561821D1.9090909@redhat.com> On 10/09/2015 12:46 PM, Erich Weiler wrote: >> Lastly, has anyone ever run into problems when running (MTU - 50 bytes) >> as the veth_mtu with VXLAN? I see documentation all over recommending >> (MTU - 100 bytes), but I don't see why VXLAN should take that many >> extra bytes. I've done extensive testing at VM MTU 8950 over a 9000 MTU >> link, and never run into an issue. Is this just cargo-culting, or is >> there a reason to give VXLAN additional headroom in some scenarios? > > Along those same lines... I'm running tenant segregation via regular > VLAN segregation (not VXLAN or GRE), using all interfaces set to > MTU=9000 (the instance all the way up to the main network gateway, all > router interfaces, etc). My physical switches have MTU=9260 on all > ports however. > > It seems to work and perform OK, but is there a recommendation on > giving regular VLANs a little VLAN-tag headroom as well? Like, should > I be setting my instance MTU to 8950 or something? Relax, Neutron VLAN mode uses the line MTU, no headroom is needed. "What about the VLAN tag header?" you might ask. Well, every switch vendor reserves 4 bytes on top of whatever you set the MTU to on an interface to make room for VLAN tags. So if you set the MTU to 9000 on an interface, the actual MTU the switch hardware uses is 9004. This used to be a problem in the very early days of VLANs, where some Cisco switches wouldn't reserve the 4-bytes beyond a certain upper limit, so people got used to setting their host MTU 4-bytes less *just in case* they went through an affected switch. This hasn't been necessary for any switch that I'm aware of produced since about ca. 2000. Internally, OVS will strip the VLAN tag, and replace it with an internal VLAN tag that is used by OVS to separate each tenant's traffic. Because the outer tag gets replaced, no additional headroom is required. Under normal circumstances, the VM has no idea it is on a VLAN, so it also uses the full MTU of the physical interface with no additional overhead. Things are a little different if you are using Q-in-Q to pass VLAN tags down to the host, but you would know if you were doing that. 
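To verify the no-headroom claim end to end, one quick check (a sketch, assuming plain IPv4/ICMP with no VXLAN/GRE in the path; the address is the instance from earlier in the thread): on a 9000-byte link the largest unfragmented ICMP payload is 8972 bytes, since the IPv4 header takes 20 bytes and the ICMP header 8.

ping -M do -s 8972 10.50.100.2

With -M do the DF bit is set, so any hop with a smaller MTU rejects the probe ("Frag needed" or "Message too long") instead of silently fragmenting it.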
-- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From mohammed.arafa at gmail.com Fri Oct 9 21:43:15 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 9 Oct 2015 23:43:15 +0200 Subject: [Rdo-list] [rdo-manager] instackenv.json Message-ID: i seem to have hit this bug where node registration fails silently if instackenv.json is badly formatted thing is i cant seem to decipher where my config file is broken { "nodes": [ { "pm_password": "P at ssw0rd", "pm_type": "pxe_ipmitool", "mac": [ "00:17:a4:77:00:1c" ], "cpu": "2", "memory": "65536", "disk": "900", "arch": "x86_64", "pm_user": "root", "pm_addr": "192.168.11.213" }, ] } 1) https://downloads.plex.tv/plex-media-server/0.9.12.13.1464-4ccd2ca/plexmediaserver-0.9.12.13.1464-4ccd2ca.x86_64.rpm -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * From dsneddon at redhat.com Fri Oct 9 21:55:47 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 09 Oct 2015 14:55:47 -0700 Subject: [Rdo-list] [rdo-manager] instackenv.json In-Reply-To: References: Message-ID: <561837E3.3060602@redhat.com> On 10/09/2015 02:43 PM, Mohammed Arafa wrote: > i seem to have hit this bug where node registration fails silently if > instackenv.json is badly formatted > > thing is i cant seem to decipher where my config file is broken > > > { > "nodes": [ > { > "pm_password": "P at ssw0rd", > "pm_type": "pxe_ipmitool", > "mac": [ > "00:17:a4:77:00:1c" > ], > "cpu": "2", > "memory": "65536", > "disk": "900", > "arch": "x86_64", > "pm_user": "root", > "pm_addr": "192.168.11.213" > }, > ] > } > > > 1) > https://downloads.plex.tv/plex-media-server/0.9.12.13.1464-4ccd2ca/plexmediaserver-0.9.12.13.1464-4ccd2ca.x86_64.rpm > Do you only have the one node? Because I don't think you want a comma after the node. This validates: { "nodes": [ { "pm_password": "P at ssw0rd", "pm_type": "pxe_ipmitool", "mac": [ "00:17:a4:77:00:1c" ], "cpu": "2", "memory": "65536", "disk": "900", "arch": "x86_64", "pm_user": "root", "pm_addr": "192.168.11.213" } ] } By the way, when I'm doing OpenStack deployments, these resources help out a lot with both JSON and YAML validation: http://jsonlint.com http://yamllint.com http://jsontoyaml.com http://yamltojson.com -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From mohammed.arafa at gmail.com Sat Oct 10 00:34:40 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sat, 10 Oct 2015 02:34:40 +0200 Subject: [Rdo-list] [rdo-manager] instackenv.json In-Reply-To: <561837E3.3060602@redhat.com> References: <561837E3.3060602@redhat.com> Message-ID: Dan Thank you it worked Yes its only one node. I whittled it down to remove as many variables for errors as possible. [stack at rdomanager ~]$ openstack baremetal instackenv validate -f ~/instackenv.json System Power : off Power Overload : false Power Interlock : inactive Main Power Fault : false Power Control Fault : false Power Restore Policy : always-on Last Power Event : Chassis Intrusion : inactive Front-Panel Lockout : inactive Drive Fault : false Cooling/Fan Fault : false Front Panel Control : none SUCCESS: found 0 errors Now I have another problem, seems to be iptables related. So when I check the ironic-inspector service, it was stopped, and the only way i could get it to run was to reboot the machine. i verified it was started then did a bulk introspection. was i surprised when i saw that it failed again. 
and the service was stopped too. not sure why iptables would cause the service to crash and refuse to restart. Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.676 40112 INFO ironic_inspector.main [-] Enabled processing hooks: ['ramdisk_error', 'root_device_hint', 'scheduler', 'validate_interfaces'] Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.694 40112 WARNING ironic_inspector.firewall [-] iptables does not support -w flag, please update it to at least version 1.4.21 Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.740 40112 ERROR ironic_inspector.firewall [-] iptables ('-N', 'ironic-inspector') failed: Oct 10 01:40:25 rdomanager ironic-inspector: sudo: sorry, you must have a tty to run sudo Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 CRITICAL ironic_inspector [-] CalledProcessError: Command '('sudo', 'ironic-inspector-rootwrap', '/etc/ironic-inspector/rootwrap.conf', 'iptables', '-N', 'ironic-inspector')' returned non-zero exit status 1 Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector Traceback (most recent call last): Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector File "/usr/bin/ironic-inspector", line 10, in Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector sys.exit(main()) Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector File "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 388, in main Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector init() Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector File "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 325, in init Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector firewall.init() Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector File "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line 81, in init Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector _iptables('-N', CHAIN) Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector File "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line 42, in _iptables Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector subprocess.check_output(cmd, **kwargs) Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector raise CalledProcessError(retcode, cmd, output=output) Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector CalledProcessError: Command '('sudo', 'ironic-inspector-rootwrap', '/etc/ironic-inspector/rootwrap.conf', 'iptables', '-N', 'ironic-inspector')' returned non-zero exit status 1 Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 40112 ERROR ironic_inspector Oct 10 01:40:25 rdomanager systemd: openstack-ironic-inspector.service: main process exited, code=exited, status=1/FAILURE Oct 10 01:40:25 rdomanager systemd: Unit openstack-ironic-inspector.service entered failed 
state. On Fri, Oct 9, 2015 at 5:55 PM, Dan Sneddon wrote: > > On 10/09/2015 02:43 PM, Mohammed Arafa wrote: > > i seem to have hit this bug where node registration fails silently if > > instackenv.json is badly formatted > > > > thing is i cant seem to decipher where my config file is broken > > > > > > { > >? ? ?"nodes": [ > >? ? ? ? ?{ > >? ? ? ? ? ? "pm_password": "P at ssw0rd", > >? ? ? ? ? ? "pm_type": "pxe_ipmitool", > >? ? ? ? ? ? "mac": [ > >? ? ? ? ? ? ? ? ? ?"00:17:a4:77:00:1c" > >? ? ? ? ? ? ], > >? ? ? ? ? ? "cpu": "2", > >? ? ? ? ? ? "memory": "65536", > >? ? ? ? ? ? "disk": "900", > >? ? ? ? ? ? "arch": "x86_64", > >? ? ? ? ? ? "pm_user": "root", > >? ? ? ? ? ? "pm_addr": "192.168.11.213" > >? ? ?}, > >? ?] > > } > > > > > > 1) > > https://downloads.plex.tv/plex-media-server/0.9.12.13.1464-4ccd2ca/plexmediaserver-0.9.12.13.1464-4ccd2ca.x86_64.rpm > > > > Do you only have the one node? Because I don't think you want a comma > after the node. > > This validates: > > { > ? ? "nodes": [ > ? ? ? ? { > ? ? ? ? ? ?"pm_password": "P at ssw0rd", > ? ? ? ? ? ?"pm_type": "pxe_ipmitool", > ? ? ? ? ? ?"mac": [ > ? ? ? ? ? ? ? ? ? "00:17:a4:77:00:1c" > ? ? ? ? ? ?], > ? ? ? ? ? ?"cpu": "2", > ? ? ? ? ? ?"memory": "65536", > ? ? ? ? ? ?"disk": "900", > ? ? ? ? ? ?"arch": "x86_64", > ? ? ? ? ? ?"pm_user": "root", > ? ? ? ? ? ?"pm_addr": "192.168.11.213" > ? ? } > ? ] > } > > By the way, when I'm doing OpenStack deployments, these resources help > out a lot with both JSON and YAML validation: > > http://jsonlint.com > http://yamllint.com > http://jsontoyaml.com > http://yamltojson.com > > -- > Dan Sneddon? ? ? ? ?|? Principal OpenStack Engineer > dsneddon at redhat.com |? redhat.com/openstack > 650.254.4025? ? ? ? |? dsneddon:irc? ?@dxs:twitter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- [image] [image] [image] 805010942448935 GR750055912MA Link to me on LinkedIn From dsneddon at redhat.com Sat Oct 10 01:47:48 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 09 Oct 2015 18:47:48 -0700 Subject: [Rdo-list] Undercloud Deployment Error Message-ID: <56186E44.60400@redhat.com> I'm working with a Red Hat partner who is doing development against RDO, and he was trying to install the latest undercloud code from RDO Delorean passed CI. His Undercloud isn't installing: "Error: Could not find resource 'Keystone_domain[heat_domain]' for relationship from 'Class[Keystone::Roles::Admin]' on node instack" in the file /etc/puppet/manifests/puppet-stack-config.pp:295 He looked further into the code to see what was causing this, and found something which has changed in the last few weeks: "Service['keystone'] -> Class['::keystone::roles::admin'] -> Keystone_domain['heat_domain']" -> file /etc/puppet/manifests/puppet-stack-config.pp:295 [NEW CODE] as opposed to : "Service['keystone'] -> Class['::keystone::roles::admin'] -> Exec['heat_domain_create']" -> file /etc/puppet/manifests/puppet-stack-config.pp: [OLD DEPLOYMENT - couple of weeks ago] Does anyone know what is causing this error, or whether this is a bug which has crept in to "openstack undercloud install"? 
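One way to narrow it down (a sketch, assuming the usual RDO module path; the Keystone_domain resource type, when present, is shipped by the puppet-keystone module):

# does the installed puppet-keystone module define the resource type
# the new manifest references?
ls /etc/puppet/modules/keystone/lib/puppet/type/ | grep -i domain

# and which openstack-puppet-modules build delivered it:
rpm -q openstack-puppet-modules

If no keystone_domain type file shows up, the instack-undercloud manifests and the puppet modules in the snapshot have likely drifted apart.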
--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter

From jrichar1 at ball.com Thu Oct 8 15:19:44 2015
From: jrichar1 at ball.com (Richards, Jeff)
Date: Thu, 8 Oct 2015 15:19:44 +0000
Subject: [Rdo-list] Heat stack create failed on overcloud deploy in tripleO
Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B467C7AF5@EX2010-CO-03.AERO.BALL.com>

Trying to stand up a cloud using tripleO on CentOS 7 with libvirt+kvm virtual machines. Following along the tripleO documentation just to get something stood up for learning purposes, so using all the default options and installing from repos as per the docs. Seems I am almost there. The overcloud deploy from the undercloud controller does not complete successfully:

[stack at instack ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Stack failed with status: Resource CREATE failed: resources.ComputePuppetDeployment: resources.ComputeNodesPostDeployment.Error: resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
Heat Stack create failed.

Nova list shows the controller and novacompute running, heat stack-list shows the stack_status as "UPDATE_FAILED". Logging on to the overcloud controller and poking around, I traced it down to an error in /var/lib/heat-config/heat-config-puppet/0d44cd1d-799b-4dcd-b09c-538a89bf3b7a.pp:467:

package_manifest{$package_manifest_name: ensure => present}

Error is Invalid resource type package_manifest (from /var/log/messages on the overcloud controller):

template[/etc/puppet/modules/keepalived/templates/global_config.erb]:3
 (at /etc/puppet/modules/keepalived/templates/global_config.erb:3:in `block in result')
Warning: notify is a metaparam; this value will inherit to all contained resources in the keepalived::instance definition
Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type package_manifest at /var/lib/heat-config/heat-config-puppet/0d44cd1d-799b-4dcd-b09c-538a89bf3b7a.pp:467 on node overcloud-controller-0.localdomain
Wrapped exception:
Invalid resource type package_manifest
Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type package_manifest at /var/lib/heat-config/heat-config-puppet/0d44cd1d-799b-4dcd-b09c-538a89bf3b7a.pp:467 on node overcloud-controller-0.localdomain

I do not know puppet and have spent nearly a full day searching for a workaround. Any tips would be appreciated!

Jeff Richards

This message and any enclosures are intended only for the addressee. Please notify the sender by email if you are not the intended recipient. If you are not the intended recipient, you may not use, copy, disclose, or distribute this message or its contents or enclosures to any other person and any such actions may be unlawful. Ball reserves the right to monitor and review all messages and enclosures sent to or from this email address.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From celik.esra at tubitak.gov.tr Fri Oct 9 07:15:18 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Fri, 9 Oct 2015 10:15:18 +0300 (EEST)
Subject: [Rdo-list] Maintenance mode for deployment
Message-ID: <856940727.1952940.1444374918883.JavaMail.zimbra@tubitak.gov.tr>

Hi all,

After a successful introspection I see my nodes in available state and maintenance=True

[stack at undercloud ~]$ openstack baremetal introspection bulk start
Setting available nodes to manageable...
Starting introspection of node: 36777b8b-401e-47e9-9eb0-8c2f6b372da6
Starting introspection of node: 8de0f3eb-3581-4080-bea4-28125bd7ee1a
Waiting for introspection to finish...
Introspection for UUID 36777b8b-401e-47e9-9eb0-8c2f6b372da6 finished successfully.
Introspection for UUID 8de0f3eb-3581-4080-bea4-28125bd7ee1a finished successfully.
Setting manageable nodes to available...
Node 36777b8b-401e-47e9-9eb0-8c2f6b372da6 has been set to available.
Node 8de0f3eb-3581-4080-bea4-28125bd7ee1a has been set to available.
Introspection completed.

[stack at undercloud ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 36777b8b-401e-47e9-9eb0-8c2f6b372da6 | None | None          | power off   | available          | True        |
| 8de0f3eb-3581-4080-bea4-28125bd7ee1a | None | None          | power off   | available          | True        |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+

However when I start deploying I get the following error

[stack at undercloud ~]$ openstack overcloud deploy --templates
Deployment failed: Not enough nodes - available: 0, requested: 2

In tripleoclient/utils.py I noticed that an available node means that it is not in maintenance mode:

423:         available = len(baremetal_client.node.list(associated=False,
424:                                                    maintenance=False))

Should I set my nodes' maintenance = false before deployment? Actually this is not mentioned in the doc (http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#prepare-your-environment)

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shayne.alone at gmail.com Sat Oct 10 14:50:46 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Sat, 10 Oct 2015 18:20:46 +0330
Subject: [Rdo-list] Undercloud Installation Version
Message-ID:

Hi, what will happen if I just enable:

- Enable last known good RDO Trunk Delorean repository for core openstack packages
- Enable the Delorean Deps repository

I mean not to enable:

- Enable latest RDO Trunk Delorean repository only for the TripleO packages

Does it mean that I am strictly stuck to the stable CI-passed branch? Or is it a misconfiguration, and should I reinstall CentOS 7 (minimal) to be able to reinstall the undercloud?

Does it affect the OpenStack version of the overcloud? (Juno vs Kilo)

Sincerely,
Ali R. Taleghani
@linkedIn

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shayne.alone at gmail.com Sun Oct 11 06:16:59 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Sun, 11 Oct 2015 06:16:59 +0000
Subject: [Rdo-list] Ironic sqlite db
Message-ID:

After a clean installation I hit errors saying the Ironic sqlite database doesn't contain any tables; I checked manually and the db file was indeed empty...
I was also unable to create the tables via ironic-dbsync :-/
http://paste.ubuntu.com/12748615/

--
Sincerely,
Ali R. Taleghani

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shayne.alone at gmail.com Sun Oct 11 06:18:40 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Sun, 11 Oct 2015 06:18:40 +0000
Subject: [Rdo-list] Ironic sqlite db
In-Reply-To:
References:
Message-ID:

This is the create_schema result as well...
http://paste.ubuntu.com/12748638/

On Sun, Oct 11, 2015 at 9:46 AM AliReza Taleghani wrote:
> After a clean installation I hit errors saying the Ironic sqlite database doesn't contain any tables; I checked manually and the db file was indeed empty...
> I was also unable to create the tables via ironic-dbsync :-/
> http://paste.ubuntu.com/12748615/
>
> --
> Sincerely,
> Ali R. Taleghani

--
Sincerely,
Ali R. Taleghani

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shayne.alone at gmail.com Sun Oct 11 06:42:08 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Sun, 11 Oct 2015 06:42:08 +0000
Subject: [Rdo-list] Ironic sqlite db
In-Reply-To:
References:
Message-ID:

This also caused bare metal hardware detection to fail, as follows:
http://paste.ubuntu.com/12748714/

I think it can't load the default sqlite schema :-/ Where can I find the SQL template to import manually into inspector.sqlite?

[root at undercloud ~]# sqlite3 /var/lib/ironic-inspector/inspector.sqlite ".dump"
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
COMMIT;

On Sun, Oct 11, 2015 at 9:48 AM AliReza Taleghani wrote:
> This is the create_schema result as well...
> http://paste.ubuntu.com/12748638/
>
> On Sun, Oct 11, 2015 at 9:46 AM AliReza Taleghani
> wrote:
>
>> After a clean installation I hit errors saying the Ironic sqlite database doesn't contain any tables; I checked manually and the db file was indeed empty...
>> I was also unable to create the tables via ironic-dbsync :-/
>> http://paste.ubuntu.com/12748615/
>>
>> --
>> Sincerely,
>> Ali R. Taleghani

--
Sincerely,
Ali R. Taleghani

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ibravo at ltgfederal.com Mon Oct 12 04:31:42 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Mon, 12 Oct 2015 00:31:42 -0400
Subject: [Rdo-list] RDO Test Day Oct 12-13
In-Reply-To: <561817ED.6070901@redhat.com>
References: <5617FF71.5050208@redhat.com> <561817ED.6070901@redhat.com>
Message-ID: <561B37AE.9000406@ltgfederal.com>

Just tested RDO Manager with Ceph, both virtual and baremetal. Neither worked. Notes on the etherpad.

Ignacio Bravo
LTG Federal Inc

On 10/09/2015 03:39 PM, Rich Bowen wrote:
>
>
> On 10/09/2015 01:54 PM, John Trowbridge wrote:
>> We will be doing a second RDO test day for RDO Liberty next Monday and
>> Tuesday. (October 12-13)
>
>
> John, thanks so much for taking care of this. I ended up being out
> today due to illness.
> From dtantsur at redhat.com Mon Oct 12 08:32:28 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 12 Oct 2015 10:32:28 +0200 Subject: [Rdo-list] Ironic sqlite db In-Reply-To: References: Message-ID: <561B701C.209@redhat.com> On 10/11/2015 08:16 AM, AliReza Taleghani wrote: > after a clean installation I faced errors with tell ironic sqlite table > don't contains tables as I check manually and it was empty sqlite db file... > I also was not able to create tables via [ironic-dbstnc] too :-/ > http://paste.ubuntu.com/12748615/ Hi! The project you're debugging is ironic-inspector, not ironic. No surprise that ironic-dbsync does not work, please use ironic-inspector-dbsync instead. > > -- > Sincerely, > Ali R. Taleghani > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From dtantsur at redhat.com Mon Oct 12 08:33:57 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 12 Oct 2015 10:33:57 +0200 Subject: [Rdo-list] RDO Test Day Oct 12-13 In-Reply-To: <5617FF71.5050208@redhat.com> References: <5617FF71.5050208@redhat.com> Message-ID: <561B7075.2060106@redhat.com> On 10/09/2015 07:54 PM, John Trowbridge wrote: > We will be doing a second RDO test day for RDO Liberty next Monday and > Tuesday. (October 12-13) > > As with the first test day, we will use the beta.rdoproject.org page to > coordinate efforts [1]. > > Unlike the first test day, RDO Manager will be available as an installer > for testing. We will use a forked version of the upstream tripleo docs > [2]. These have a couple of patches to make the docs Liberty specific, > since upstream has already moved on to Mitaka. > > The RDO Manager scenarios to test will be updated on the website before > Monday. [3] > > Thanks for helping us kick the tires one more time before we release RDO > Liberty! > > --trown > > [1] http://beta.rdoproject.org/testday/rdo-test-day-liberty-02/ > [2] > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ Is this documentation even updated? I thought we've moved to http://docs.openstack.org/developer/tripleo-docs > [3] http://beta.rdoproject.org/testday/testedsetups-liberty-02/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mpavlase at redhat.com Mon Oct 12 08:36:27 2015 From: mpavlase at redhat.com (=?windows-1252?Q?Martin_Pavl=E1sek?=) Date: Mon, 12 Oct 2015 10:36:27 +0200 Subject: [Rdo-list] RDO Test Day Oct 12-13 In-Reply-To: <5617FF71.5050208@redhat.com> References: <5617FF71.5050208@redhat.com> Message-ID: <561B710B.1030907@redhat.com> Hi all, I have a question - how can I assign myself to some chosen testing topology/configuration? I'm asking for this so obvious thing, because I don't see there any link like Sign in or similar to enter and have permission to edit wiki page (anonymous user can't do that). It's because test day wiki page [1] is landing on domain http://beta.rdoproject.org and not http://www.rdoproject.org that provides log in form. I'll take this one: 1 Control 1 Compute, virtual setup on CentOS 7 (RDO Manager Based Installation). Thanks, Martin On 09/10/15 19:54, John Trowbridge wrote: > We will be doing a second RDO test day for RDO Liberty next Monday and > Tuesday. 
(October 12-13) > > As with the first test day, we will use the beta.rdoproject.org page to > coordinate efforts [1]. > > Unlike the first test day, RDO Manager will be available as an installer > for testing. We will use a forked version of the upstream tripleo docs > [2]. These have a couple of patches to make the docs Liberty specific, > since upstream has already moved on to Mitaka. > > The RDO Manager scenarios to test will be updated on the website before > Monday. [3] > > Thanks for helping us kick the tires one more time before we release RDO > Liberty! > > --trown > > [1] http://beta.rdoproject.org/testday/rdo-test-day-liberty-02/ > [2] > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > [3] http://beta.rdoproject.org/testday/testedsetups-liberty-02/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Mon Oct 12 08:37:50 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 12 Oct 2015 10:37:50 +0200 Subject: [Rdo-list] RDO Test Day Oct 12-13 In-Reply-To: <561B7075.2060106@redhat.com> References: <5617FF71.5050208@redhat.com> <561B7075.2060106@redhat.com> Message-ID: >> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > > Is this documentation even updated? I thought we've moved to > http://docs.openstack.org/developer/tripleo-docs Upstream docs are for Trunk, John forked it to https://github.com/redhat-openstack/tripleo-docs and changed to use stable/liberty Delorean. Cheers, Alan From dtantsur at redhat.com Mon Oct 12 08:39:52 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 12 Oct 2015 10:39:52 +0200 Subject: [Rdo-list] RDO Test Day Oct 12-13 In-Reply-To: References: <5617FF71.5050208@redhat.com> <561B7075.2060106@redhat.com> Message-ID: <561B71D8.6030702@redhat.com> On 10/12/2015 10:37 AM, Alan Pevec wrote: >>> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ >> >> Is this documentation even updated? I thought we've moved to >> http://docs.openstack.org/developer/tripleo-docs > > Upstream docs are for Trunk, John forked it to > https://github.com/redhat-openstack/tripleo-docs and changed to use > stable/liberty Delorean. Good to know, thanks > > Cheers, > Alan > From apevec at gmail.com Mon Oct 12 08:41:42 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 12 Oct 2015 10:41:42 +0200 Subject: [Rdo-list] RDO Test Day Oct 12-13 In-Reply-To: <561B710B.1030907@redhat.com> References: <5617FF71.5050208@redhat.com> <561B710B.1030907@redhat.com> Message-ID: > because I don't see there any link like Sign in or similar to enter and have permission to edit > wiki page (anonymous user can't do that). The new website is backed by git repository https://github.com/redhat-openstack/website where you can send pull-requests. Also at the bottom of every page there's "Edit this page on GitHub" link. 
Cheers, Alan From trown at redhat.com Mon Oct 12 12:46:02 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 12 Oct 2015 08:46:02 -0400 Subject: [Rdo-list] Maintenance mode for deployment In-Reply-To: <856940727.1952940.1444374918883.JavaMail.zimbra@tubitak.gov.tr> References: <856940727.1952940.1444374918883.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <561BAB8A.9020106@redhat.com> On 10/09/2015 03:15 AM, Esra Celik wrote: > > Hi all, > > After a succusful introspection I see my nodes in available state and maintenance=True > > [stack at undercloud ~]$ openstack baremetal introspection bulk startSetting available nodes to manageable...Starting introspection of node: 36777b8b-401e-47e9-9eb0-8c2f6b372da6Starting introspection of node: 8de0f3eb-3581-4080-bea4-28125bd7ee1aWaiting for introspection to finish...Introspection for UUID 36777b8b-401e-47e9-9eb0-8c2f6b372da6 finished successfully.Introspection for UUID 8de0f3eb-3581-4080-bea4-28125bd7ee1a finished successfully.Setting manageable nodes to available...Node 36777b8b-401e-47e9-9eb0-8c2f6b372da6 has been set to available.Node 8de0f3eb-3581-4080-bea4-28125bd7ee1a has been set to available.Introspection completed. > [stack at undercloud ~]$ ironic node-list+--------------------------------------+------+---------------+-------------+--------------------+-------------+| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |+--------------------------------------+------+---------------+-------------+--------------------+-------------+| 36777b8b-401e-47e9-9eb0-8c2f6b372da6 | None | None | power off | available | True || 8de0f3eb-3581-4080-bea4-28125bd7ee1a | None | None | power off | available | True |+--------------------------------------+------+---------------+-------------+--------------------+-------------+ > However when I start deploying I get the following error > > [stack at undercloud ~]$ openstack overcloud deploy --templatesDeployment failed: Not enough nodes - available: 0, requested: 2In tripleoclient/utils.py I noticed that available node means that it is not in maintenance mode: > 423: available = len(baremetal_client.node.list(associated=False,424: maintenance=False)) > Should I set my node's maintenance = false before deployment? > Actually this is not mentioned in doc (http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#prepare-your-environment) The nodes being in maintenance is indicative of an actual issue. They should not be in maintenance after doing introspection. The ironic-conductor logs would be a good place to look for why the nodes were put into maintenance mode. > > > Esra ÇEL?K > TÜB?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From trown at redhat.com Mon Oct 12 12:50:26 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 12 Oct 2015 08:50:26 -0400 Subject: [Rdo-list] [rdo-manager] instackenv.json In-Reply-To: References: <561837E3.3060602@redhat.com> Message-ID: <561BAC92.3010500@redhat.com> On 10/09/2015 08:34 PM, Mohammed Arafa wrote: > Dan > > Thank you it worked > Yes its only one node. I whittled it down to remove as many variables > for errors as possible. 
> > [stack at rdomanager ~]$ openstack baremetal instackenv validate -f > ~/instackenv.json > System Power : off > Power Overload : false > Power Interlock : inactive > Main Power Fault : false > Power Control Fault : false > Power Restore Policy : always-on > Last Power Event : > Chassis Intrusion : inactive > Front-Panel Lockout : inactive > Drive Fault : false > Cooling/Fan Fault : false > Front Panel Control : none > SUCCESS: found 0 errors > > > Now I have another problem, seems to be iptables related. So when I > check the ironic-inspector service, it was stopped, and the only way i > could get it to run was to reboot the machine. i verified it was > started then did a bulk introspection. was i surprised when i saw that > it failed again. and the service was stopped too. > > not sure why iptables would cause the service to crash and refuse to restart. > > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.676 > 40112 INFO ironic_inspector.main [-] Enabled processing hooks: > ['ramdisk_error', 'root_device_hint', 'scheduler', > 'validate_interfaces'] > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.694 > 40112 WARNING ironic_inspector.firewall [-] iptables does not support > -w flag, please update it to at least version 1.4.21 > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.740 > 40112 ERROR ironic_inspector.firewall [-] iptables ('-N', > 'ironic-inspector') failed: > Oct 10 01:40:25 rdomanager ironic-inspector: sudo: sorry, you must > have a tty to run sudo This error ^ has been fixed in the latest packaging for ironic-inspector and is the cause of the crash. This means the latest repo with the includepkgs whitelist was not used. The repo setup instructions in the documentation[1], are exactly what we use in CI, so YMMV if using any other repo combination. 
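Roughly, the pinning looks like the fragment below. This is illustrative only: the baseurl and the exact includepkgs whitelist here are placeholders, and the authoritative repo file comes from the docs in [1].

[delorean-current]
name=delorean-current
# take the real URL from the docs in [1]:
baseurl=...
enabled=1
gpgcheck=0
priority=1
# hypothetical whitelist; only TripleO-related packages may come from
# this repo, everything else stays on the pinned snapshot:
includepkgs=instack*,openstack-tripleo*,python-tripleoclient,diskimage-builder

With includepkgs in place, yum will only ever pull the whitelisted packages from this repo, which is what keeps the rest of OpenStack on the known-good snapshot.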
[1] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 CRITICAL ironic_inspector [-] CalledProcessError: Command > '('sudo', 'ironic-inspector-rootwrap', > '/etc/ironic-inspector/rootwrap.conf', 'iptables', '-N', > 'ironic-inspector')' returned non-zero exit status 1 > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector Traceback (most recent call last): > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector File "/usr/bin/ironic-inspector", line > 10, in > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector sys.exit(main()) > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector File > "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 388, > in main > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector init() > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector File > "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 325, > in init > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector firewall.init() > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector File > "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line > 81, in init > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector _iptables('-N', CHAIN) > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector File > "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line > 42, in _iptables > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector subprocess.check_output(cmd, > **kwargs) > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector File > "/usr/lib64/python2.7/subprocess.py", line 575, in check_output > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector raise CalledProcessError(retcode, > cmd, output=output) > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector CalledProcessError: Command '('sudo', > 'ironic-inspector-rootwrap', '/etc/ironic-inspector/rootwrap.conf', > 'iptables', '-N', 'ironic-inspector')' returned non-zero exit status 1 > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > 40112 ERROR ironic_inspector > Oct 10 01:40:25 rdomanager systemd: > openstack-ironic-inspector.service: main process exited, code=exited, > status=1/FAILURE > Oct 10 01:40:25 rdomanager systemd: Unit > openstack-ironic-inspector.service entered failed state. 
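A quick way to verify John's diagnosis on a live undercloud, a sketch
assuming systemd and plain yum (nothing here beyond the package and service
names shown in the log above):

    # did the installed inspector actually come from the whitelisted repo?
    yum list installed openstack-ironic-inspector
    # is iptables new enough for the -w flag the log warns about?
    iptables --version
    # restart and tail the service to confirm the crash is gone
    sudo systemctl restart openstack-ironic-inspector
    sudo journalctl -u openstack-ironic-inspector -f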
> > > On Fri, Oct 9, 2015 at 5:55 PM, Dan Sneddon wrote: >> >> On 10/09/2015 02:43 PM, Mohammed Arafa wrote: >>> i seem to have hit this bug where node registration fails silently if >>> instackenv.json is badly formatted >>> >>> thing is i cant seem to decipher where my config file is broken >>> >>> >>> { >>> "nodes": [ >>> { >>> "pm_password": "P at ssw0rd", >>> "pm_type": "pxe_ipmitool", >>> "mac": [ >>> "00:17:a4:77:00:1c" >>> ], >>> "cpu": "2", >>> "memory": "65536", >>> "disk": "900", >>> "arch": "x86_64", >>> "pm_user": "root", >>> "pm_addr": "192.168.11.213" >>> }, >>> ] >>> } >>> >>> >>> 1) >>> https://downloads.plex.tv/plex-media-server/0.9.12.13.1464-4ccd2ca/plexmediaserver-0.9.12.13.1464-4ccd2ca.x86_64.rpm >>> >> >> Do you only have the one node? Because I don't think you want a comma >> after the node. >> >> This validates: >> >> { >> "nodes": [ >> { >> "pm_password": "P at ssw0rd", >> "pm_type": "pxe_ipmitool", >> "mac": [ >> "00:17:a4:77:00:1c" >> ], >> "cpu": "2", >> "memory": "65536", >> "disk": "900", >> "arch": "x86_64", >> "pm_user": "root", >> "pm_addr": "192.168.11.213" >> } >> ] >> } >> >> By the way, when I'm doing OpenStack deployments, these resources help >> out a lot with both JSON and YAML validation: >> >> http://jsonlint.com >> http://yamllint.com >> http://jsontoyaml.com >> http://yamltojson.com >> >> -- >> Dan Sneddon | Principal OpenStack Engineer >> dsneddon at redhat.com | redhat.com/openstack >> 650.254.4025 | dsneddon:irc @dxs:twitter >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > From trown at redhat.com Mon Oct 12 13:07:57 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 12 Oct 2015 09:07:57 -0400 Subject: [Rdo-list] Undercloud Installation Version In-Reply-To: References: Message-ID: <561BB0AD.3010609@redhat.com> On 10/10/2015 10:50 AM, AliReza Taleghani wrote: > Hi, what will happen if I just enable: > > - Enable last known good RDO Trunk Delorean repository for core > openstack packages > - Enable the Delorean Deps repository > > I mean not to enable: > > - Enable latest RDO Trunk Delorean repository only for the TripleO > packages > > > dose it mean that I am strictly sticked to stable ci passed branch? > or it's miss configration and I should reinstall Cent OS 7 (minimal) to be > able to reinstall undercloud? I replied on the other thread as well, but it is best to follow exactly the repo setup in the documentation, since that is the only thing we run CI against. > > dose it reflect on openstack version of overcloud? (Juno vs Kilo) No, if you use centos7-liberty repos you get liberty packages. > > > > > Sincerely, > Ali R. 
Taleghani > @linkedIn > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Mon Oct 12 14:11:29 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 12 Oct 2015 10:11:29 -0400 Subject: [Rdo-list] RDO blogs, week of October 12 Message-ID: <561BBF91.2040306@redhat.com> As usual, I've put a roundup of blog posts by various RDO enthusiasts up at http://beta.rdoproject.org/blog/2015/10/rdo-blog-roundup-week-of-october-12/ If you've got anything to add, you can contribute to this resource via Github, at https://github.com/redhat-openstack/website By the way, we're getting very close to pushing out the new website. You can see the launch milestone tickets at https://github.com/redhat-openstack/website/milestones/Launch and possibly help us get there a little bit faster. Thanks again for your patience. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ukalifon at redhat.com Mon Oct 12 14:57:17 2015 From: ukalifon at redhat.com (Udi Kalifon) Date: Mon, 12 Oct 2015 17:57:17 +0300 Subject: [Rdo-list] Test day issue: parse error Message-ID: Hello, We are encountering an error during instack-virt-setup: ++ sudo virsh net-list --all --persistent ++ grep default ++ awk 'BEGIN{OFS=":";} {print $2,$3}' + default_net=active:yes + state=active + autostart=yes + '[' active '!=' active ']' + '[' yes '!=' yes ']' Domain seed has been undefined seed VM not running seed VM not defined Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301 Seed VM created with MAC 52:54:00:05:af:0f parse error: Invalid string: control characters from U+0000 through U+001F must be escaped at line 32, column 30 Any ideas? I don't know which file causes this parse error, it's not the instack-virt-setup. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Oct 12 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 12 Oct 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20151012150003.9B0B260A3FD9@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-10-14 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From sam at cassiba.com Mon Oct 12 14:56:25 2015 From: sam at cassiba.com (Samuel Cassiba) Date: Mon, 12 Oct 2015 07:56:25 -0700 Subject: [Rdo-list] instack-virt-setup depends on EPEL for Liberty Message-ID: Clean install of CentOS 7.1 (as of yesterday) and following along with http://docs.openstack.org/developer/tripleo-docs/ for Liberty. When I go to run instack-virt-setup, I'm prompted to configure EPEL: + tripleo install-dependencies EPEL repository is required to install python-pip for CentOS. See http://fedoraproject.org/wiki/EPEL When I look at /usr/libexec/openstack-tripleo/install-dependencies, I find it does call for EPEL: if [ "$TRIPLEO_OS_FAMILY" = "redhat" ]; then # For CentOS, python-pip and jq are in EPEL if [ "$TRIPLEO_OS_DISTRO" = "centos" ] && [ ! 
-f /etc/yum.repos.d/epel.repo ]; then echo EPEL repository is required to install python-pip for CentOS. echo See http://fedoraproject.org/wiki/EPEL exit 1 fi sudo -E yum install -y python-lxml libvirt-python libvirt qemu-img qemu-kvm git python-pip openssl-devel python-devel gcc audit python-virtualenv openvswitch python-yaml net-tools redhat-lsb-core libxslt-devel jq openssh-server libffi-devel which glusterfs-api python-netaddr sudo service libvirtd restart sudo service openvswitch restart sudo chkconfig openvswitch on fi Indeed after removing the if statement for EPEL, instack-virt-setup appears to proceed without issue and provisions the instack VM, but that VM won't allow me to login as root. Any pointers would be great. -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon Oct 12 15:06:21 2015 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 12 Oct 2015 11:06:21 -0400 Subject: [Rdo-list] Test day issue: parse error In-Reply-To: References: Message-ID: Several folks have been hitting this. You most likely have a version of the rpm jq on the box that is not compatible with rdo-manager yum remove jq on the baremetal virtual host, clean up any other install artifacts and restart. On Mon, Oct 12, 2015 at 10:57 AM, Udi Kalifon wrote: > Hello, > > We are encountering an error during instack-virt-setup: > > ++ sudo virsh net-list --all --persistent > ++ grep default > ++ awk 'BEGIN{OFS=":";} {print $2,$3}' > + default_net=active:yes > + state=active > + autostart=yes > + '[' active '!=' active ']' > + '[' yes '!=' yes ']' > Domain seed has been undefined > > > seed VM not running > > seed VM not defined > Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301 > Seed VM created with MAC 52:54:00:05:af:0f > parse error: Invalid string: control characters from U+0000 through U+001F > must be escaped at line 32, column 30 > > Any ideas? I don't know which file causes this parse error, it's not the > instack-virt-setup. > > Thanks. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tshefi at redhat.com Mon Oct 12 15:10:46 2015 From: tshefi at redhat.com (Tzach Shefi) Date: Mon, 12 Oct 2015 18:10:46 +0300 Subject: [Rdo-list] Overcloud deploy stuck for a long time Message-ID: Hi, Server running centos 7.1, vm running for undercloud got up to overcloud deploy stage. It looks like its stuck nothing advancing for a while. Ideas, what to check? 
[stack at instack ~]$ openstack overcloud deploy --templates Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates [91665.696658] device vnet2 entered promiscuous mode [91665.781346] device vnet3 entered promiscuous mode [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff [91767.799404] kvm: zapping shadow pages for mmio generation wraparound [91767.880480] kvm: zapping shadow pages for mmio generation wraparound [91768.957761] device vnet2 left promiscuous mode [91769.799446] device vnet3 left promiscuous mode [91771.223273] device vnet3 entered promiscuous mode [91771.232996] device vnet2 entered promiscuous mode [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff [91801.270510] device vnet2 left promiscuous mode Thanks Tzach -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Mon Oct 12 15:12:58 2015 From: mcornea at redhat.com (Marius Cornea) Date: Mon, 12 Oct 2015 11:12:58 -0400 (EDT) Subject: [Rdo-list] Git checkouts for the puppet modules required In-Reply-To: <2125575169.40428614.1444662423709.JavaMail.zimbra@redhat.com> Message-ID: <553834669.40434153.1444662778364.JavaMail.zimbra@redhat.com> Hi everyone, We hit today the following error during the undercloud installation: Error: Could not find class ::ironic::inspector for instack on node instack In order to get past it you need to run 'export DIB_INSTALLTYPE_puppet_modules=source' before running 'openstack undercloud install'. Thank., Marius From trown at redhat.com Mon Oct 12 15:41:02 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 12 Oct 2015 11:41:02 -0400 Subject: [Rdo-list] Git checkouts for the puppet modules required In-Reply-To: <553834669.40434153.1444662778364.JavaMail.zimbra@redhat.com> References: <553834669.40434153.1444662778364.JavaMail.zimbra@redhat.com> Message-ID: <561BD48E.10207@redhat.com> On 10/12/2015 11:12 AM, Marius Cornea wrote: > Hi everyone, > > We hit today the following error during the undercloud installation: Error: Could not find class ::ironic::inspector for instack on node instack > > In order to get past it you need to run 'export DIB_INSTALLTYPE_puppet_modules=source' before running 'openstack undercloud install'. Thanks for sending this out, the documentation has been updated with this workaround: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > > Thank., > Marius > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From trown at redhat.com Mon Oct 12 15:46:50 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 12 Oct 2015 11:46:50 -0400 Subject: [Rdo-list] Test day issue: parse error In-Reply-To: References: Message-ID: <561BD5EA.6000800@redhat.com> On 10/12/2015 11:06 AM, Wesley Hayutin wrote: > Several folks have been hitting this. > You most likely have a version of the rpm jq on the box that is not > compatible with rdo-manager > yum remove jq on the baremetal virtual host, clean up any other install > artifacts and restart. > Indeed, this is an issue with jq 1.5. There is a fix to tripleo for this [1], but it is blocked by tripleoci being unable to build the openstack-tripleo package. Once the revert [2] merges we should be good to get the jq 1.5 patch to pass CI. 
[1] https://review.openstack.org/#/c/228034 [2] https://review.openstack.org/#/c/233686 > On Mon, Oct 12, 2015 at 10:57 AM, Udi Kalifon wrote: > >> Hello, >> >> We are encountering an error during instack-virt-setup: >> >> ++ sudo virsh net-list --all --persistent >> ++ grep default >> ++ awk 'BEGIN{OFS=":";} {print $2,$3}' >> + default_net=active:yes >> + state=active >> + autostart=yes >> + '[' active '!=' active ']' >> + '[' yes '!=' yes ']' >> Domain seed has been undefined >> >> >> seed VM not running >> >> seed VM not defined >> Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301 >> Seed VM created with MAC 52:54:00:05:af:0f >> parse error: Invalid string: control characters from U+0000 through U+001F >> must be escaped at line 32, column 30 >> >> Any ideas? I don't know which file causes this parse error, it's not the >> instack-virt-setup. >> >> Thanks. >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ltoscano at redhat.com Mon Oct 12 15:56:21 2015 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 12 Oct 2015 17:56:21 +0200 Subject: [Rdo-list] Git checkouts for the puppet modules required In-Reply-To: <561BD48E.10207@redhat.com> References: <553834669.40434153.1444662778364.JavaMail.zimbra@redhat.com> <561BD48E.10207@redhat.com> Message-ID: <1617206.Pym4mmV0tS@whitebase.usersys.redhat.com> On Monday 12 of October 2015 11:41:02 John Trowbridge wrote: > On 10/12/2015 11:12 AM, Marius Cornea wrote: > > Hi everyone, > > > > We hit today the following error during the undercloud installation: > > Error: Could not find class ::ironic::inspector for instack on node > > instack > > > > In order to get past it you need to run 'export > > DIB_INSTALLTYPE_puppet_modules=source' before running 'openstack > > undercloud install'. > Thanks for sending this out, the documentation has been updated with > this workaround: > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ Out of curiosity, does it mean that some components (puppet modules) are installed from source code instead of packages? Couldn't this invalidate the test by using a codepath which won't be the proper one for Liberty? Is there maybe a patch that can manually applied to the code? Ciao -- Luigi From ukalifon at redhat.com Mon Oct 12 16:04:38 2015 From: ukalifon at redhat.com (Udi Kalifon) Date: Mon, 12 Oct 2015 19:04:38 +0300 Subject: [Rdo-list] Test day issue: parse error In-Reply-To: <561BD5EA.6000800@redhat.com> References: <561BD5EA.6000800@redhat.com> Message-ID: I reprovisioned the machine, started over, and got the same error... So cleaning up everything didn't help. Is there a way for me to apply the needed patches and not wait for the new jq ? Thanks, Udi. On Mon, Oct 12, 2015 at 6:46 PM, John Trowbridge wrote: > > > On 10/12/2015 11:06 AM, Wesley Hayutin wrote: > > Several folks have been hitting this. > > You most likely have a version of the rpm jq on the box that is not > > compatible with rdo-manager > > yum remove jq on the baremetal virtual host, clean up any other install > > artifacts and restart. > > > > Indeed, this is an issue with jq 1.5. 
There is a fix to tripleo for this > [1], but it is blocked by tripleoci being unable to build the > openstack-tripleo package. Once the revert [2] merges we should be good > to get the jq 1.5 patch to pass CI. > > [1] https://review.openstack.org/#/c/228034 > [2] https://review.openstack.org/#/c/233686 > > > On Mon, Oct 12, 2015 at 10:57 AM, Udi Kalifon > wrote: > > > >> Hello, > >> > >> We are encountering an error during instack-virt-setup: > >> > >> ++ sudo virsh net-list --all --persistent > >> ++ grep default > >> ++ awk 'BEGIN{OFS=":";} {print $2,$3}' > >> + default_net=active:yes > >> + state=active > >> + autostart=yes > >> + '[' active '!=' active ']' > >> + '[' yes '!=' yes ']' > >> Domain seed has been undefined > >> > >> > >> seed VM not running > >> > >> seed VM not defined > >> Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301 > >> Seed VM created with MAC 52:54:00:05:af:0f > >> parse error: Invalid string: control characters from U+0000 through > U+001F > >> must be escaped at line 32, column 30 > >> > >> Any ideas? I don't know which file causes this parse error, it's not the > >> instack-virt-setup. > >> > >> Thanks. > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Mon Oct 12 16:19:30 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 12 Oct 2015 12:19:30 -0400 Subject: [Rdo-list] Test day issue: parse error In-Reply-To: References: <561BD5EA.6000800@redhat.com> Message-ID: <561BDD92.1060007@redhat.com> On 10/12/2015 12:04 PM, Udi Kalifon wrote: > I reprovisioned the machine, started over, and got the same error... So > cleaning up everything didn't help. Is there a way for me to apply the > needed patches and not wait for the new jq ? > It is actually old jq that you want. So you can downgrade jq to below 1.5 and then versionlock it so that yum update will not touch it: sudo yum install -y yum-plugin-versionlock sudo yum versionlock add jq > Thanks, > Udi. > > On Mon, Oct 12, 2015 at 6:46 PM, John Trowbridge wrote: > >> >> >> On 10/12/2015 11:06 AM, Wesley Hayutin wrote: >>> Several folks have been hitting this. >>> You most likely have a version of the rpm jq on the box that is not >>> compatible with rdo-manager >>> yum remove jq on the baremetal virtual host, clean up any other install >>> artifacts and restart. >>> >> >> Indeed, this is an issue with jq 1.5. There is a fix to tripleo for this >> [1], but it is blocked by tripleoci being unable to build the >> openstack-tripleo package. Once the revert [2] merges we should be good >> to get the jq 1.5 patch to pass CI. 
>> >> [1] https://review.openstack.org/#/c/228034 >> [2] https://review.openstack.org/#/c/233686 >> >>> On Mon, Oct 12, 2015 at 10:57 AM, Udi Kalifon >> wrote: >>> >>>> Hello, >>>> >>>> We are encountering an error during instack-virt-setup: >>>> >>>> ++ sudo virsh net-list --all --persistent >>>> ++ grep default >>>> ++ awk 'BEGIN{OFS=":";} {print $2,$3}' >>>> + default_net=active:yes >>>> + state=active >>>> + autostart=yes >>>> + '[' active '!=' active ']' >>>> + '[' yes '!=' yes ']' >>>> Domain seed has been undefined >>>> >>>> >>>> seed VM not running >>>> >>>> seed VM not defined >>>> Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301 >>>> Seed VM created with MAC 52:54:00:05:af:0f >>>> parse error: Invalid string: control characters from U+0000 through >> U+001F >>>> must be escaped at line 32, column 30 >>>> >>>> Any ideas? I don't know which file causes this parse error, it's not the >>>> instack-virt-setup. >>>> >>>> Thanks. >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> > From rbowen at redhat.com Mon Oct 12 16:33:15 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 12 Oct 2015 12:33:15 -0400 Subject: [Rdo-list] RDO Community Meetup at OpenStack Summit - Agenda Message-ID: <561BE0CB.5040809@redhat.com> For those that will be attending the RDO Commutniy Meetup in Tokyo in a few weeks, I encourage you to add items to the agenda, at https://etherpad.openstack.org/p/rdo-tokyo , that you'd like to talk about at that meetup. Thanks! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From apevec at gmail.com Mon Oct 12 18:08:57 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 12 Oct 2015 20:08:57 +0200 Subject: [Rdo-list] Git checkouts for the puppet modules required In-Reply-To: <1617206.Pym4mmV0tS@whitebase.usersys.redhat.com> References: <553834669.40434153.1444662778364.JavaMail.zimbra@redhat.com> <561BD48E.10207@redhat.com> <1617206.Pym4mmV0tS@whitebase.usersys.redhat.com> Message-ID: >> > In order to get past it you need to run 'export >> > DIB_INSTALLTYPE_puppet_modules=source' before running 'openstack >> > undercloud install'. >> Thanks for sending this out, the documentation has been updated with >> this workaround: >> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > Out of curiosity, does it mean that some components (puppet modules) are > installed from source code instead of packages? Couldn't this invalidate the > test by using a codepath which won't be the proper one for Liberty? > Is there maybe a patch that can manually applied to the code? See John's update to docs: https://github.com/redhat-openstack/tripleo-docs/commit/4c8f10ab0bb1c5ef4b931460da647b34240c14aa Every such workaround should have an associated BZ# - please file one against RDO/openstack-puppet-modules for this! 
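For the record, the workaround in question end to end, a minimal sketch
assuming the variable just needs to be exported in the same shell that runs
the installer:

    export DIB_INSTALLTYPE_puppet_modules=source
    openstack undercloud install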
Cheers, Alan From jrichar1 at ball.com Mon Oct 12 18:10:22 2015 From: jrichar1 at ball.com (Richards, Jeff) Date: Mon, 12 Oct 2015 18:10:22 +0000 Subject: [Rdo-list] Ironic sqlite db In-Reply-To: References: Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B468AE6E5@EX2010-DTN-03.AERO.BALL.com> This is a known issue that had a bug report filed last Monday. The problem is the ironic-inspector config is missing a ?/? in the sqlite database name making it a relative path instead of absolute. Change the database name to use 4 slashes instead of 3, migrate the database and rerun the introspect was the workaround I used: Config file: /etc/ironic-inspector/inspector.conf Sync: ironic-inspector-dbsync ?config-file /etc/ironic-inspector/inspector.conf upgrade Jeff Richards From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of AliReza Taleghani Sent: Sunday, October 11, 2015 2:42 AM To: rdo-list at redhat.com Subject: Re: [Rdo-list] Ironic sqlite db This also caused bare metal hardware detection failed as following: http://paste.ubuntu.com/12748714/ I think it's can't load default sqlite schema :-/ where I can find the sql template for manual importing into: inspector.sqlite [root at undercloud ~]# sqlite3 /var/lib/ironic-inspector/inspector.sqlite ".dump" PRAGMA foreign_keys=OFF; BEGIN TRANSACTION; COMMIT; -- Sincerely, Ali R. Taleghani This message and any enclosures are intended only for the addressee. Please notify the sender by email if you are not the intended recipient. If you are not the intended recipient, you may not use, copy, disclose, or distribute this message or its contents or enclosures to any other person and any such actions may be unlawful. Ball reserves the right to monitor and review all messages and enclosures sent to or from this email address. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shayne.alone at gmail.com Mon Oct 12 18:14:41 2015 From: shayne.alone at gmail.com (AliReza Taleghani) Date: Mon, 12 Oct 2015 18:14:41 +0000 Subject: [Rdo-list] Ironic sqlite db In-Reply-To: <6D1DB475E9650E4EADE65C051EFBB98B468AE6E5@EX2010-DTN-03.AERO.BALL.com> References: <6D1DB475E9650E4EADE65C051EFBB98B468AE6E5@EX2010-DTN-03.AERO.BALL.com> Message-ID: Thank man... It solved via your advise... Right now i am fighting with neutron ovs agent :-? I think its in fail cos the change of mac address on interface which is turn to br-plane On Mon, Oct 12, 2015, 21:41 Richards, Jeff wrote: > This is a known issue that had a bug report filed last Monday. > > > > The problem is the ironic-inspector config is missing a ?/? in the sqlite > database name making it a relative path instead of absolute. 
Change the > database name to use 4 slashes instead of 3, migrate the database and rerun > the introspect was the workaround I used: > > > > Config file: /etc/ironic-inspector/inspector.conf > > > > Sync: ironic-inspector-dbsync ?config-file > /etc/ironic-inspector/inspector.conf upgrade > > > > Jeff Richards > > > > *From:* rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] *On > Behalf Of *AliReza Taleghani > *Sent:* Sunday, October 11, 2015 2:42 AM > *To:* rdo-list at redhat.com > *Subject:* Re: [Rdo-list] Ironic sqlite db > > > > This also caused bare metal hardware detection failed as following: > http://paste.ubuntu.com/12748714/ > > > I think it's can't load default sqlite schema :-/ where I can find the sql > template for manual importing into: inspector.sqlite > > > [root at undercloud ~]# sqlite3 /var/lib/ironic-inspector/inspector.sqlite > ".dump" > PRAGMA foreign_keys=OFF; > BEGIN TRANSACTION; > COMMIT; > > -- > > Sincerely, > Ali R. Taleghani > > > This message and any enclosures are intended only for the addressee. > Please > notify the sender by email if you are not the intended recipient. If you > are > not the intended recipient, you may not use, copy, disclose, or distribute > this > message or its contents or enclosures to any other person and any such > actions > may be unlawful. Ball reserves the right to monitor and review all > messages > and enclosures sent to or from this email address. > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Sincerely, Ali R. Taleghani -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Mon Oct 12 18:28:01 2015 From: mcornea at redhat.com (Marius Cornea) Date: Mon, 12 Oct 2015 14:28:01 -0400 (EDT) Subject: [Rdo-list] Git checkouts for the puppet modules required In-Reply-To: References: <553834669.40434153.1444662778364.JavaMail.zimbra@redhat.com> <561BD48E.10207@redhat.com> <1617206.Pym4mmV0tS@whitebase.usersys.redhat.com> Message-ID: <1477507208.40565709.1444674481630.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Alan Pevec" > To: "Luigi Toscano" , "John Trowbridge" > Cc: "Rdo-list at redhat.com" > Sent: Monday, October 12, 2015 8:08:57 PM > Subject: Re: [Rdo-list] Git checkouts for the puppet modules required > > >> > In order to get past it you need to run 'export > >> > DIB_INSTALLTYPE_puppet_modules=source' before running 'openstack > >> > undercloud install'. > >> Thanks for sending this out, the documentation has been updated with > >> this workaround: > >> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > > Out of curiosity, does it mean that some components (puppet modules) are > > installed from source code instead of packages? Couldn't this invalidate > > the > > test by using a codepath which won't be the proper one for Liberty? > > Is there maybe a patch that can manually applied to the code? > > See John's update to docs: > https://github.com/redhat-openstack/tripleo-docs/commit/4c8f10ab0bb1c5ef4b931460da647b34240c14aa > > Every such workaround should have an associated BZ# - please file one > against RDO/openstack-puppet-modules for this! I filed BZ#1270956 to track this. 
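Jeff's inspector.conf fix from earlier in this digest, condensed into
commands. This is a sketch that assumes the connection URL currently reads
sqlite:///var/lib/ironic-inspector/inspector.sqlite (three slashes make the
path relative, four make it absolute):

    sudo sed -i 's|sqlite:///var|sqlite:////var|' /etc/ironic-inspector/inspector.conf
    # rebuild the schema at the corrected location, per Jeff's note
    sudo ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
    sudo systemctl restart openstack-ironic-inspector
    # then re-run the bulk introspection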
> Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mohammed.arafa at gmail.com Mon Oct 12 17:30:29 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 12 Oct 2015 13:30:29 -0400 Subject: [Rdo-list] [rdo-manager] instackenv.json In-Reply-To: <561BAC92.3010500@redhat.com> References: <561837E3.3060602@redhat.com> <561BAC92.3010500@redhat.com> Message-ID: Hi John The openstack.org docs are using kilo and thats what i have been following all along. I have not mixed and matched my repos so to speak. has development stopped on kilo? is liberty what i should be using? On Mon, Oct 12, 2015 at 8:50 AM, John Trowbridge wrote: > > > On 10/09/2015 08:34 PM, Mohammed Arafa wrote: > > Dan > > > > Thank you it worked > > Yes its only one node. I whittled it down to remove as many variables > > for errors as possible. > > > > [stack at rdomanager ~]$ openstack baremetal instackenv validate -f > > ~/instackenv.json > > System Power : off > > Power Overload : false > > Power Interlock : inactive > > Main Power Fault : false > > Power Control Fault : false > > Power Restore Policy : always-on > > Last Power Event : > > Chassis Intrusion : inactive > > Front-Panel Lockout : inactive > > Drive Fault : false > > Cooling/Fan Fault : false > > Front Panel Control : none > > SUCCESS: found 0 errors > > > > > > Now I have another problem, seems to be iptables related. So when I > > check the ironic-inspector service, it was stopped, and the only way i > > could get it to run was to reboot the machine. i verified it was > > started then did a bulk introspection. was i surprised when i saw that > > it failed again. and the service was stopped too. > > > > not sure why iptables would cause the service to crash and refuse to > restart. > > > > > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.676 > > 40112 INFO ironic_inspector.main [-] Enabled processing hooks: > > ['ramdisk_error', 'root_device_hint', 'scheduler', > > 'validate_interfaces'] > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.694 > > 40112 WARNING ironic_inspector.firewall [-] iptables does not support > > -w flag, please update it to at least version 1.4.21 > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.740 > > 40112 ERROR ironic_inspector.firewall [-] iptables ('-N', > > 'ironic-inspector') failed: > > Oct 10 01:40:25 rdomanager ironic-inspector: sudo: sorry, you must > > have a tty to run sudo > > This error ^ has been fixed in the latest packaging for ironic-inspector > and is the cause of the crash. This means the latest repo with the > includepkgs whitelist was not used. The repo setup instructions in the > documentation[1], are exactly what we use in CI, so YMMV if using any > other repo combination. 
> > [1] > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 CRITICAL ironic_inspector [-] CalledProcessError: Command > > '('sudo', 'ironic-inspector-rootwrap', > > '/etc/ironic-inspector/rootwrap.conf', 'iptables', '-N', > > 'ironic-inspector')' returned non-zero exit status 1 > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector Traceback (most recent call last): > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector File "/usr/bin/ironic-inspector", line > > 10, in > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector sys.exit(main()) > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector File > > "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 388, > > in main > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector init() > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector File > > "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 325, > > in init > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector firewall.init() > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector File > > "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line > > 81, in init > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector _iptables('-N', CHAIN) > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector File > > "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line > > 42, in _iptables > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector subprocess.check_output(cmd, > > **kwargs) > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector File > > "/usr/lib64/python2.7/subprocess.py", line 575, in check_output > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector raise CalledProcessError(retcode, > > cmd, output=output) > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector CalledProcessError: Command '('sudo', > > 'ironic-inspector-rootwrap', '/etc/ironic-inspector/rootwrap.conf', > > 'iptables', '-N', 'ironic-inspector')' returned non-zero exit status 1 > > Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > > 40112 ERROR ironic_inspector > > Oct 10 01:40:25 rdomanager systemd: > > openstack-ironic-inspector.service: main process exited, code=exited, > > status=1/FAILURE > > Oct 10 01:40:25 rdomanager systemd: Unit > > openstack-ironic-inspector.service entered failed state. 
> > > > > > On Fri, Oct 9, 2015 at 5:55 PM, Dan Sneddon wrote: > >> > >> On 10/09/2015 02:43 PM, Mohammed Arafa wrote: > >>> i seem to have hit this bug where node registration fails silently if > >>> instackenv.json is badly formatted > >>> > >>> thing is i cant seem to decipher where my config file is broken > >>> > >>> > >>> { > >>> "nodes": [ > >>> { > >>> "pm_password": "P at ssw0rd", > >>> "pm_type": "pxe_ipmitool", > >>> "mac": [ > >>> "00:17:a4:77:00:1c" > >>> ], > >>> "cpu": "2", > >>> "memory": "65536", > >>> "disk": "900", > >>> "arch": "x86_64", > >>> "pm_user": "root", > >>> "pm_addr": "192.168.11.213" > >>> }, > >>> ] > >>> } > >>> > >>> > >>> 1) > >>> > https://downloads.plex.tv/plex-media-server/0.9.12.13.1464-4ccd2ca/plexmediaserver-0.9.12.13.1464-4ccd2ca.x86_64.rpm > >>> > >> > >> Do you only have the one node? Because I don't think you want a comma > >> after the node. > >> > >> This validates: > >> > >> { > >> "nodes": [ > >> { > >> "pm_password": "P at ssw0rd", > >> "pm_type": "pxe_ipmitool", > >> "mac": [ > >> "00:17:a4:77:00:1c" > >> ], > >> "cpu": "2", > >> "memory": "65536", > >> "disk": "900", > >> "arch": "x86_64", > >> "pm_user": "root", > >> "pm_addr": "192.168.11.213" > >> } > >> ] > >> } > >> > >> By the way, when I'm doing OpenStack deployments, these resources help > >> out a lot with both JSON and YAML validation: > >> > >> http://jsonlint.com > >> http://yamllint.com > >> http://jsontoyaml.com > >> http://yamltojson.com > >> > >> -- > >> Dan Sneddon | Principal OpenStack Engineer > >> dsneddon at redhat.com | redhat.com/openstack > >> 650.254.4025 | dsneddon:irc @dxs:twitter > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Oct 12 18:40:32 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 12 Oct 2015 14:40:32 -0400 Subject: [Rdo-list] RDO docs/website hack day: October 15th Message-ID: <561BFEA0.7070308@redhat.com> For those of you who want to help take the final step towards whipping the new RDO website into shape, so that we can push it out before Summit, we're planning to have an RDO docs/website hack day on October 15th. As usual, we'll coordinate on #rdo, on Freenode, as well as in the issue tracking queue at https://github.com/redhat-openstack/website/issues In the next few days I'll be loading that up with specific tasks that need to get done. If you can spare an hour or two during that day, it would be enormously appreciated. Thanks! --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From trown at redhat.com Mon Oct 12 18:47:51 2015 From: trown at redhat.com (John Trowbridge) Date: Mon, 12 Oct 2015 14:47:51 -0400 Subject: [Rdo-list] [rdo-manager] instackenv.json In-Reply-To: References: <561837E3.3060602@redhat.com> <561BAC92.3010500@redhat.com> Message-ID: <561C0057.4000908@redhat.com> On 10/12/2015 01:30 PM, Mohammed Arafa wrote: > Hi John > > The openstack.org docs are using kilo and thats what i have been following > all along. I have not mixed and matched my repos so to speak. > > has development stopped on kilo? is liberty what i should be using? 
Actually if you follow the openstack.org docs you get mitaka packages, since stable/liberty has been branched for most openstack projects. Liberty is what is being actively worked on for RDO in terms of stabilizing for a GA later this month. > > On Mon, Oct 12, 2015 at 8:50 AM, John Trowbridge wrote: > >> >> >> On 10/09/2015 08:34 PM, Mohammed Arafa wrote: >>> Dan >>> >>> Thank you it worked >>> Yes its only one node. I whittled it down to remove as many variables >>> for errors as possible. >>> >>> [stack at rdomanager ~]$ openstack baremetal instackenv validate -f >>> ~/instackenv.json >>> System Power : off >>> Power Overload : false >>> Power Interlock : inactive >>> Main Power Fault : false >>> Power Control Fault : false >>> Power Restore Policy : always-on >>> Last Power Event : >>> Chassis Intrusion : inactive >>> Front-Panel Lockout : inactive >>> Drive Fault : false >>> Cooling/Fan Fault : false >>> Front Panel Control : none >>> SUCCESS: found 0 errors >>> >>> >>> Now I have another problem, seems to be iptables related. So when I >>> check the ironic-inspector service, it was stopped, and the only way i >>> could get it to run was to reboot the machine. i verified it was >>> started then did a bulk introspection. was i surprised when i saw that >>> it failed again. and the service was stopped too. >>> >>> not sure why iptables would cause the service to crash and refuse to >> restart. >>> >>> >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.676 >>> 40112 INFO ironic_inspector.main [-] Enabled processing hooks: >>> ['ramdisk_error', 'root_device_hint', 'scheduler', >>> 'validate_interfaces'] >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.694 >>> 40112 WARNING ironic_inspector.firewall [-] iptables does not support >>> -w flag, please update it to at least version 1.4.21 >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.740 >>> 40112 ERROR ironic_inspector.firewall [-] iptables ('-N', >>> 'ironic-inspector') failed: >>> Oct 10 01:40:25 rdomanager ironic-inspector: sudo: sorry, you must >>> have a tty to run sudo >> >> This error ^ has been fixed in the latest packaging for ironic-inspector >> and is the cause of the crash. This means the latest repo with the >> includepkgs whitelist was not used. The repo setup instructions in the >> documentation[1], are exactly what we use in CI, so YMMV if using any >> other repo combination. 
>> >> [1] >> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ >> >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 CRITICAL ironic_inspector [-] CalledProcessError: Command >>> '('sudo', 'ironic-inspector-rootwrap', >>> '/etc/ironic-inspector/rootwrap.conf', 'iptables', '-N', >>> 'ironic-inspector')' returned non-zero exit status 1 >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector Traceback (most recent call last): >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector File "/usr/bin/ironic-inspector", line >>> 10, in >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector sys.exit(main()) >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector File >>> "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 388, >>> in main >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector init() >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector File >>> "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 325, >>> in init >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector firewall.init() >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector File >>> "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line >>> 81, in init >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector _iptables('-N', CHAIN) >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector File >>> "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line >>> 42, in _iptables >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector subprocess.check_output(cmd, >>> **kwargs) >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector File >>> "/usr/lib64/python2.7/subprocess.py", line 575, in check_output >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector raise CalledProcessError(retcode, >>> cmd, output=output) >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector CalledProcessError: Command '('sudo', >>> 'ironic-inspector-rootwrap', '/etc/ironic-inspector/rootwrap.conf', >>> 'iptables', '-N', 'ironic-inspector')' returned non-zero exit status 1 >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 >>> 40112 ERROR ironic_inspector >>> Oct 10 01:40:25 rdomanager systemd: >>> openstack-ironic-inspector.service: main process exited, code=exited, >>> status=1/FAILURE >>> Oct 10 01:40:25 rdomanager systemd: Unit >>> openstack-ironic-inspector.service entered failed state. 
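If the installed build still shows the crash above, one way to check whether
the fixed ironic-inspector package is even visible from the configured repos,
a sketch using plain yum:

    # list every build yum can see, alongside the installed one
    yum --showduplicates list openstack-ironic-inspector
    # then pull just that package from the whitelisted repo
    sudo yum update openstack-ironic-inspector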
>>> >>> >>> On Fri, Oct 9, 2015 at 5:55 PM, Dan Sneddon wrote: >>>> >>>> On 10/09/2015 02:43 PM, Mohammed Arafa wrote: >>>>> i seem to have hit this bug where node registration fails silently if >>>>> instackenv.json is badly formatted >>>>> >>>>> thing is i cant seem to decipher where my config file is broken >>>>> >>>>> >>>>> { >>>>> "nodes": [ >>>>> { >>>>> "pm_password": "P at ssw0rd", >>>>> "pm_type": "pxe_ipmitool", >>>>> "mac": [ >>>>> "00:17:a4:77:00:1c" >>>>> ], >>>>> "cpu": "2", >>>>> "memory": "65536", >>>>> "disk": "900", >>>>> "arch": "x86_64", >>>>> "pm_user": "root", >>>>> "pm_addr": "192.168.11.213" >>>>> }, >>>>> ] >>>>> } >>>>> >>>>> >>>>> 1) >>>>> >> https://downloads.plex.tv/plex-media-server/0.9.12.13.1464-4ccd2ca/plexmediaserver-0.9.12.13.1464-4ccd2ca.x86_64.rpm >>>>> >>>> >>>> Do you only have the one node? Because I don't think you want a comma >>>> after the node. >>>> >>>> This validates: >>>> >>>> { >>>> "nodes": [ >>>> { >>>> "pm_password": "P at ssw0rd", >>>> "pm_type": "pxe_ipmitool", >>>> "mac": [ >>>> "00:17:a4:77:00:1c" >>>> ], >>>> "cpu": "2", >>>> "memory": "65536", >>>> "disk": "900", >>>> "arch": "x86_64", >>>> "pm_user": "root", >>>> "pm_addr": "192.168.11.213" >>>> } >>>> ] >>>> } >>>> >>>> By the way, when I'm doing OpenStack deployments, these resources help >>>> out a lot with both JSON and YAML validation: >>>> >>>> http://jsonlint.com >>>> http://yamllint.com >>>> http://jsontoyaml.com >>>> http://yamltojson.com >>>> >>>> -- >>>> Dan Sneddon | Principal OpenStack Engineer >>>> dsneddon at redhat.com | redhat.com/openstack >>>> 650.254.4025 | dsneddon:irc @dxs:twitter >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> >>> >>> >> > > > From mcornea at redhat.com Mon Oct 12 19:11:06 2015 From: mcornea at redhat.com (Marius Cornea) Date: Mon, 12 Oct 2015 15:11:06 -0400 (EDT) Subject: [Rdo-list] Basic network isolation deployment In-Reply-To: <1839485194.40575485.1444676136208.JavaMail.zimbra@redhat.com> Message-ID: <610455385.40584327.1444677066896.JavaMail.zimbra@redhat.com> Hi everyone, I tried deploying a network isolation setup with 1 x ctrl + 1 x compute on a virtual environment and here are my findings so far: 1. You need to pass /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml as environment file (-e) to the openstack overcloud deploy command otherwise pacemaker doesn't get deployed and the IPs don't get assigned as expected: https://bugzilla.redhat.com/show_bug.cgi?id=1270910 2. 
After deployment some of the Neutron related pacemaker resources are stopped: https://bugzilla.redhat.com/show_bug.cgi?id=1270964 Thanks, Marius From rbowen at redhat.com Mon Oct 12 19:41:01 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 12 Oct 2015 15:41:01 -0400 Subject: [Rdo-list] [Rdo-newsletter] RDO Community newsletter, October 2015 Message-ID: <561C0CCD.5070200@redhat.com> October 2015 RDO Community Newsletter (This newsletter is also available online at https://rdoproject.org/Newsletter/2015_October ) Quick links: * Quick Start - http://rdoproject.org/quickstart * Mailing Lists - http://rdoproject.org/Mailing_lists * RDO packages - http://rdoproject.org/repos/ with the trunk packages in http://rdoproject.org/repos/openstack/openstack-trunk/ * RDO blog - http://rdoproject.org/blog * Q&A - http://ask.openstack.org/ * Open Tickets - http://tm3.org/rdobugs * Twitter - http://twitter.com/rdocommunity This month's newsletter is very late, because I've been traveling and the time got away from me. If you'd like to help get next month's newsletter out on time, or contribute to it in some other way, please let me know, at rbowen at redhat.com Mailing List Update =================== The RDO list archive from September is at https://www.redhat.com/archives/rdo-list/2015-September/thread.html Here's a few highlights from the month. * RDO Test Days In late September we had our first Liberty RDO test day - http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ - and had pretty good participation, with some test results documented at http://beta.rdoproject.org/testday/testedsetups-liberty-01/ We have a followup test day running as I write this, since we are now closer to the Liberty release. Liberty is scheduled to release on October the 15th, so what we have right now shold be mostly final. * RDO-Manager Quickstart A discussion started around the creation of an RDO-Manager Quickstart document, much like the RDO Quickstart, to give a quick and successful first run-through with RDO-Manager, producing a minimally useful cloud instance. That discussion is at https://www.redhat.com/archives/rdo-list/2015-September/msg00180.html and following. * RDO-Manager There was some further discussion of RDO-Manager at https://www.redhat.com/archives/rdo-list/2015-September/msg00161.html that's worth catching up on. * RDO Liberty and Fedora You may remember the topic of how RDO Liberty will be handled on Fedora from last month. There was additional discussion at https://www.redhat.com/archives/rdo-list/2015-September/msg00113.html for maintainers of these packages. An additional thread at https://www.redhat.com/archives/rdo-list/2015-September/msg00090.html was geared towards everyone else. * RDO-Manager and Ceph At https://www.redhat.com/archives/rdo-list/2015-September/msg00031.html there's some discussion of deploying Ceph-based storage using RDO-Manager. This is just a taste of the month's highlights. You can catch up on the entire month at https://www.redhat.com/archives/rdo-list/2015-September/thread.html beta.rdoproject.org =================== The reboot of the RDO website is almost ready to go live. You can preview it at http://beta.rdoproject.org/ and you can participate in the process at https://github.com/redhat-openstack/website The new site is based on Middleman - https://middlemanapp.com/ - and enables contributions via Git pull requests, both for the main website content, and for the blog. 
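In concrete terms, a first contribution looks like this, a sketch assuming a
GitHub fork of the repo linked above (the branch name is just an example):

    git clone https://github.com/redhat-openstack/website.git
    cd website
    git checkout -b my-content-fix   # hypothetical branch name
    # edit, commit, push to your fork, then open a pull request on GitHub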
There's also a handy developer tool that lets you run a local copy of the site to verify your changes before you send them along. Just get a `git clone` of the repo, and run ./run-server.sh in the root of your working copy. See also the list of open tickets at https://github.com/redhat-openstack/website/issues if you're looking for a place to help out. RDO Blogs ========= Continuing the series of PTL interviews, I spoke with Haikel Guemar about the new RPM packaging project. You can read, or listen to, that interview at https://www.rdoproject.org/forum/discussion/1040/haikel-guemar-talks-about-rpm-packaging And I'm in the process of editing an interview with Mike Perez, the Cinder PTL, and should that up later this week. As always, every week I post a summary of the blog posts by RDO enthusiasts in the preceding week. These tend to increase in volume as we approach the next release. And, as of just a few weeks ago, we now have http://blogs.rdoproject.org/ where RDO engineers can post about all things OpenStack. If you'd like to have your OpenStack blog presence there, let me know. (rbowen at redhat.com) Events ====== * Docs hack day With the new website almost ready to go, we need your help taking that final step. We're planning a documentation/website hack day on Thursday, October 15th. We'll be coordinating on #rdo on Freenode, and also around the issue tracker at https://github.com/redhat-openstack/website/issues So get a fork of the website (See beta.rdoproject.org item above), and get ready. * OpenStack Summit The OpenStack Summit in Tokyo is right around the corner, and we hope to see you there. We have an RDO community meetup planned for lunch time on Wednesday. We'll be at the Red Hat booth at the Summit, where you can meet RDO engineers, and see demos of RDO, OpenShift, ManageIQ, and other OpenStack-related offerings. Also, be sure to get the new RDO tshirt, which finally defines what RDO really stands for. To prepare for summit, you might want to read this excellent article by Nick Chase about what's new in Liberty: https://www.mirantis.com/blog/53-things-new-openstack-liberty/ * FOSDEM Details are still being worked out, but we plan to have an RDO community event, in conjunction with the CentOS Dojo, on the days immediately before FOSDEM, early next year. FOSDEM is scheduled for January 30 and 31st, in Brussels, Belgium. We expect the CentOS event to be on Friday, January 29th. * Meetups It seems that every month there's more and more OpenStack meetups around the world. I try to keep the calendar of upcoming meetups updated at http://rdoproject.org/events/ so that you can see what's happening in your area. * LinuxCon Dublin Last week, we were in Dublin for LinuxCon Europe, where there was great OpenStack content by several RDO engineers, including Mark McLoughlin and Kashyap Chamarthy. We really appreciate everyone who made it to those talks, as well as everyone that stopped by the booth to talk about OpenStack. Keep in touch ============= There's lots of ways to stay in in touch with what's going on in the RDO community. The best ways are ... 
WWW * RDO - http://rdoproject.org/ * OpenStack Q&A - http://ask.openstack.org/ Mailing Lists: * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter IRC * IRC - #rdo on Freenode.irc.net * Puppet module development - #rdo-puppet Social Media: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * Facebook - http://facebook.com/rdocommunity Thanks again for being part of the RDO community! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From dsneddon at redhat.com Mon Oct 12 20:25:06 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Mon, 12 Oct 2015 13:25:06 -0700 Subject: [Rdo-list] Overcloud deploy stuck for a long time In-Reply-To: References: Message-ID: <561C1722.9050608@redhat.com> On 10/12/2015 08:10 AM, Tzach Shefi wrote: > Hi, > > Server running centos 7.1, vm running for undercloud got up to > overcloud deploy stage. > It looks like its stuck nothing advancing for a while. > Ideas, what to check? > > [stack at instack ~]$ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > [91665.696658] device vnet2 entered promiscuous mode > [91665.781346] device vnet3 entered promiscuous mode > [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff > [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff > [91767.799404] kvm: zapping shadow pages for mmio generation wraparound > [91767.880480] kvm: zapping shadow pages for mmio generation wraparound > [91768.957761] device vnet2 left promiscuous mode > [91769.799446] device vnet3 left promiscuous mode > [91771.223273] device vnet3 entered promiscuous mode > [91771.232996] device vnet2 entered promiscuous mode > [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff > [91801.270510] device vnet2 left promiscuous mode > > > Thanks > Tzach > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > You're going to need a more complete command line than "openstack overcloud deploy --templates". For instance, if you are using VMs for your overcloud nodes, you will need to include "--libvirt-type qemu". There are probably a couple of other parameters that you will need. 
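For an all-virtual proof of concept, something along these lines is a
reasonable starting point. A sketch, not a known-good command line; the scale
counts and NTP server are placeholders:

    openstack overcloud deploy --templates \
      --libvirt-type qemu \
      --control-scale 1 --compute-scale 1 \
      --ntp-server pool.ntp.org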
You can watch the deployment using this command, which will show you
the progress:

watch "heat resource-list -n 5 | grep -v COMPLETE"

You can also explore which resources have failed:

heat resource-list [-n 5] | grep FAILED

And then look more closely at the failed resources:

heat resource-show overcloud <resource>

There are some more complete troubleshooting instructions here:

http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html

--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter

From jrichar1 at ball.com Mon Oct 12 21:36:56 2015
From: jrichar1 at ball.com (Richards, Jeff)
Date: Mon, 12 Oct 2015 21:36:56 +0000
Subject: [Rdo-list] [rdo-manager] Proof of concept private cloud setup
Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B468AE76B@EX2010-DTN-03.AERO.BALL.com>

Is it possible at this stage to set up an ultra-simple private cloud using RDO Manager, just to demonstrate that it works, as a consumer? I have tried twice now, CentOS 7 libvirt+kvm, all virtual, and failed twice.

First try was with these docs and repos:

http://docs.openstack.org/developer/tripleo-docs/
http://trunk.rdoproject.org/centos7/current-tripleo/delorean.repo (and associated other repos per docs)

This failed with a puppet error (unknown type) on overcloud deployment (update failed). Felt like I was 90% of the way there, then hit a brick wall.

Second try was with:

https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/
http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo (and associated other repos per docs)

This one crashed and burned on instack setup (parse error: invalid string immediately after setup-seed-vm).

Am I using incorrect repos or documentation?

Thanks!

Jeff Richards

This message and any enclosures are intended only for the addressee. Please notify the sender by email if you are not the intended recipient. If you are not the intended recipient, you may not use, copy, disclose, or distribute this message or its contents or enclosures to any other person and any such actions may be unlawful. Ball reserves the right to monitor and review all messages and enclosures sent to or from this email address.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mohammed.arafa at gmail.com Mon Oct 12 20:41:09 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Mon, 12 Oct 2015 16:41:09 -0400
Subject: [Rdo-list] [rdo-manager] instackenv.json
In-Reply-To: <561C0057.4000908@redhat.com>
References: <561837E3.3060602@redhat.com> <561BAC92.3010500@redhat.com> <561C0057.4000908@redhat.com>
Message-ID: 

OK, so for others: http://docs.rdoproject.org/rdo-manager/master immediately redirects to https://repos.fedorapeople.org/repos/openstack-m/docs/master/ which tells you "The RDO-Manager documentation has moved upstream. Click here to visit the new location." which is http://docs.openstack.org/developer/tripleo-docs/

I made this mistake, and I hope it can be made clearer to others so they avoid it in future. So this question may be obvious to some of you, but not to me (and therefore not to everyone out there): where are the docs for Kilo? Liberty? Are they the same docs, just different repos? And are the repos *stable* or also in flux?
thank you On Mon, Oct 12, 2015 at 2:47 PM, John Trowbridge wrote: > > > On 10/12/2015 01:30 PM, Mohammed Arafa wrote: > > Hi John > > > > The openstack.org docs are using kilo and thats what i have been > following > > all along. I have not mixed and matched my repos so to speak. > > > > has development stopped on kilo? is liberty what i should be using? > > Actually if you follow the openstack.org docs you get mitaka packages, > since stable/liberty has been branched for most openstack projects. > > Liberty is what is being actively worked on for RDO in terms of > stabilizing for a GA later this month. > > > > On Mon, Oct 12, 2015 at 8:50 AM, John Trowbridge > wrote: > > > >> > >> > >> On 10/09/2015 08:34 PM, Mohammed Arafa wrote: > >>> Dan > >>> > >>> Thank you it worked > >>> Yes its only one node. I whittled it down to remove as many variables > >>> for errors as possible. > >>> > >>> [stack at rdomanager ~]$ openstack baremetal instackenv validate -f > >>> ~/instackenv.json > >>> System Power : off > >>> Power Overload : false > >>> Power Interlock : inactive > >>> Main Power Fault : false > >>> Power Control Fault : false > >>> Power Restore Policy : always-on > >>> Last Power Event : > >>> Chassis Intrusion : inactive > >>> Front-Panel Lockout : inactive > >>> Drive Fault : false > >>> Cooling/Fan Fault : false > >>> Front Panel Control : none > >>> SUCCESS: found 0 errors > >>> > >>> > >>> Now I have another problem, seems to be iptables related. So when I > >>> check the ironic-inspector service, it was stopped, and the only way i > >>> could get it to run was to reboot the machine. i verified it was > >>> started then did a bulk introspection. was i surprised when i saw that > >>> it failed again. and the service was stopped too. > >>> > >>> not sure why iptables would cause the service to crash and refuse to > >> restart. > >>> > >>> > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.676 > >>> 40112 INFO ironic_inspector.main [-] Enabled processing hooks: > >>> ['ramdisk_error', 'root_device_hint', 'scheduler', > >>> 'validate_interfaces'] > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.694 > >>> 40112 WARNING ironic_inspector.firewall [-] iptables does not support > >>> -w flag, please update it to at least version 1.4.21 > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.740 > >>> 40112 ERROR ironic_inspector.firewall [-] iptables ('-N', > >>> 'ironic-inspector') failed: > >>> Oct 10 01:40:25 rdomanager ironic-inspector: sudo: sorry, you must > >>> have a tty to run sudo > >> > >> This error ^ has been fixed in the latest packaging for ironic-inspector > >> and is the cause of the crash. This means the latest repo with the > >> includepkgs whitelist was not used. The repo setup instructions in the > >> documentation[1], are exactly what we use in CI, so YMMV if using any > >> other repo combination. 
> >> > >> [1] > >> > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ > >> > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 CRITICAL ironic_inspector [-] CalledProcessError: Command > >>> '('sudo', 'ironic-inspector-rootwrap', > >>> '/etc/ironic-inspector/rootwrap.conf', 'iptables', '-N', > >>> 'ironic-inspector')' returned non-zero exit status 1 > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector Traceback (most recent call last): > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector File "/usr/bin/ironic-inspector", line > >>> 10, in > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector sys.exit(main()) > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector File > >>> "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 388, > >>> in main > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector init() > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector File > >>> "/usr/lib/python2.7/site-packages/ironic_inspector/main.py", line 325, > >>> in init > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector firewall.init() > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector File > >>> "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line > >>> 81, in init > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector _iptables('-N', CHAIN) > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector File > >>> "/usr/lib/python2.7/site-packages/ironic_inspector/firewall.py", line > >>> 42, in _iptables > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector subprocess.check_output(cmd, > >>> **kwargs) > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector File > >>> "/usr/lib64/python2.7/subprocess.py", line 575, in check_output > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector raise CalledProcessError(retcode, > >>> cmd, output=output) > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector CalledProcessError: Command '('sudo', > >>> 'ironic-inspector-rootwrap', '/etc/ironic-inspector/rootwrap.conf', > >>> 'iptables', '-N', 'ironic-inspector')' returned non-zero exit status 1 > >>> Oct 10 01:40:25 rdomanager ironic-inspector: 2015-10-10 01:40:25.741 > >>> 40112 ERROR ironic_inspector > >>> Oct 10 01:40:25 rdomanager systemd: > >>> openstack-ironic-inspector.service: main process exited, code=exited, > >>> status=1/FAILURE > >>> Oct 10 01:40:25 rdomanager systemd: Unit > >>> openstack-ironic-inspector.service entered failed state. 
> >>> > >>> > >>> On Fri, Oct 9, 2015 at 5:55 PM, Dan Sneddon > wrote: > >>>> > >>>> On 10/09/2015 02:43 PM, Mohammed Arafa wrote: > >>>>> i seem to have hit this bug where node registration fails silently if > >>>>> instackenv.json is badly formatted > >>>>> > >>>>> thing is i cant seem to decipher where my config file is broken > >>>>> > >>>>> > >>>>> { > >>>>> "nodes": [ > >>>>> { > >>>>> "pm_password": "P at ssw0rd", > >>>>> "pm_type": "pxe_ipmitool", > >>>>> "mac": [ > >>>>> "00:17:a4:77:00:1c" > >>>>> ], > >>>>> "cpu": "2", > >>>>> "memory": "65536", > >>>>> "disk": "900", > >>>>> "arch": "x86_64", > >>>>> "pm_user": "root", > >>>>> "pm_addr": "192.168.11.213" > >>>>> }, > >>>>> ] > >>>>> } > >>>>> > >>>>> > >>>>> 1) > >>>>> > >> > https://downloads.plex.tv/plex-media-server/0.9.12.13.1464-4ccd2ca/plexmediaserver-0.9.12.13.1464-4ccd2ca.x86_64.rpm > >>>>> > >>>> > >>>> Do you only have the one node? Because I don't think you want a comma > >>>> after the node. > >>>> > >>>> This validates: > >>>> > >>>> { > >>>> "nodes": [ > >>>> { > >>>> "pm_password": "P at ssw0rd", > >>>> "pm_type": "pxe_ipmitool", > >>>> "mac": [ > >>>> "00:17:a4:77:00:1c" > >>>> ], > >>>> "cpu": "2", > >>>> "memory": "65536", > >>>> "disk": "900", > >>>> "arch": "x86_64", > >>>> "pm_user": "root", > >>>> "pm_addr": "192.168.11.213" > >>>> } > >>>> ] > >>>> } > >>>> > >>>> By the way, when I'm doing OpenStack deployments, these resources help > >>>> out a lot with both JSON and YAML validation: > >>>> > >>>> http://jsonlint.com > >>>> http://yamllint.com > >>>> http://jsontoyaml.com > >>>> http://yamltojson.com > >>>> > >>>> -- > >>>> Dan Sneddon | Principal OpenStack Engineer > >>>> dsneddon at redhat.com | redhat.com/openstack > >>>> 650.254.4025 | dsneddon:irc @dxs:twitter > >>>> > >>>> _______________________________________________ > >>>> Rdo-list mailing list > >>>> Rdo-list at redhat.com > >>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>> > >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > >>> > >>> > >>> > >> > > > > > > > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Mon Oct 12 21:52:07 2015 From: mcornea at redhat.com (Marius Cornea) Date: Mon, 12 Oct 2015 17:52:07 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Proof of concept private cloud setup In-Reply-To: <6D1DB475E9650E4EADE65C051EFBB98B468AE76B@EX2010-DTN-03.AERO.BALL.com> References: <6D1DB475E9650E4EADE65C051EFBB98B468AE76B@EX2010-DTN-03.AERO.BALL.com> Message-ID: <1272802342.40639956.1444686727612.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Jeff Richards" > To: rdo-list at redhat.com > Sent: Monday, October 12, 2015 11:36:56 PM > Subject: [Rdo-list] [rdo-manager] Proof of concept private cloud setup > > Is it possible at this stage to setup an ultra-simple private cloud using RDO > Manager just to demonstrate that it works as a consumer?? I have tried twice > now, CentOS 7 libvirt+kvm, all virtual and failed twice. > First try was with these docs and repos: > http://docs.openstack.org/developer/tripleo-docs/ > http://trunk.rdoproject.org/centos7/current-tripleo/delorean.repo (and > associated other repos per docs) > This failed with a puppet error (unknown type) on overcloud deployment > (update failed). Felt like I was 90% of the way there then hit a brick wall. 
> Second try was with:
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/
> http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> (and associated other repos per docs)

These are the right docs to use (for Liberty). Please check the following workaround for your problem: https://www.redhat.com/archives/rdo-list/2015-October/msg00105.html

>
> This one crashed and burned on instack setup (parse error: invalid string
> immediately after setup-seed-vm).
>
> Am I using incorrect repos or documentation?
>
> Thanks!
>
> Jeff Richards
>
> This message and any enclosures are intended only for the addressee. Please
> notify the sender by email if you are not the intended recipient. If you are
> not the intended recipient, you may not use, copy, disclose, or distribute this
> message or its contents or enclosures to any other person and any such actions
> may be unlawful. Ball reserves the right to monitor and review all messages
> and enclosures sent to or from this email address.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From rohara at redhat.com Mon Oct 12 23:03:01 2015
From: rohara at redhat.com (Ryan O'Hara)
Date: Mon, 12 Oct 2015 18:03:01 -0500
Subject: [Rdo-list] How to know which galera server haproxy is pointing in an overcloud HA env
In-Reply-To: <56167A9F.2080705@redhat.com>
References: <5614F7CC.6040808@redhat.com> <1852667655.37767349.1444216050936.JavaMail.zimbra@redhat.com> <5614FF90.7090908@redhat.com> <1173238997.38020056.1444231271758.JavaMail.zimbra@redhat.com> <56167A9F.2080705@redhat.com>
Message-ID: <20151012230301.GA15779@redhat.com>

On Thu, Oct 08, 2015 at 04:15:59PM +0200, Raoul Scarazzini wrote:
> On 7/10/2015 17:21:11, Marius Cornea wrote:
> > I was just suggesting a way to see to which of the backend nodes
> > haproxy is directing the traffic. Please see the attachment.
>
> Thanks again Marius,
> from your point of view, does this script make sense?
>
> #!/bin/bash
>
> # haproxy bind address
> VIP=$1
>
> # Associative array for controller -> bytes list
> declare -A controllers
>
> function get_stats {
> # 2nd field -> controller name | 10th field -> bytes in | 11th field -> bytes out
> stats=$(echo "show stat" | socat /var/run/haproxy stdio | grep mysql,overcloud | cut -f2,10 -d,)
> }
>
> get_stats
>
> # Put the first byte values in the array
> for line in $stats
> do
> controller=$(echo $line | cut -f1 -d,)
> controllers[$controller]=$(echo $line | cut -f2 -d,)
> done
>
> # Do something (nothing) on the VIP's db
> mysql -u nonexistant -h $VIP &> /dev/null
>
> get_stats
>
> # Compare the stats; the one that differs is the master
> for controller in ${!controllers[@]}
> do
> value2=$(echo "$stats" | grep $controller | cut -f2 -d,)
> [ ${controllers[$controller]} -ne $value2 ] && echo "$controller is MASTER" || echo "$controller is slave"
> done
>
> I know it's ugly, but since we don't have any other method to get that
> information I don't see any other solution. Of course it can be adapted
> to get values from http instead of the socket (which by default is not
> enabled).
>
> What do you think?

Why not just send a query to the database VIP to get the hostname?
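For instance, something like this (a sketch; unlike the intentionally bogus "nonexistant" login in the script above, it needs a user that is actually allowed to connect):

  # Ask the active backend to identify itself; -N -B strip the
  # column header so only the hostname is printed
  mysql -h $VIP -u root -p -N -B -e 'SELECT @@hostname'

Whichever hostname comes back is the backend haproxy routed the connection to, with no byte counting required.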
Ryan From ukalifon at redhat.com Tue Oct 13 06:57:36 2015 From: ukalifon at redhat.com (Udi Kalifon) Date: Tue, 13 Oct 2015 09:57:36 +0300 Subject: [Rdo-list] Test day issue: parse error In-Reply-To: <561BDD92.1060007@redhat.com> References: <561BD5EA.6000800@redhat.com> <561BDD92.1060007@redhat.com> Message-ID: Thanks, this worked: sudo yum downgrade jq-1.3-2.el7 sudo yum install -y yum-plugin-versionlock sudo yum versionlock add jq Regards, Udi. On Mon, Oct 12, 2015 at 7:19 PM, John Trowbridge wrote: > > > On 10/12/2015 12:04 PM, Udi Kalifon wrote: > > I reprovisioned the machine, started over, and got the same error... So > > cleaning up everything didn't help. Is there a way for me to apply the > > needed patches and not wait for the new jq ? > > > > It is actually old jq that you want. So you can downgrade jq to below > 1.5 and then versionlock it so that yum update will not touch it: > > sudo yum install -y yum-plugin-versionlock > sudo yum versionlock add jq > > > > Thanks, > > Udi. > > > > On Mon, Oct 12, 2015 at 6:46 PM, John Trowbridge > wrote: > > > >> > >> > >> On 10/12/2015 11:06 AM, Wesley Hayutin wrote: > >>> Several folks have been hitting this. > >>> You most likely have a version of the rpm jq on the box that is not > >>> compatible with rdo-manager > >>> yum remove jq on the baremetal virtual host, clean up any other install > >>> artifacts and restart. > >>> > >> > >> Indeed, this is an issue with jq 1.5. There is a fix to tripleo for this > >> [1], but it is blocked by tripleoci being unable to build the > >> openstack-tripleo package. Once the revert [2] merges we should be good > >> to get the jq 1.5 patch to pass CI. > >> > >> [1] https://review.openstack.org/#/c/228034 > >> [2] https://review.openstack.org/#/c/233686 > >> > >>> On Mon, Oct 12, 2015 at 10:57 AM, Udi Kalifon > >> wrote: > >>> > >>>> Hello, > >>>> > >>>> We are encountering an error during instack-virt-setup: > >>>> > >>>> ++ sudo virsh net-list --all --persistent > >>>> ++ grep default > >>>> ++ awk 'BEGIN{OFS=":";} {print $2,$3}' > >>>> + default_net=active:yes > >>>> + state=active > >>>> + autostart=yes > >>>> + '[' active '!=' active ']' > >>>> + '[' yes '!=' yes ']' > >>>> Domain seed has been undefined > >>>> > >>>> > >>>> seed VM not running > >>>> > >>>> seed VM not defined > >>>> Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301 > >>>> Seed VM created with MAC 52:54:00:05:af:0f > >>>> parse error: Invalid string: control characters from U+0000 through > >> U+001F > >>>> must be escaped at line 32, column 30 > >>>> > >>>> Any ideas? I don't know which file causes this parse error, it's not > the > >>>> instack-virt-setup. > >>>> > >>>> Thanks. > >>>> > >>>> _______________________________________________ > >>>> Rdo-list mailing list > >>>> Rdo-list at redhat.com > >>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>> > >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>>> > >>> > >>> > >>> > >>> _______________________________________________ > >>> Rdo-list mailing list > >>> Rdo-list at redhat.com > >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ukalifon at redhat.com Tue Oct 13 08:30:36 2015 From: ukalifon at redhat.com (Udi Kalifon) Date: Tue, 13 Oct 2015 11:30:36 +0300 Subject: [Rdo-list] fatal: The remote end hung up unexpectedly Message-ID: I failed during build images in the RDO test day: /var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images From /home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba * [new branch] master -> fetch_master HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures ~/images 0+1 records in 0+1 records out 34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s Caching puppetlabs-vcsrepo from https://github.com/puppetlabs/puppetlabs-vcsrepo.git in /home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3 Cloning into '/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'... error: RPC failed; result=7, HTTP code = 0 fatal: The remote end hung up unexpectedly Has anyone ever seen an issue like this ? Thanks, Udi. -------------- next part -------------- An HTML attachment was scrubbed... URL: From frederic.lepied at redhat.com Tue Oct 13 09:31:04 2015 From: frederic.lepied at redhat.com (=?UTF-8?B?RnLDqWTDqXJpYyBMZXBpZWQ=?=) Date: Tue, 13 Oct 2015 11:31:04 +0200 Subject: [Rdo-list] Software Factory for RDO experiment Message-ID: <561CCF58.30704@redhat.com> Hi, After some discussions, we came up with the idea to have the RDO project owns its build infrastructure. To play with this idea, we propose to experiment to use an OpenStack like workflow to build RDO packages by deploying a Software Factory instance (http://softwarefactory-project.io) on top of an RDO or RHEL-OSP cloud. That will allow us to have our own Gerrit, Zuul, Nodepool and Jenkins instances like in the OpenStack upstream project while adding our package building specific needs like using the Delorean and Mock/Koji/Mash machineries. The objectives from these changes are: 1. to have a full gating CI for RDO to never break the package repository. 2. to be in control of our infrastructure. 3. to simplify the work-flow where we can to make it more efficient and easier to grasp. Nothing is set in stone so feel free to comment or ask questions. Cheers, -- Fred - May the Source be with you From tshefi at redhat.com Tue Oct 13 10:01:48 2015 From: tshefi at redhat.com (Tzach Shefi) Date: Tue, 13 Oct 2015 13:01:48 +0300 Subject: [Rdo-list] Overcloud deploy stuck for a long time In-Reply-To: <561C1722.9050608@redhat.com> References: <561C1722.9050608@redhat.com> Message-ID: So gave it a few more hours, on heat resource nothing is failed only create_complete and some init_complete. 
Nova show | 61aaed37-4993-4165-93a7-3c9bf6b10a21 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 | | 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 | overcloud-novacompute-0 | BUILD | spawning | NOSTATE | ctlplane=192.0.2.9 | nova show 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 +--------------------------------------+----------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | instack.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | 4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 | | OS-EXT-SRV-ATTR:instance_name | instance-00000002 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | spawning | | OS-EXT-STS:vm_state | building | Checking nova log this is what I see: nova-compute.log:{"nodes": [{"target_power_state": null, "links": [{"href": "http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", "rel": "self"}, {"href": " http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", "rel": "bookmark"}], "extra": {}, "last_error": "*Failed to change power state to 'power on'. Error: Failed to execute command via SSH*: LC_ALL=C /usr/bin/virsh --connect qemu:///system start baremetalbrbm_1.", "updated_at": "2015-10-12T14:36:08+00:00", "maintenance_reason": null, "provision_state": "deploying", "clean_step": {}, "uuid": "4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", "console_enabled": false, "target_provision_state": "active", "provision_updated_at": "2015-10-12T14:35:18+00:00", "power_state": "power off", "inspection_started_at": null, "inspection_finished_at": null, "maintenance": false, "driver": "pxe_ssh", "reservation": null, "properties": {"memory_mb": "4096", "cpu_arch": "x86_64", "local_gb": "40", "cpus": "1", "capabilities": "boot_option:local"}, "instance_uuid": "7f9f4f52-3ee6-42d9-9275-ff88582dd6e7", "name": null, "driver_info": {"ssh_username": "root", "deploy_kernel": "94cc528d-d91f-4ca7-876e-2d8cbec66f1b", "deploy_ramdisk": "057d3b42-002a-4c24-bb3f-2032b8086108", "ssh_key_contents": "-----BEGIN( I removed key..)END RSA PRIVATE KEY-----", "ssh_virt_type": "virsh", "ssh_address": "192.168.122.1"}, "created_at": "2015-10-12T14:26:30+00:00", "ports": [{"href": " http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports", "rel": "self"}, {"href": " http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports", "rel": "bookmark"}], "driver_internal_info": {"clean_steps": null, "root_uuid_or_disk_id": "9ff90423-9d18-4dd1-ae96-a4466b52d9d9", "is_whole_disk_image": false}, "instance_info": {"ramdisk": "82639516-289d-4603-bf0e-8131fa75ec46", "kernel": "665ffcb0-2afe-4e04-8910-45b92826e328", "root_gb": "40", "display_name": "overcloud-novacompute-0", "image_source": "d99f460e-c6d9-4803-99e4-51347413f348", "capabilities": "{\"boot_option\": \"local\"}", "memory_mb": "4096", "vcpus": "1", "deploy_key": "BI0FRWDTD4VGHII9JK2BYDDFR8WB1WUG", "local_gb": "40", "configdrive": 
"H4sICGDEG1YC/3RtcHpwcWlpZQDt3WuT29iZ2HH02Bl7Fe/G5UxSqS3vLtyesaSl2CR4p1zyhk2Ct+ateScdVxcIgiR4A5sAr95xxa/iVOUz7EfJx8m7rXyE5IDslro1mpbGox15Zv6/lrpJ4AAHN/LBwXMIShIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADhJpvx+5UQq5EqNtvzldGs+MIfewJeNv53f/7n354F6xT/3v/TjH0v/chz0L5+8Gv2f3V+n0s+Pz34u/dj982PJfvSTvxFVfXQ7vfyBlRfGvOZo+kQuWWtNVgJn/jO/d6kHzvrGWlHOjGn0TDfmjmXL30kZtZSrlXPFREaVxQM5Hon4fdl0TU7nCmqtU6urRTlZVRP1clV+knwqK/F4UFbPOuVGKZNKFNTbgVFvwO+PyPmzipqo1solX/6slszmCuKozBzKuKPdMlE5ma Any ideas on how to resolve a stuck spawning compute node, it's stuck hasn't changed for a few hours now. Tzach Tzach On Mon, Oct 12, 2015 at 11:25 PM, Dan Sneddon wrote: > On 10/12/2015 08:10 AM, Tzach Shefi wrote: > > Hi, > > > > Server running centos 7.1, vm running for undercloud got up to > > overcloud deploy stage. > > It looks like its stuck nothing advancing for a while. > > Ideas, what to check? > > > > [stack at instack ~]$ openstack overcloud deploy --templates > > Deploying templates in the directory > > /usr/share/openstack-tripleo-heat-templates > > [91665.696658] device vnet2 entered promiscuous mode > > [91665.781346] device vnet3 entered promiscuous mode > > [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 data > 0xffff > > [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 data > 0xffff > > [91767.799404] kvm: zapping shadow pages for mmio generation wraparound > > [91767.880480] kvm: zapping shadow pages for mmio generation wraparound > > [91768.957761] device vnet2 left promiscuous mode > > [91769.799446] device vnet3 left promiscuous mode > > [91771.223273] device vnet3 entered promiscuous mode > > [91771.232996] device vnet2 entered promiscuous mode > > [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 data > 0xffff > > [91801.270510] device vnet2 left promiscuous mode > > > > > > Thanks > > Tzach > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > You're going to need a more complete command line than "openstack > overcloud deploy --templates". For instance, if you are using VMs for > your overcloud nodes, you will need to include "--libvirt-type qemu". > There are probably a couple of other parameters that you will need. > > You can watch the deployment using this command, which will show you > the progress: > > watch "heat resource-list -n 5 | grep -v COMPLETE" > > You can also explore which resources have failed: > > heat resource-list [-n 5]| grep FAILED > > And then look more closely at the failed resources: > > heat resource-show overcloud > > There are some more complete troubleshooting instructions here: > > > http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *Tzach Shefi* Quality Engineer, Redhat OSP +972-54-4701080 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From mcornea at redhat.com Tue Oct 13 10:19:04 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 13 Oct 2015 06:19:04 -0400 (EDT)
Subject: [Rdo-list] fatal: The remote end hung up unexpectedly
In-Reply-To: 
References: 
Message-ID: <174120928.40883554.1444731544069.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Udi Kalifon" 
> To: "rdo-list" , "openstack-management-team-list" 
> Sent: Tuesday, October 13, 2015 10:30:36 AM
> Subject: [Rdo-list] fatal: The remote end hung up unexpectedly
>
> I failed during build images in the RDO test day:
>
> /var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images
> From /home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba
> * [new branch] master -> fetch_master
> HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures
> ~/images
> 0+1 records in
> 0+1 records out
> 34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s
> Caching puppetlabs-vcsrepo from https://github.com/puppetlabs/puppetlabs-vcsrepo.git in /home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3
> Cloning into '/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'...
> error: RPC failed; result=7, HTTP code = 0
> fatal: The remote end hung up unexpectedly

Looks like some connection error with github. Can you rerun it to see if it shows up again? My guess is that it was temporary.

> Has anyone ever seen an issue like this ?
>
> Thanks,
> Udi.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From celik.esra at tubitak.gov.tr Tue Oct 13 10:27:20 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Tue, 13 Oct 2015 13:27:20 +0300 (EEST)
Subject: [Rdo-list] Maintenance mode for deployment
In-Reply-To: <561BAB8A.9020106@redhat.com>
References: <856940727.1952940.1444374918883.JavaMail.zimbra@tubitak.gov.tr> <561BAB8A.9020106@redhat.com>
Message-ID: <1554663036.3639091.1444732040576.JavaMail.zimbra@tubitak.gov.tr>

When I switched to the Liberty version and this doc (https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty), the issue was solved. Thanks..
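For anyone who hits the same state and cannot simply switch docs: the maintenance flag can also be cleared by hand before deploying. A sketch, using the node UUIDs from the quoted output below (as John points out there, the ironic-conductor logs are still the place to find out why the nodes were put into maintenance in the first place):

  # Take both registered nodes out of maintenance mode
  ironic node-set-maintenance 36777b8b-401e-47e9-9eb0-8c2f6b372da6 false
  ironic node-set-maintenance 8de0f3eb-3581-4080-bea4-28125bd7ee1a false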
----- Orijinal Mesaj -----Kimden: John Trowbridge Kime: Esra Celik , rdo-list at redhat.comGönderilenler: Mon, 12 Oct 2015 15:46:02 +0300 (EEST)Konu: Re: [Rdo-list] Maintenance mode for deployment On 10/09/2015 03:15 AM, Esra Celik wrote:> > Hi all,> > After a succusful introspection I see my nodes in available state and maintenance=True> > [stack at undercloud ~]$ openstack baremetal introspection bulk startSetting available nodes to manageable...Starting introspection of node: 36777b8b-401e-47e9-9eb0-8c2f6b372da6Starting introspection of node: 8de0f3eb-3581-4080-bea4-28125bd7ee1aWaiting for introspection to finish...Introspection for UUID 36777b8b-401e-47e9-9eb0-8c2f6b372da6 finished successfully.Introspection for UUID 8de0f3eb-3581-4080-bea4-28125bd7ee1a finished successfully.Setting manageable nodes to available...Node 36777b8b-401e-47e9-9eb0-8c2f6b372da6 has been set to available.Node 8de0f3eb-3581-4080-bea4-28125bd7ee1a has been set to available.Introspection completed.> [stack at undercloud ~]$ ironic node-list+--------------------------------------+------+---------------+-------------+--------------------+-------------+| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |+--------------------------------------+------+---------------+-------------+--------------------+-------------+| 36777b8b-401e-47e9-9eb0-8c2f6b372da6 | None | None | power off | available | True || 8de0f3eb-3581-4080-bea4-28125bd7ee1a | None | None | power off | available | True |+--------------------------------------+------+---------------+-------------+--------------------+-------------+> However when I start deploying I get the following error> > [stack at undercloud ~]$ openstack overcloud deploy --templatesDeployment failed: Not enough nodes - available: 0, requested: 2In tripleoclient/utils.py I noticed that available node means that it is not in maintenance mode:> 423: available = len(baremetal_client.node.list(associated=False,424: maintenance=False))> Should I set my node's maintenance = false before deployment?> Actually this is not mentioned in doc (http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#prepare-your-environment) The nodes being in maintenance is indicative of an actual issue. Theyshould not be in maintenance after doing introspection. Theironic-conductor logs would be a good place to look for why the nodeswere put into maintenance mode. > > > Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > > _______________________________________________> Rdo-list mailing list> Rdo-list at redhat.com> https://www.redhat.com/mailman/listinfo/rdo-list> > To unsubscribe: rdo-list-unsubscribe at redhat.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ukalifon at redhat.com Tue Oct 13 10:43:50 2015 From: ukalifon at redhat.com (Udi Kalifon) Date: Tue, 13 Oct 2015 13:43:50 +0300 Subject: [Rdo-list] fatal: The remote end hung up unexpectedly In-Reply-To: <174120928.40883554.1444731544069.JavaMail.zimbra@redhat.com> References: <174120928.40883554.1444731544069.JavaMail.zimbra@redhat.com> Message-ID: Re-ran the build and failed on a different issue: package rdo-release is not installed + install-packages http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm WARNING: map-packages is deprecated. Please use the pkg-map element. Running install-packages install. 
Package list: http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm Loading "fastestmirror" plugin Config time: 0.010 Yum version: 3.4.3 rpmdb time: 0.000 Cannot open: http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm. Skipping. Error: Nothing to do Trying again, will let you know ... :) Udi. On Tue, Oct 13, 2015 at 1:19 PM, Marius Cornea wrote: > > ----- Original Message ----- > > From: "Udi Kalifon" > > To: "rdo-list" , "openstack-management-team-list" < > openstack-management-team-list at redhat.com> > > Sent: Tuesday, October 13, 2015 10:30:36 AM > > Subject: [Rdo-list] fatal: The remote end hung up unexpectedly > > > > I failed during build images in the RDO test day: > > > > /var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images > > From > > > /home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba > > * [new branch] master -> fetch_master > > HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures > > ~/images > > 0+1 records in > > 0+1 records out > > 34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s > > Caching puppetlabs-vcsrepo from > > https://github.com/puppetlabs/puppetlabs-vcsrepo.git in > > > /home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3 > > Cloning into > > > '/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'... > > error: RPC failed; result=7, HTTP code = 0 > > fatal: The remote end hung up unexpectedly > > Looks like some connection error with github. Can you rerun it to see if > it shows up again? My guess is that it was temporary. > > > Has anyone ever seen an issue like this ? > > > > Thanks, > > Udi. > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ukalifon at redhat.com Tue Oct 13 10:51:52 2015 From: ukalifon at redhat.com (Udi Kalifon) Date: Tue, 13 Oct 2015 13:51:52 +0300 Subject: [Rdo-list] fatal: The remote end hung up unexpectedly In-Reply-To: References: <174120928.40883554.1444731544069.JavaMail.zimbra@redhat.com> Message-ID: Failed with the same yum error again... Reverting to the pre-compiled images that can be found here: http://ikook.tlv.redhat.com/images/rdo_test_day/ On Tue, Oct 13, 2015 at 1:43 PM, Udi Kalifon wrote: > Re-ran the build and failed on a different issue: > > package rdo-release is not installed > + install-packages > http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm > WARNING: map-packages is deprecated. Please use the pkg-map element. > Running install-packages install. Package list: > http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm > Loading "fastestmirror" plugin > Config time: 0.010 > Yum version: 3.4.3 > rpmdb time: 0.000 > Cannot open: > http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm. Skipping. > Error: Nothing to do > > Trying again, will let you know ... :) > > Udi. 
> > On Tue, Oct 13, 2015 at 1:19 PM, Marius Cornea wrote: > >> >> ----- Original Message ----- >> > From: "Udi Kalifon" >> > To: "rdo-list" , "openstack-management-team-list" >> >> > Sent: Tuesday, October 13, 2015 10:30:36 AM >> > Subject: [Rdo-list] fatal: The remote end hung up unexpectedly >> > >> > I failed during build images in the RDO test day: >> > >> > /var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images >> > From >> > >> /home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba >> > * [new branch] master -> fetch_master >> > HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures >> > ~/images >> > 0+1 records in >> > 0+1 records out >> > 34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s >> > Caching puppetlabs-vcsrepo from >> > https://github.com/puppetlabs/puppetlabs-vcsrepo.git in >> > >> /home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3 >> > Cloning into >> > >> '/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'... >> > error: RPC failed; result=7, HTTP code = 0 >> > fatal: The remote end hung up unexpectedly >> >> Looks like some connection error with github. Can you rerun it to see if >> it shows up again? My guess is that it was temporary. >> >> > Has anyone ever seen an issue like this ? >> > >> > Thanks, >> > Udi. >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From celik.esra at tubitak.gov.tr Tue Oct 13 13:25:48 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Tue, 13 Oct 2015 16:25:48 +0300 (EEST) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" Message-ID: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> Hi all, OverCloud deploy fails with error "No valid host was found" [stack at undercloud ~]$ openstack overcloud deploy --templatesDeploying templates in the directory /usr/share/openstack-tripleo-heat-templatesStack failed with status: Resource CREATE failed: resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"Heat Stack create failed. 
Here are some logs: Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue Oct 13 16:18:17 2015+-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+| resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name |+-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+| Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud || Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud || 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs || 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r || Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk || NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server | CREATE_FAILED | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef |+-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ [stack at undercloud ~]$ heat resource-show overcloud Compute+------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+| Property | Value |+------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+| attributes | { || | "attributes": null, || | "refs": null || | } || creation_time | 2015-10-13T10:20:36 || description | || links | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) || | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) || | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) || logical_resource_id | Compute || physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 || required_by | ComputeAllNodesDeployment || | ComputeNodesPostDeployment || | ComputeCephDeployment || | ComputeAllNodesValidationDeployment || | AllNodesExtraConfig || | allNodesConfig || resource_name | Compute || resource_status | CREATE_FAILED || resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. 
There are not enough hosts available., Code: 500" || resource_type | OS::Heat::ResourceGroup || updated_time | 2015-10-13T10:20:36 |+------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ This is my instackenv.json for 1 compute and 1 control node to be deployed. { "nodes": [ { "pm_type":"pxe_ipmitool", "mac":[ "08:9E:01:58:CC:A1" ], "cpu":"4", "memory":"8192", "disk":"10", "arch":"x86_64", "pm_user":"root", "pm_password":"calvin", "pm_addr":"192.168.0.18" }, { "pm_type":"pxe_ipmitool", "mac":[ "08:9E:01:58:D0:3D" ], "cpu":"4", "memory":"8192", "disk":"100", "arch":"x86_64", "pm_user":"root", "pm_password":"calvin", "pm_addr":"192.168.0.19" } ]}Any ideas? Thanks in advance Esra ÇEL?K TÜB?TAK B?LGEM www.bilgem.tubitak.gov.tr celik.esra at tubitak.gov.tr -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Tue Oct 13 13:27:24 2015 From: trown at redhat.com (John Trowbridge) Date: Tue, 13 Oct 2015 09:27:24 -0400 Subject: [Rdo-list] fatal: The remote end hung up unexpectedly In-Reply-To: References: <174120928.40883554.1444731544069.JavaMail.zimbra@redhat.com> Message-ID: <561D06BC.9020006@redhat.com> On 10/13/2015 06:51 AM, Udi Kalifon wrote: > Failed with the same yum error again... Reverting to the pre-compiled > images that can be found here: > http://ikook.tlv.redhat.com/images/rdo_test_day/ That is a Red Hat internal link. The current-passed-ci images are available here: https://repos.fedorapeople.org/repos/openstack-m/rdo-images-centos-liberty/ > > On Tue, Oct 13, 2015 at 1:43 PM, Udi Kalifon wrote: > >> Re-ran the build and failed on a different issue: >> >> package rdo-release is not installed >> + install-packages >> http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm >> WARNING: map-packages is deprecated. Please use the pkg-map element. >> Running install-packages install. Package list: >> http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm >> Loading "fastestmirror" plugin >> Config time: 0.010 >> Yum version: 3.4.3 >> rpmdb time: 0.000 >> Cannot open: >> http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm. Skipping. >> Error: Nothing to do >> >> Trying again, will let you know ... :) >> >> Udi. >> >> On Tue, Oct 13, 2015 at 1:19 PM, Marius Cornea wrote: >> >>> >>> ----- Original Message ----- >>>> From: "Udi Kalifon" >>>> To: "rdo-list" , "openstack-management-team-list" >>> >>>> Sent: Tuesday, October 13, 2015 10:30:36 AM >>>> Subject: [Rdo-list] fatal: The remote end hung up unexpectedly >>>> >>>> I failed during build images in the RDO test day: >>>> >>>> /var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images >>>> From >>>> >>> /home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba >>>> * [new branch] master -> fetch_master >>>> HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures >>>> ~/images >>>> 0+1 records in >>>> 0+1 records out >>>> 34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s >>>> Caching puppetlabs-vcsrepo from >>>> https://github.com/puppetlabs/puppetlabs-vcsrepo.git in >>>> >>> /home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3 >>>> Cloning into >>>> >>> '/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'... 
>>>> error: RPC failed; result=7, HTTP code = 0 >>>> fatal: The remote end hung up unexpectedly >>> >>> Looks like some connection error with github. Can you rerun it to see if >>> it shows up again? My guess is that it was temporary. >>> >>>> Has anyone ever seen an issue like this ? >>>> >>>> Thanks, >>>> Udi. >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mmosesohn at mirantis.com Tue Oct 13 13:32:21 2015 From: mmosesohn at mirantis.com (Matthew Mosesohn) Date: Tue, 13 Oct 2015 16:32:21 +0300 Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> Message-ID: Hi Esra, I was testing overcloud deploy last week and found I had issues with my downloaded base image files. I had this error: ironic.conductor.manager [-] Error in deploy of node dbe22019-290e-4f1f-8314-f48eb06e0f4c: Unexpected error while running command. Command: qemu-img convert -O raw /var/lib/ironic/master_images/tmpxfNdZk/bcfeaad9-c2b4-4086-9d3c-a77c35308da1.part /var/lib/ironic/master_images/tmpxfNdZk/bcfeaad9-c2b4-4086-9d3c-a77c35308da1.converted Exit code: 1 Stdout: u'' Stderr: u'qemu-img: error while reading sector 885120: Input/output error\n' Redownloading the file fixed the issue for me. Best Regards, Matthew Mosesohn On Tue, Oct 13, 2015 at 4:25 PM, Esra Celik wrote: > Hi all, > OverCloud deploy fails with error "No valid host was found" > [stack at undercloud ~]$ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > Stack failed with status: Resource CREATE failed: resources.Compute: > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR > due to "Message: No valid host was found. There are not enough hosts > available., Code: 500" > Heat Stack create failed. 
> > Here are some logs: > Every 2.0s: heat resource-list -n 5 overcloud | grep -v > COMPLETE Tue Oct 13 16:18:17 2015 > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > | resource_name | > physical_resource_id | > resource_type | resource_status | > updated_time | stack_name | > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > | Compute | > e33b6b1e-8740-4ded-ad7f-720617a03393 | > OS::Heat::ResourceGroup | CREATE_FAILED | > 2015-10-13T10:20:36 | overcloud | > | Controller | > 116c57ff-debb-4c12-92e1-e4163b67dc17 | > OS::Heat::ResourceGroup | CREATE_FAILED | > 2015-10-13T10:20:36 | overcloud | > | 0 | > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | > OS::TripleO::Controller | CREATE_IN_PROGRESS | > 2015-10-13T10:20:52 | > overcloud-Controller-45bbw24xxhxs | > | 0 | > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | > OS::TripleO::Compute | CREATE_FAILED | > 2015-10-13T10:20:54 | > overcloud-Compute-vqk632ysg64r | > | Controller | > 2e9ac712-0566-49b5-958f-c3e151bb24d7 | > OS::Nova::Server | CREATE_IN_PROGRESS | > 2015-10-13T10:20:54 | > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk > | > | NovaCompute | > 96efee56-81cb-46af-beef-84f4a3af761a | > OS::Nova::Server | CREATE_FAILED | > 2015-10-13T10:20:56 | > overcloud-Compute-vqk632ysg64r-0-32nalzkofmef | > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > > [stack at undercloud ~]$ heat resource-show overcloud Compute > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Property | > Value > | > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | attributes | > { > | > | | "attributes": > null, > | > | | "refs": > null > | > | | > } > | > | creation_time | > 2015-10-13T10:20:36 > | > | description > | > | > | links | > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute > (self) | > | | > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 > (stack) | > | | > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 > (nested) | > | logical_resource_id | > Compute > | > | physical_resource_id | > e33b6b1e-8740-4ded-ad7f-720617a03393 > | > | required_by | > ComputeAllNodesDeployment > | > | | > ComputeNodesPostDeployment > | > | | > ComputeCephDeployment > | > | | > ComputeAllNodesValidationDeployment > | > | | > AllNodesExtraConfig > | > | | > allNodesConfig > | > | resource_name | > Compute > | > | resource_status | > CREATE_FAILED > | > | 
resource_status_reason | resources.Compute: ResourceInError: > resources[0].resources.NovaCompute: Went to status ERROR due to "Message: > No valid host was found. There are not enough hosts available., Code: 500" | > | resource_type | > OS::Heat::ResourceGroup > | > | updated_time | > 2015-10-13T10:20:36 > | > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > This is my instackenv.json for 1 compute and 1 control node to be deployed. > { > "nodes": [ > { > "pm_type":"pxe_ipmitool", > "mac":[ > "08:9E:01:58:CC:A1" > ], > "cpu":"4", > "memory":"8192", > "disk":"10", > "arch":"x86_64", > "pm_user":"root", > "pm_password":"calvin", > "pm_addr":"192.168.0.18" > }, > { > "pm_type":"pxe_ipmitool", > "mac":[ > "08:9E:01:58:D0:3D" > ], > "cpu":"4", > "memory":"8192", > "disk":"100", > "arch":"x86_64", > "pm_user":"root", > "pm_password":"calvin", > "pm_addr":"192.168.0.19" > } > ] > } > > > Any ideas? Thanks in advance > *Esra ?EL?K* > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibravo at ltgfederal.com Tue Oct 13 13:36:06 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Tue, 13 Oct 2015 09:36:06 -0400 Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> Message-ID: Esra, I encountered the same problem after deleting the stack and re-deploying. It turns out that 'heat stack-delete overcloud? does remove the nodes from ?nova list? and one would assume that the baremetal servers are now ready to be used for the next stack, but when redeploying, I get the same message of not enough hosts available. You can look into the nova logs and it mentions something about ?node xxx is already associated with UUID yyyy? and ?I tried 3 times and I?m giving up?. The issue is that the UUID yyyy belonged to a prior unsuccessful deployment. I?m now redeploying the basic OS to start from scratch again. IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Oct 13, 2015, at 9:25 AM, Esra Celik wrote: > > Hi all, > OverCloud deploy fails with error "No valid host was found" > [stack at undercloud ~]$ openstack overcloud deploy --templates > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates > Stack failed with status: Resource CREATE failed: resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" > Heat Stack create failed. 
> > Here are some logs: > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue Oct 13 16:18:17 2015 > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name | > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud | > | Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud | > | 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs | > | 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r | > | Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk | > | NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server | CREATE_FAILED | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef | > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > > [stack at undercloud ~]$ heat resource-show overcloud Compute > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Property | Value | > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | attributes | { | > | | "attributes": null, | > | | "refs": null | > | | } | > | creation_time | 2015-10-13T10:20:36 | > | description | | > | links | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) | > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) | > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) | > | logical_resource_id | Compute | > | physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 | > | required_by | ComputeAllNodesDeployment | > | | ComputeNodesPostDeployment | > | | ComputeCephDeployment | > | | ComputeAllNodesValidationDeployment | > | | AllNodesExtraConfig | > | | allNodesConfig | > | resource_name | Compute | > | resource_status | CREATE_FAILED | > | resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was 
found. There are not enough hosts available., Code: 500" | > | resource_type | OS::Heat::ResourceGroup | > | updated_time | 2015-10-13T10:20:36 | > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > This is my instackenv.json for 1 compute and 1 control node to be deployed. > { > "nodes": [ > { > "pm_type":"pxe_ipmitool", > "mac":[ > "08:9E:01:58:CC:A1" > ], > "cpu":"4", > "memory":"8192", > "disk":"10", > "arch":"x86_64", > "pm_user":"root", > "pm_password":"calvin", > "pm_addr":"192.168.0.18" > }, > { > "pm_type":"pxe_ipmitool", > "mac":[ > "08:9E:01:58:D0:3D" > ], > "cpu":"4", > "memory":"8192", > "disk":"100", > "arch":"x86_64", > "pm_user":"root", > "pm_password":"calvin", > "pm_addr":"192.168.0.19" > } > ] > } > > > Any ideas? Thanks in advance > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From celik.esra at tubitak.gov.tr Tue Oct 13 13:47:57 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Tue, 13 Oct 2015 16:47:57 +0300 (EEST) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> Actually I re-installed the OS for Undercloud before deploying. However I did not re-install the OS in Compute and Controller nodes.. I will reinstall basic OS for them too, and retry.. Thanks Esra ?EL?K T?B?TAK B?LGEM www.bilgem.tubitak.gov.tr celik.esra at tubitak.gov.tr ----- Orijinal Mesaj ----- Kimden: "Ignacio Bravo" Kime: "Esra Celik" Kk: rdo-list at redhat.com G?nderilenler: 13 Ekim Sal? 2015 16:36:06 Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" Esra, I encountered the same problem after deleting the stack and re-deploying. It turns out that 'heat stack-delete overcloud? does remove the nodes from ?nova list? and one would assume that the baremetal servers are now ready to be used for the next stack, but when redeploying, I get the same message of not enough hosts available. You can look into the nova logs and it mentions something about ?node xxx is already associated with UUID yyyy? and ?I tried 3 times and I?m giving up?. The issue is that the UUID yyyy belonged to a prior unsuccessful deployment. I?m now redeploying the basic OS to start from scratch again. IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote: Hi all, OverCloud deploy fails with error "No valid host was found" [stack at undercloud ~]$ openstack overcloud deploy --templates Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates Stack failed with status: Resource CREATE failed: resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" Heat Stack create failed. 
Here are some logs: Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue Oct 13 16:18:17 2015 +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name | +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud | | Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud | | 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs | | 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r | | Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk | | NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server | CREATE_FAILED | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef | +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ [stack at undercloud ~]$ heat resource-show overcloud Compute +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Property | Value | +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | attributes | { | | | "attributes": null, | | | "refs": null | | | } | | creation_time | 2015-10-13T10:20:36 | | description | | | links | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) | | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) | | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) | | logical_resource_id | Compute | | physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 | | required_by | ComputeAllNodesDeployment | | | ComputeNodesPostDeployment | | | ComputeCephDeployment | | | ComputeAllNodesValidationDeployment | | | AllNodesExtraConfig | | | allNodesConfig | | resource_name | Compute | | resource_status | CREATE_FAILED | | resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. 
There are not enough hosts available., Code: 500" | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2015-10-13T10:20:36 | +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ This is my instackenv.json for 1 compute and 1 control node to be deployed. { "nodes": [ { "pm_type":"pxe_ipmitool", "mac":[ "08:9E:01:58:CC:A1" ], "cpu":"4", "memory":"8192", "disk":"10", "arch":"x86_64", "pm_user":"root", "pm_password":"calvin", "pm_addr":"192.168.0.18" }, { "pm_type":"pxe_ipmitool", "mac":[ "08:9E:01:58:D0:3D" ], "cpu":"4", "memory":"8192", "disk":"100", "arch":"x86_64", "pm_user":"root", "pm_password":"calvin", "pm_addr":"192.168.0.19" } ] } Any ideas? Thanks in advance Esra ?EL?K T?B?TAK B?LGEM www.bilgem.tubitak.gov.tr celik.esra at tubitak.gov.tr _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnemec at redhat.com Tue Oct 13 14:12:24 2015 From: bnemec at redhat.com (Ben Nemec) Date: Tue, 13 Oct 2015 09:12:24 -0500 Subject: [Rdo-list] fatal: The remote end hung up unexpectedly In-Reply-To: References: Message-ID: <561D1148.20302@redhat.com> On 10/13/2015 03:30 AM, Udi Kalifon wrote: > I failed during build images in the RDO test day: > > /var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images > From > /home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba > * [new branch] master -> fetch_master > HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures > ~/images > 0+1 records in > 0+1 records out > 34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s > Caching puppetlabs-vcsrepo from > https://github.com/puppetlabs/puppetlabs-vcsrepo.git in > /home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3 > Cloning into > '/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'... > error: RPC failed; result=7, HTTP code = 0 > fatal: The remote end hung up unexpectedly That's typically a transient github problem. There's a reason upstream infra stopped using github in their CI jobs. Why are we pulling the puppet modules from there anyway though? Is OPM in RDO not new enough? > > Has anyone ever seen an issue like this ? > > Thanks, > Udi. From mcornea at redhat.com Tue Oct 13 14:25:00 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 13 Oct 2015 10:25:00 -0400 (EDT) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Esra Celik" > To: "Ignacio Bravo" > Cc: rdo-list at redhat.com > Sent: Tuesday, October 13, 2015 3:47:57 PM > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" > > > > Actually I re-installed the OS for Undercloud before deploying. However I did > not re-install the OS in Compute and Controller nodes.. 
I will reinstall > basic OS for them too, and retry.. You don't need to reinstall the OS on the controller and compute, they will get the image served by the undercloud. I'd recommend that during deployment you watch the servers console and make sure they get powered on, pxe boot, and actually get the image deployed. Thanks > Thanks > > > > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > > Kimden: "Ignacio Bravo" > Kime: "Esra Celik" > Kk: rdo-list at redhat.com > G?nderilenler: 13 Ekim Sal? 2015 16:36:06 > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > found" > > Esra, > > I encountered the same problem after deleting the stack and re-deploying. > > It turns out that 'heat stack-delete overcloud? does remove the nodes from > ?nova list? and one would assume that the baremetal servers are now ready to > be used for the next stack, but when redeploying, I get the same message of > not enough hosts available. > > You can look into the nova logs and it mentions something about ?node xxx is > already associated with UUID yyyy? and ?I tried 3 times and I?m giving up?. > The issue is that the UUID yyyy belonged to a prior unsuccessful deployment. > > I?m now redeploying the basic OS to start from scratch again. > > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote: > > > Hi all, > > OverCloud deploy fails with error "No valid host was found" > > [stack at undercloud ~]$ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > Stack failed with status: Resource CREATE failed: resources.Compute: > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR > due to "Message: No valid host was found. There are not enough hosts > available., Code: 500" > Heat Stack create failed. 
> > Here are some logs: > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue Oct 13 > 16:18:17 2015 > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > | resource_name | physical_resource_id | resource_type | resource_status | > | updated_time | stack_name | > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | > | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud | > | Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup > | | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud | > | 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | > | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | > | overcloud-Controller-45bbw24xxhxs | > | 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute | > | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r | > | Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server | > | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | > | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk | > | NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server | > | CREATE_FAILED | 2015-10-13T10:20:56 | > | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef | > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+ > > > [stack at undercloud ~]$ heat resource-show overcloud Compute > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Property | Value | > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | attributes | { | > | | "attributes": null, | > | | "refs": null | > | | } | > | creation_time | 2015-10-13T10:20:36 | > | description | | > | links | > | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute > | (self) | > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 > | | (stack) | > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 > | | (nested) | > | logical_resource_id | Compute | > | physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 | > | required_by | ComputeAllNodesDeployment | > | | ComputeNodesPostDeployment | > | | ComputeCephDeployment | > | | ComputeAllNodesValidationDeployment | > | | AllNodesExtraConfig | > | | allNodesConfig | > | resource_name | Compute | > | resource_status | CREATE_FAILED | > | resource_status_reason | resources.Compute: ResourceInError: > | 
resources[0].resources.NovaCompute: Went to status ERROR due to "Message: > | No valid host was found. There are not enough hosts available., Code: 500" > | | > | resource_type | OS::Heat::ResourceGroup | > | updated_time | 2015-10-13T10:20:36 | > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed. > > { > "nodes": [ > { > "pm_type":"pxe_ipmitool", > "mac":[ > "08:9E:01:58:CC:A1" > ], > "cpu":"4", > "memory":"8192", > "disk":"10", > "arch":"x86_64", > "pm_user":"root", > "pm_password":"calvin", > "pm_addr":"192.168.0.18" > }, > { > "pm_type":"pxe_ipmitool", > "mac":[ > "08:9E:01:58:D0:3D" > ], > "cpu":"4", > "memory":"8192", > "disk":"100", > "arch":"x86_64", > "pm_user":"root", > "pm_password":"calvin", > "pm_addr":"192.168.0.19" > } > ] > } > > > Any ideas? Thanks in advance > > > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From trown at redhat.com Tue Oct 13 14:30:30 2015 From: trown at redhat.com (John Trowbridge) Date: Tue, 13 Oct 2015 10:30:30 -0400 Subject: [Rdo-list] fatal: The remote end hung up unexpectedly In-Reply-To: <561D1148.20302@redhat.com> References: <561D1148.20302@redhat.com> Message-ID: <561D1586.8080903@redhat.com> On 10/13/2015 10:12 AM, Ben Nemec wrote: > > > On 10/13/2015 03:30 AM, Udi Kalifon wrote: >> I failed during build images in the RDO test day: >> >> /var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images >> From >> /home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba >> * [new branch] master -> fetch_master >> HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures >> ~/images >> 0+1 records in >> 0+1 records out >> 34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s >> Caching puppetlabs-vcsrepo from >> https://github.com/puppetlabs/puppetlabs-vcsrepo.git in >> /home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3 >> Cloning into >> '/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'... >> error: RPC failed; result=7, HTTP code = 0 >> fatal: The remote end hung up unexpectedly > > That's typically a transient github problem. There's a reason upstream > infra stopped using github in their CI jobs. > > Why are we pulling the puppet modules from there anyway though? Is OPM > in RDO not new enough? Ya we were missing the puppet-ironic patch, but pulling in the instack-undercloud patch that used it: https://bugzilla.redhat.com/show_bug.cgi?id=1270957 This should be fixed in OPM now, I am reverting the workaround in the docs, and in CI. > >> >> Has anyone ever seen an issue like this ? >> >> Thanks, >> Udi. 
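A quick way to verify which openstack-puppet-modules build ended up on the image-build machine (a minimal sketch; the exact fixed NVR is not stated here, so compare the output against the build referenced in the bugzilla above):

$ rpm -q openstack-puppet-modules
$ sudo yum --showduplicates list available openstack-puppet-modules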
> > _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From mcornea at redhat.com Tue Oct 13 15:00:12 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 13 Oct 2015 11:00:12 -0400 (EDT)
Subject: [Rdo-list] Basic HA deployment
In-Reply-To: <1904322400.41037208.1444745188031.JavaMail.zimbra@redhat.com>
Message-ID: <1562966692.41096911.1444748412152.JavaMail.zimbra@redhat.com>

Hi everyone,

I tried a deployment on virt with 3 x ctrls + 1 x compute and it currently fails due to a ceilometer dbsync issue (BZ#1271002). To work around it I did the following. This gets the deployment to succeed, but some of the Neutron-related pacemaker resources are stopped (same as BZ#1270964):

1. Mount the overcloud-full.qcow2 image on a host with libguestfs-tools installed (I used the physical machine where I run the virt env for this)

guestfish --rw -a overcloud-full.qcow2
> run
> mount /dev/sda /
> vi /etc/puppet/modules/ceilometer/manifests/init.pp
#Apply the changes below:
diff -c2 init.pp.orig init.pp.new
*** init.pp.orig 2015-10-13 14:35:57.514488094 +0000
--- init.pp.new 2015-10-13 14:35:01.614488094 +0000
***************
*** 154,157 ****
--- 154,158 ----
  $qpid_reconnect_interval_max = 0,
  $qpid_reconnect_interval = 0,
+ $mongodb_replica_set = 'tripleo',
  ) {

***************
*** 293,296 ****
--- 294,298 ----
  'database/metering_time_to_live' : value => $metering_time_to_live;
  'database/alarm_history_time_to_live' : value => $alarm_history_time_to_live;
+ 'database/mongodb_replica_set' : value => $mongodb_replica_set;
  }
> quit

2. Get the overcloud-full.qcow2 image back on the undercloud and update the existing Glance image:
openstack overcloud image upload --update-existing

3. Deploy overcloud:
openstack overcloud deploy --templates ~/templates/my-overcloud -e ~/templates/my-overcloud/environments/network-isolation.yaml -e ~/templates/network-environment.yaml --control-scale 3 --compute-scale 1 --libvirt-type qemu -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server clock.redhat.com

Thanks,
Marius

From celik.esra at tubitak.gov.tr Tue Oct 13 15:02:09 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Tue, 13 Oct 2015 18:02:09 +0300 (EEST)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com>
References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com>
Message-ID: <1316900159.3778086.1444748529044.JavaMail.zimbra@tubitak.gov.tr>

During deployment they are powering on and deploying the images. I see a lot of connection error messages about ironic-python-agent but ignore them as mentioned here (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)

I do not need to add the undercloud node to the instackenv.json file, or do I?

And which log files should I watch during deployment?
Thanks Esra ----- Orijinal Mesaj -----Kimden: Marius Cornea Kime: Esra Celik Kk: Ignacio Bravo , rdo-list at redhat.comGönderilenler: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" ----- Original Message -----> From: "Esra Celik" > To: "Ignacio Bravo" > Cc: rdo-list at redhat.com> Sent: Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"> > > > Actually I re-installed the OS for Undercloud before deploying. However I did> not re-install the OS in Compute and Controller nodes.. I will reinstall> basic OS for them too, and retry.. You don't need to reinstall the OS on the controller and compute, they will get the image served by the undercloud. I'd recommend that during deployment you watch the servers console and make sure they get powered on, pxe boot, and actually get the image deployed. Thanks > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: "Ignacio Bravo" > Kime: "Esra Celik" > Kk: rdo-list at redhat.com> Gönderilenler: 13 Ekim Sal? 2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was> found"> > Esra,> > I encountered the same problem after deleting the stack and re-deploying.> > It turns out that 'heat stack-delete overcloud’ does remove the nodes from> ‘nova list’ and one would assume that the baremetal servers are now ready to> be used for the next stack, but when redeploying, I get the same message of> not enough hosts available.> > You can look into the nova logs and it mentions something about ‘node xxx is> already associated with UUID yyyy’ and ‘I tried 3 times and I’m giving up’.> The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.> > I’m now redeploying the basic OS to start from scratch again.> > IB> > __> Ignacio Bravo> LTG Federal, Inc> www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > OverCloud deploy fails with error "No valid host was found"> > [stack at undercloud ~]$ openstack overcloud deploy --templates> Deploying templates in the directory> /usr/share/openstack-tripleo-heat-templates> Stack failed with status: Resource CREATE failed: resources.Compute:> ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR> due to "Message: No valid host was found. 
There are not enough hosts> available., Code: 500"> Heat Stack create failed.> > Here are some logs:> > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue Oct 13> 16:18:17 2015> > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> | resource_name | physical_resource_id | resource_type | resource_status |> | updated_time | stack_name |> +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | overcloud-Controller-45bbw24xxhxs |> | 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |> | Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server |> | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | CREATE_FAILED | 2015-10-13T10:20:56 |> | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef |> +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > [stack at undercloud ~]$ heat resource-show overcloud Compute> +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> | Property | Value |> +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> | attributes | { |> | | "attributes": null, |> | | "refs": null |> | | } |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | links |> | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> | (self) |> | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> | | (stack) |> | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> | | (nested) |> | logical_resource_id | Compute |> | physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | | ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment |> | | AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | Compute |> | resource_status | CREATE_FAILED |> | resource_status_reason | resources.Compute: ResourceInError:> 
| resources[0].resources.NovaCompute: Went to status ERROR due to "Message:> | No valid host was found. There are not enough hosts available., Code: 500"> | |> | resource_type | OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 |> +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > This is my instackenv.json for 1 compute and 1 control node to be deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> "mac":[> "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> "disk":"10",> "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> "mac":[> "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> "disk":"100",> "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? Thanks in advance> > > Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > _______________________________________________> Rdo-list mailing list> Rdo-list at redhat.com> https://www.redhat.com/mailman/listinfo/rdo-list> > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > _______________________________________________> Rdo-list mailing list> Rdo-list at redhat.com> https://www.redhat.com/mailman/listinfo/rdo-list> > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Tue Oct 13 15:48:38 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 13 Oct 2015 08:48:38 -0700 Subject: [Rdo-list] Overcloud deploy stuck for a long time In-Reply-To: References: <561C1722.9050608@redhat.com> Message-ID: <561D27D6.4040805@redhat.com> On 10/13/2015 03:01 AM, Tzach Shefi wrote: > So gave it a few more hours, on heat resource nothing is failed only > create_complete and some init_complete. > > Nova show > | 61aaed37-4993-4165-93a7-3c9bf6b10a21 | overcloud-controller-0 | > ACTIVE | - | Running | ctlplane=192.0.2.8 | > | 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 | overcloud-novacompute-0 | > BUILD | spawning | NOSTATE | ctlplane=192.0.2.9 | > > > nova show 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 > +--------------------------------------+----------------------------------------------------------+ > | Property | > Value | > +--------------------------------------+----------------------------------------------------------+ > | OS-DCF:diskConfig | > MANUAL | > | OS-EXT-AZ:availability_zone | > nova | > | OS-EXT-SRV-ATTR:host | > instack.localdomain | > | OS-EXT-SRV-ATTR:hypervisor_hostname | > 4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 | > | OS-EXT-SRV-ATTR:instance_name | > instance-00000002 | > | OS-EXT-STS:power_state | > 0 | > | OS-EXT-STS:task_state | > spawning | > | OS-EXT-STS:vm_state | > building | > > Checking nova log this is what I see: > > nova-compute.log:{"nodes": [{"target_power_state": null, "links": > [{"href": > "http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", > "rel": "self"}, {"href": > "http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", > "rel": "bookmark"}], "extra": {}, "last_error": "*Failed to change > power state to 'power on'. 
Error: Failed to execute command via SSH*: > LC_ALL=C /usr/bin/virsh --connect qemu:///system start > baremetalbrbm_1.", "updated_at": "2015-10-12T14:36:08+00:00", > "maintenance_reason": null, "provision_state": "deploying", > "clean_step": {}, "uuid": "4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", > "console_enabled": false, "target_provision_state": "active", > "provision_updated_at": "2015-10-12T14:35:18+00:00", "power_state": > "power off", "inspection_started_at": null, "inspection_finished_at": > null, "maintenance": false, "driver": "pxe_ssh", "reservation": null, > "properties": {"memory_mb": "4096", "cpu_arch": "x86_64", "local_gb": > "40", "cpus": "1", "capabilities": "boot_option:local"}, > "instance_uuid": "7f9f4f52-3ee6-42d9-9275-ff88582dd6e7", "name": null, > "driver_info": {"ssh_username": "root", "deploy_kernel": > "94cc528d-d91f-4ca7-876e-2d8cbec66f1b", "deploy_ramdisk": > "057d3b42-002a-4c24-bb3f-2032b8086108", "ssh_key_contents": > "-----BEGIN( I removed key..)END RSA PRIVATE KEY-----", > "ssh_virt_type": "virsh", "ssh_address": "192.168.122.1"}, > "created_at": "2015-10-12T14:26:30+00:00", "ports": [{"href": > "http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports", > "rel": "self"}, {"href": > "http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports", > "rel": "bookmark"}], "driver_internal_info": {"clean_steps": null, > "root_uuid_or_disk_id": "9ff90423-9d18-4dd1-ae96-a4466b52d9d9", > "is_whole_disk_image": false}, "instance_info": {"ramdisk": > "82639516-289d-4603-bf0e-8131fa75ec46", "kernel": > "665ffcb0-2afe-4e04-8910-45b92826e328", "root_gb": "40", > "display_name": "overcloud-novacompute-0", "image_source": > "d99f460e-c6d9-4803-99e4-51347413f348", "capabilities": > "{\"boot_option\": \"local\"}", "memory_mb": "4096", "vcpus": "1", > "deploy_key": "BI0FRWDTD4VGHII9JK2BYDDFR8WB1WUG", "local_gb": "40", > "configdrive": > "H4sICGDEG1YC/3RtcHpwcWlpZQDt3WuT29iZ2HH02Bl7Fe/G5UxSqS3vLtyesaSl2CR4p1zyhk2Ct+ateScdVxcIgiR4A5sAr95xxa/iVOUz7EfJx8m7rXyE5IDslro1mpbGox15Zv6/lrpJ4AAHN/LBwXMIShIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADhJpvx+5UQq5EqNtvzldGs+MIfewJeNv53f/7n354F6xT/3v/TjH0v/chz0L5+8Gv2f3V+n0s+Pz34u/dj982PJfvSTvxFVfXQ7vfyBlRfGvOZo+kQuWWtNVgJn/jO/d6kHzvrGWlHOjGn0TDfmjmXL30kZtZSrlXPFREaVxQM5Hon4fdl0TU7nCmqtU6urRTlZVRP1clV+knwqK/F4UFbPOuVGKZNKFNTbgVFvwO+PyPmzipqo1solX/6slszmCuKozBzKuKPdMlE5ma > > > Any ideas on how to resolve a stuck spawning compute node, it's stuck > hasn't changed for a few hours now. > > Tzach > > Tzach > > > On Mon, Oct 12, 2015 at 11:25 PM, Dan Sneddon > wrote: > > On 10/12/2015 08:10 AM, Tzach Shefi wrote: > > Hi, > > > > Server running centos 7.1, vm running for undercloud got up to > > overcloud deploy stage. > > It looks like its stuck nothing advancing for a while. > > Ideas, what to check? 
> > > > [stack at instack ~]$ openstack overcloud deploy --templates > > Deploying templates in the directory > > /usr/share/openstack-tripleo-heat-templates > > [91665.696658] device vnet2 entered promiscuous mode > > [91665.781346] device vnet3 entered promiscuous mode > > [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 > data 0xffff > > [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 > data 0xffff > > [91767.799404] kvm: zapping shadow pages for mmio generation > wraparound > > [91767.880480] kvm: zapping shadow pages for mmio generation > wraparound > > [91768.957761] device vnet2 left promiscuous mode > > [91769.799446] device vnet3 left promiscuous mode > > [91771.223273] device vnet3 entered promiscuous mode > > [91771.232996] device vnet2 entered promiscuous mode > > [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 > data 0xffff > > [91801.270510] device vnet2 left promiscuous mode > > > > > > Thanks > > Tzach > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > You're going to need a more complete command line than "openstack > overcloud deploy --templates". For instance, if you are using VMs for > your overcloud nodes, you will need to include "--libvirt-type qemu". > There are probably a couple of other parameters that you will need. > > You can watch the deployment using this command, which will show you > the progress: > > watch "heat resource-list -n 5 | grep -v COMPLETE" > > You can also explore which resources have failed: > > heat resource-list [-n 5]| grep FAILED > > And then look more closely at the failed resources: > > heat resource-show overcloud > > There are some more complete troubleshooting instructions here: > > http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | > redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -- > *Tzach Shefi* > Quality Engineer, Redhat OSP > +972-54-4701080 The deployment looks like it is stuck to me. The problem, though, appears to be an inability to set the power state on one of the VM nodes through libvirt. What the SSH driver does for virt is to SSH from the Undercloud VM to the VM host system, and issue libvirt commands to start/stop VMs. That process failed when setting the power state of one of your nodes, and it doesn't look like the deployment is recovering from that error. I'm not quite sure why that is happening, but I can think of a few possible reasons: * SSH daemon not running on the virt host * The virt host was not able to respond to the request, perhaps it was overloaded? * Firewall blocking SSH connections from the Instack VM to the virt host? One tip for the next deployment: You can set the timeout. That way, if it does get hung up you don't have to wait 4 hours for it to fail. Conservatively, you could set --timeout 90 to set the timeout to 90 minutes. 
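For example, putting the two tips together (a sketch only, not a complete command line; --libvirt-type qemu applies because the overcloud nodes here are VMs, and any other flags depend on the environment):

$ openstack overcloud deploy --templates --libvirt-type qemu --timeout 90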
A 2-node deployment will definitely either deploy or fail in that amount of time (probably much less, but I wouldn't want you to cut off a deployment that might be successful if given a little more time).

--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter

From mpavlase at redhat.com Tue Oct 13 15:55:47 2015
From: mpavlase at redhat.com (Martin Pavlásek)
Date: Tue, 13 Oct 2015 17:55:47 +0200
Subject: [Rdo-list] Test day issue: parse error
In-Reply-To:
References:
Message-ID: <561D2983.7080900@redhat.com>

Hi,

I also hit the same issue on my CentOS 7 deployment. It's an already-reported issue [1], [2]. After a manual fix I got a little further, but "instack-virt-setup" still fails [3]:

$ tail -n 30 virt-setup-instack.log
Map file for redhat-common element does not exist.
WARNING: map-packages is deprecated. Please use the pkg-map element.
There are no enabled repos.
Run "yum repolist all" to see the repos you have.
You can enable repos with yum-config-manager --enable
installing wget from epel
installing wget from epel
installing redhat-lsb-core from redhat-common
installing gettext from redhat-common
installing grub2-tools from redhat-common
installing system-logos from redhat-common
installing os-prober from redhat-common
installing redhat-lsb-core from redhat-common
installing gettext from redhat-common
installing grub2-tools from redhat-common
installing system-logos from redhat-common
installing os-prober from redhat-common
install failed with error
Running install-packages install. Package list: system-logos redhat-lsb-core gettext grub2-tools wget os-prober
Not loading "rhnplugin" plugin, as it is disabled
Not loading "product-id" plugin, as it is disabled
Not loading "subscription-manager" plugin, as it is disabled
Config time: 0.022
Yum version: 3.4.3
Setting up Package Sacks

I've tried to debug the situation, but I stopped at the file below, since I don't have deep enough insight into it:

/usr/share/diskimage-builder/lib/img-functions:83 check_break before-$1 run_in_target bash

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1266101
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1270585
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1271317

Martin

On 12/10/15 16:57, Udi Kalifon wrote:
> Hello,
>
> We are encountering an error during instack-virt-setup:
>
> ++ sudo virsh net-list --all --persistent
> ++ grep default
> ++ awk 'BEGIN{OFS=":";} {print $2,$3}'
> + default_net=active:yes
> + state=active
> + autostart=yes
> + '[' active '!=' active ']'
> + '[' yes '!=' yes ']'
> Domain seed has been undefined
>
>
> seed VM not running
>
> seed VM not defined
> Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301
> Seed VM created with MAC 52:54:00:05:af:0f
> parse error: Invalid string: control characters from U+0000 through U+001F
> must be escaped at line 32, column 30
>
> Any ideas? I don't know which file causes this parse error; it's not
> instack-virt-setup itself.
>
> Thanks.
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dms at redhat.com Tue Oct 13 15:57:28 2015
From: dms at redhat.com (David Moreau Simard)
Date: Tue, 13 Oct 2015 11:57:28 -0400
Subject: Re: [Rdo-list] Software Factory for RDO experiment
In-Reply-To: <561CCF58.30704@redhat.com>
References: <561CCF58.30704@redhat.com>
Message-ID:

Hey Fred,

Our setup is a bit all over the place right now - in my day-to-day I
deal with three gerrits, four jenkins - delorean and mirrors on
trunk.rdoproject, cbs, fedorapeople and so on.
As a relatively new guy, it took me far longer than I'd like to admit
to just wrap my head around what is where and how it works... and
there are still some things I'm confused about.

So, the idea of centralizing stuff in one place is IMO a good one but
we need to make sure we get everyone aligned on the objectives and the
solutions so as not to create yet another standard [1].

I think we're in the process of moving the Jenkins portion of the
infrastructure to ci.centos.org right now. Perhaps you can chat with
Wes to get more information on the how, when and why.

My one question that comes to mind about making your experiment a
reality: do we have the resources (human, time and capex) to make
something like this happen?

[1]: https://xkcd.com/927/

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Tue, Oct 13, 2015 at 5:31 AM, Frédéric Lepied wrote:
> Hi,
>
> After some discussions, we came up with the idea to have the RDO
> project own its build infrastructure. To play with this idea, we
> propose to experiment with using an OpenStack-like workflow to build RDO
> packages by deploying a Software Factory instance
> (http://softwarefactory-project.io) on top of an RDO or RHEL-OSP
> cloud. That will allow us to have our own Gerrit, Zuul, Nodepool and
> Jenkins instances like in the OpenStack upstream project while adding
> our package-building-specific needs, like using the Delorean and
> Mock/Koji/Mash machinery.
>
> The objectives from these changes are:
>
> 1. to have a full gating CI for RDO to never break the package repository.
> 2. to be in control of our infrastructure.
> 3. to simplify the work-flow where we can to make it more efficient and
> easier to grasp.
>
> Nothing is set in stone so feel free to comment or ask questions.
>
> Cheers,
> --
> Fred - May the Source be with you
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mcornea at redhat.com Tue Oct 13 16:35:41 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 13 Oct 2015 12:35:41 -0400 (EDT)
Subject: Re: [Rdo-list] Basic HA deployment
In-Reply-To: <1562966692.41096911.1444748412152.JavaMail.zimbra@redhat.com>
References: <1562966692.41096911.1444748412152.JavaMail.zimbra@redhat.com>
Message-ID: <1073698808.41168325.1444754141778.JavaMail.zimbra@redhat.com>

Thanks to yprokule I found what was causing the neutron resources to be stopped: https://bugzilla.redhat.com/show_bug.cgi?id=1270964#c1

----- Original Message -----
> From: "Marius Cornea"
> To: "rdo-list"
> Sent: Tuesday, October 13, 2015 5:00:12 PM
> Subject: [Rdo-list] Basic HA deployment
>
> Hi everyone,
>
> I tried a deployment on virt with 3 x ctrls + 1 x compute and it currently
> fails due to a ceilometer dbsync issue (BZ#1271002). To work around it I did
> the following.
This gets the deployment successful but some of the Neutron > related pacemaker resources are stopped(same as BZ#1270964): > > 1. Mount the overcloud-full.qcow2 image on a host with libguestfs-tools > installed (I used the physical machine where I run the virt env for this) > > guestfish --rw -a overcloud-full.qcow2 > > run > > mount /dev/sda / > > vi /etc/puppet/modules/ceilometer/manifests/init.pp > #Apply the changes below: > diff -c2 init.pp.orig init.pp.new > *** init.pp.orig 2015-10-13 14:35:57.514488094 +0000 > --- init.pp.new 2015-10-13 14:35:01.614488094 +0000 > *************** > *** 154,157 **** > --- 154,158 ---- > $qpid_reconnect_interval_max = 0, > $qpid_reconnect_interval = 0, > + $mongodb_replica_set = 'tripleo', > ) { > > *************** > *** 293,296 **** > --- 294,298 ---- > 'database/metering_time_to_live' : value => > $metering_time_to_live; > 'database/alarm_history_time_to_live' : value => > $alarm_history_time_to_live; > + 'database/mongodb_replica_set' : value => $mongodb_replica_set; > } > > quit > > 2. Get the overcloud-full.qcow2 image back on the undercloud and update the > existing Glance image: > openstack overcloud image upload --update-existing > > 3. Deploy overcloud: > openstack overcloud deploy --templates ~/templates/my-overcloud -e > ~/templates/my-overcloud/environments/network-isolation.yaml -e > ~/templates/network-environment.yaml --control-scale 3 --compute-scale 1 > --libvirt-type qemu -e > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > --ntp-server clock.redhat.com > > Thanks, > Marius > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From frederic.lepied at redhat.com Tue Oct 13 17:39:24 2015 From: frederic.lepied at redhat.com (=?UTF-8?B?RnLDqWTDqXJpYyBMZXBpZWQ=?=) Date: Tue, 13 Oct 2015 19:39:24 +0200 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: References: <561CCF58.30704@redhat.com> Message-ID: <561D41CC.7060909@redhat.com> On 10/13/2015 05:57 PM, David Moreau Simard wrote: > Hey Fred, > > Our setup is a bit all over the place right now - in my day-to-day I > deal with three gerrits, four jenkins - delorean and mirrors on > trunk.rdoproject, cbs, fedorapeople and so on. > As a relatively new guy, it took me far longer than I'd like to admit > to just wrap my head around what is where and how it works.. and > there's still some things I'm confused about. > > So, the idea of centralizing stuff in one place is IMO a good one but > we need to make sure we get everyone aligned on the objectives and the > solutions as to not create yet another standard [1]. > > I think we're in the process of moving the Jenkins portion of the > infrastructure to ci.centos.org right now. Perhaps you can chat about > Wes to get more information on the how, when and why. > > My one question that comes to mind about making your experiment a > reality: do we have the resources (human, time and capex) to make > something like this happen ? Hi David, I will sync up with Wes regarding Jenkins. I understand your concern regarding the needed investment and don't worry we have what we need to manage the experiment: dedicated members of the Software Factory team and some hardware. That will help us size what will be needed for the full scale use. 
Thanks for the feedback, -- Fred - May the Source be with you From mcornea at redhat.com Tue Oct 13 17:56:38 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 13 Oct 2015 13:56:38 -0400 (EDT) Subject: [Rdo-list] Overcloud deploy stuck for a long time In-Reply-To: <561D27D6.4040805@redhat.com> References: <561C1722.9050608@redhat.com> <561D27D6.4040805@redhat.com> Message-ID: <799613411.41216890.1444758998734.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Dan Sneddon" > To: "Tzach Shefi" > Cc: rdo-list at redhat.com > Sent: Tuesday, October 13, 2015 5:48:38 PM > Subject: Re: [Rdo-list] Overcloud deploy stuck for a long time > > On 10/13/2015 03:01 AM, Tzach Shefi wrote: > > So gave it a few more hours, on heat resource nothing is failed only > > create_complete and some init_complete. > > > > Nova show > > | 61aaed37-4993-4165-93a7-3c9bf6b10a21 | overcloud-controller-0 | > > ACTIVE | - | Running | ctlplane=192.0.2.8 | > > | 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 | overcloud-novacompute-0 | > > BUILD | spawning | NOSTATE | ctlplane=192.0.2.9 | > > > > > > nova show 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 > > +--------------------------------------+----------------------------------------------------------+ > > | Property | > > Value | > > +--------------------------------------+----------------------------------------------------------+ > > | OS-DCF:diskConfig | > > MANUAL | > > | OS-EXT-AZ:availability_zone | > > nova | > > | OS-EXT-SRV-ATTR:host | > > instack.localdomain | > > | OS-EXT-SRV-ATTR:hypervisor_hostname | > > 4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 | > > | OS-EXT-SRV-ATTR:instance_name | > > instance-00000002 | > > | OS-EXT-STS:power_state | > > 0 | > > | OS-EXT-STS:task_state | > > spawning | > > | OS-EXT-STS:vm_state | > > building | > > > > Checking nova log this is what I see: > > > > nova-compute.log:{"nodes": [{"target_power_state": null, "links": > > [{"href": > > "http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", > > "rel": "self"}, {"href": > > "http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", > > "rel": "bookmark"}], "extra": {}, "last_error": "*Failed to change > > power state to 'power on'. 
Error: Failed to execute command via SSH*: > > LC_ALL=C /usr/bin/virsh --connect qemu:///system start > > baremetalbrbm_1.", "updated_at": "2015-10-12T14:36:08+00:00", > > "maintenance_reason": null, "provision_state": "deploying", > > "clean_step": {}, "uuid": "4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", > > "console_enabled": false, "target_provision_state": "active", > > "provision_updated_at": "2015-10-12T14:35:18+00:00", "power_state": > > "power off", "inspection_started_at": null, "inspection_finished_at": > > null, "maintenance": false, "driver": "pxe_ssh", "reservation": null, > > "properties": {"memory_mb": "4096", "cpu_arch": "x86_64", "local_gb": > > "40", "cpus": "1", "capabilities": "boot_option:local"}, > > "instance_uuid": "7f9f4f52-3ee6-42d9-9275-ff88582dd6e7", "name": null, > > "driver_info": {"ssh_username": "root", "deploy_kernel": > > "94cc528d-d91f-4ca7-876e-2d8cbec66f1b", "deploy_ramdisk": > > "057d3b42-002a-4c24-bb3f-2032b8086108", "ssh_key_contents": > > "-----BEGIN( I removed key..)END RSA PRIVATE KEY-----", > > "ssh_virt_type": "virsh", "ssh_address": "192.168.122.1"}, > > "created_at": "2015-10-12T14:26:30+00:00", "ports": [{"href": > > "http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports", > > "rel": "self"}, {"href": > > "http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports", > > "rel": "bookmark"}], "driver_internal_info": {"clean_steps": null, > > "root_uuid_or_disk_id": "9ff90423-9d18-4dd1-ae96-a4466b52d9d9", > > "is_whole_disk_image": false}, "instance_info": {"ramdisk": > > "82639516-289d-4603-bf0e-8131fa75ec46", "kernel": > > "665ffcb0-2afe-4e04-8910-45b92826e328", "root_gb": "40", > > "display_name": "overcloud-novacompute-0", "image_source": > > "d99f460e-c6d9-4803-99e4-51347413f348", "capabilities": > > "{\"boot_option\": \"local\"}", "memory_mb": "4096", "vcpus": "1", > > "deploy_key": "BI0FRWDTD4VGHII9JK2BYDDFR8WB1WUG", "local_gb": "40", > > "configdrive": > > "H4sICGDEG1YC/3RtcHpwcWlpZQDt3WuT29iZ2HH02Bl7Fe/G5UxSqS3vLtyesaSl2CR4p1zyhk2Ct+ateScdVxcIgiR4A5sAr95xxa/iVOUz7EfJx8m7rXyE5IDslro1mpbGox15Zv6/lrpJ4AAHN/LBwXMIShIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADhJpvx+5UQq5EqNtvzldGs+MIfewJeNv53f/7n354F6xT/3v/TjH0v/chz0L5+8Gv2f3V+n0s+Pz34u/dj982PJfvSTvxFVfXQ7vfyBlRfGvOZo+kQuWWtNVgJn/jO/d6kHzvrGWlHOjGn0TDfmjmXL30kZtZSrlXPFREaVxQM5Hon4fdl0TU7nCmqtU6urRTlZVRP1clV+knwqK/F4UFbPOuVGKZNKFNTbgVFvwO+PyPmzipqo1solX/6slszmCuKozBzKuKPdMlE5ma > > > > > > Any ideas on how to resolve a stuck spawning compute node, it's stuck > > hasn't changed for a few hours now. > > > > Tzach > > > > Tzach > > > > > > On Mon, Oct 12, 2015 at 11:25 PM, Dan Sneddon > > wrote: > > > > On 10/12/2015 08:10 AM, Tzach Shefi wrote: > > > Hi, > > > > > > Server running centos 7.1, vm running for undercloud got up to > > > overcloud deploy stage. > > > It looks like its stuck nothing advancing for a while. > > > Ideas, what to check? 
> > > > > > [stack at instack ~]$ openstack overcloud deploy --templates > > > Deploying templates in the directory > > > /usr/share/openstack-tripleo-heat-templates > > > [91665.696658] device vnet2 entered promiscuous mode > > > [91665.781346] device vnet3 entered promiscuous mode > > > [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 > > data 0xffff > > > [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 > > data 0xffff > > > [91767.799404] kvm: zapping shadow pages for mmio generation > > wraparound > > > [91767.880480] kvm: zapping shadow pages for mmio generation > > wraparound > > > [91768.957761] device vnet2 left promiscuous mode > > > [91769.799446] device vnet3 left promiscuous mode > > > [91771.223273] device vnet3 entered promiscuous mode > > > [91771.232996] device vnet2 entered promiscuous mode > > > [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 > > data 0xffff > > > [91801.270510] device vnet2 left promiscuous mode > > > > > > > > > Thanks > > > Tzach > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > You're going to need a more complete command line than "openstack > > overcloud deploy --templates". For instance, if you are using VMs for > > your overcloud nodes, you will need to include "--libvirt-type qemu". > > There are probably a couple of other parameters that you will need. > > > > You can watch the deployment using this command, which will show you > > the progress: > > > > watch "heat resource-list -n 5 | grep -v COMPLETE" > > > > You can also explore which resources have failed: > > > > heat resource-list [-n 5]| grep FAILED > > > > And then look more closely at the failed resources: > > > > heat resource-show overcloud > > > > There are some more complete troubleshooting instructions here: > > > > http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html > > > > -- > > Dan Sneddon | Principal OpenStack Engineer > > dsneddon at redhat.com | > > redhat.com/openstack > > 650.254.4025 | dsneddon:irc @dxs:twitter > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > -- > > *Tzach Shefi* > > Quality Engineer, Redhat OSP > > +972-54-4701080 > > The deployment looks like it is stuck to me. The problem, though, > appears to be an inability to set the power state on one of the VM > nodes through libvirt. > > What the SSH driver does for virt is to SSH from the Undercloud VM to > the VM host system, and issue libvirt commands to start/stop VMs. That > process failed when setting the power state of one of your nodes, and > it doesn't look like the deployment is recovering from that error. > > I'm not quite sure why that is happening, but I can think of a few > possible reasons: > > * SSH daemon not running on the virt host > * The virt host was not able to respond to the request, perhaps it was > overloaded? > * Firewall blocking SSH connections from the Instack VM to the virt host? > > One tip for the next deployment: You can set the timeout. That way, if > it does get hung up you don't have to wait 4 hours for it to fail. 
> Conservatively, you could set --timeout 90 to set the timeout to 90 > minutes. A 2-node deployment will definitely either deploy or fail in > that amount of time (probably much less, but I wouldn't want you to cut > off a deployment that might be successful if given a little more time). For virt environments you'll also find it useful to run virt-manager and connect to the virt host so you can see whether the VMs are running and watch their consoles during introspection/deploy. Also watch the libvirtd logs on the virt host (journalctl -fl -u libvirtd). > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mcornea at redhat.com Tue Oct 13 18:16:14 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 13 Oct 2015 14:16:14 -0400 (EDT) Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1316900159.3778086.1444748529044.JavaMail.zimbra@tubitak.gov.tr> References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com> <1316900159.3778086.1444748529044.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <278705446.41253505.1444760174635.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Esra Celik" > To: "Marius Cornea" > Cc: "Ignacio Bravo" , rdo-list at redhat.com > Sent: Tuesday, October 13, 2015 5:02:09 PM > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" > > During deployment they are powering on and deploying the images. I see a lot of > connection error messages about ironic-python-agent but ignore them as > mentioned here > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) That was referring to the introspection stage. From what I can tell you are experiencing issues during deployment as it fails to provision the nova instances, can you check if during that stage the nodes get powered on? Make sure that before overcloud deploy the ironic nodes are available for provisioning (ironic node-list and check the provisioning state column). Also check that you didn't miss any step in the docs regarding kernel and ramdisk assignment, introspection, and flavor creation (so it matches the nodes' resources): https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > In the instackenv.json file I do not need to add the undercloud node, or do I? No, the node details should be enough. > And which log files should I watch during deployment? You can check the openstack-ironic-conductor logs (journalctl -fl -u openstack-ironic-conductor.service) and the logs in /var/log/nova.
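To put all of that together, here is a rough pre-flight check to run before kicking off the deploy (just a sketch; the bm-deploy image names and the baremetal flavor below are the defaults from the docs, so adjust them if your environment uses different ones):

source ~/stackrc
# every node should show Provisioning State "available" and Maintenance False
ironic node-list
# the deploy kernel/ramdisk should be uploaded (default names assumed here)
glance image-list | grep bm-deploy
# the flavor should match the resources you registered in instackenv.json
openstack flavor show baremetal
# then, in separate terminals while the deploy runs:
sudo journalctl -fl -u openstack-ironic-conductor.service
sudo tail -f /var/log/nova/nova-compute.log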
> Thanks > Esra > > > ----- Original Message ----- From: Marius Cornea To: > Esra Celik Cc: Ignacio Bravo > , rdo-list at redhat.com Sent: Tue, 13 Oct > 2015 17:25:00 +0300 (EEST) Subject: Re: [Rdo-list] OverCloud deploy fails with > error "No valid host was found" > > ----- Original Message -----> From: "Esra Celik" > > To: "Ignacio Bravo" > Cc: rdo-list at redhat.com> Sent: > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] OverCloud > deploy fails with error "No valid host was found"> > > > Actually I > re-installed the OS for the Undercloud before deploying. However I did> not > re-install the OS on the Compute and Controller nodes. I will reinstall the> basic > OS for them too, and retry. > > You don't need to reinstall the OS on the controller and compute, they will > get the image served by the undercloud. I'd recommend that during deployment > you watch the servers' consoles and make sure they get powered on, pxe boot, > and actually get the image deployed. > > Thanks > > > Thanks> > > > Esra ÇELİK> TÜBİTAK BİLGEM> > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > From: "Ignacio > > Bravo" > To: "Esra Celik" > > > Cc: rdo-list at redhat.com> Sent: > > Tuesday, 13 October 2015 16:36:06> Subject: Re: [Rdo-list] OverCloud deploy fails > > with error "No valid host was> found"> > Esra,> > I encountered the same > > problem after deleting the stack and re-deploying.> > It turns out that > > 'heat stack-delete overcloud' does remove the nodes from> > > 'nova list' and one would assume that the baremetal servers > > are now ready to> be used for the next stack, but when redeploying, I get > > the same message of> not enough hosts available.> > You can look into the > > nova logs and it mentions something about 'node xxx is> already > > associated with UUID yyyy' and 'I tried 3 times and I'm > > giving up'.> The issue is that the UUID yyyy belonged to a prior > > unsuccessful deployment.> > I'm now redeploying the basic OS to > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG Federal, Inc> > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, 2015, at 9:25 > > AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > > > OverCloud deploy fails with error "No valid host was found"> > > > [stack at undercloud ~]$ openstack overcloud deploy --templates> Deploying > > templates in the directory> /usr/share/openstack-tripleo-heat-templates> > > Stack failed with status: Resource CREATE failed: resources.Compute:> > > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR> > > due to "Message: No valid host was found.
There are not enough hosts> > > available., Code: 500"> Heat Stack create failed.> > Here are some logs:> > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue Oct > > 13> 16:18:17 2015> > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > | resource_name | physical_resource_id | resource_type | resource_status > > |> | updated_time | stack_name |> > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | Controller | > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | | > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> | > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | > > CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |> | > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server |> | > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute | > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | CREATE_FAILED > > | 2015-10-13T10:20:56 |> | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > |> > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute> > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > | Property | Value |> > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > | attributes | { |> | | "attributes": null, |> | | "refs": null |> | | } > > |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | links |> > > | > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > | (self) |> | | > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > | | (stack) |> | | > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > | | (nested) |> | logical_resource_id | Compute |> | physical_resource_id > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | | > > ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment |> | | > > 
AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | Compute |> > > | resource_status | CREATE_FAILED |> | resource_status_reason | > > resources.Compute: ResourceInError:> | resources[0].resources.NovaCompute: > > Went to status ERROR due to "Message:> | No valid host was found. There > > are not enough hosts available., Code: 500"> | |> | resource_type | > > OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 |> > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > This is my instackenv.json for 1 compute and 1 control node to be > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> "mac":[> > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> "disk":"10",> > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> "mac":[> > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> "disk":"100",> > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? Thanks in advance> > > > > Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> > > celik.esra at tubitak.gov.tr> > > > _______________________________________________> Rdo-list mailing list> > > Rdo-list at redhat.com> https://www.redhat.com/mailman/listinfo/rdo-list> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > > > _______________________________________________> Rdo-list mailing list> > > Rdo-list at redhat.com> https://www.redhat.com/mailman/listinfo/rdo-list> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Tue Oct 13 19:36:42 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 13 Oct 2015 15:36:42 -0400 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: <561CCF58.30704@redhat.com> References: <561CCF58.30704@redhat.com> Message-ID: <561D5D4A.30205@redhat.com> On 10/13/2015 05:31 AM, Fr?d?ric Lepied wrote: > Hi, > > After some discussions, we came up with the idea to have the RDO > project owns its build infrastructure. To play with this idea, we > propose to experiment to use an OpenStack like workflow to build RDO > packages by deploying a Software Factory instance > (http://softwarefactory-project.io) on top of an RDO or RHEL-OSP > cloud. That will allow us to have our own Gerrit, Zuul, Nodepool and > Jenkins instances like in the OpenStack upstream project while adding > our package building specific needs like using the Delorean and > Mock/Koji/Mash machineries. > > The objectives from these changes are: > > 1. to have a full gating CI for RDO to never break the package repository. > 2. to be in control of our infrastructure. > 3. to simplify the work-flow where we can to make it more efficient and > easier to grasp. > > Nothing is set in stone so feel free to comment or ask questions. > My question would be what hardware this runs on, and who's on the hook to be the sysadmin. OSAS (The Open Source and Standards group within Red Hat) has a community server cage that can be dipped into for this, but if you have something else in mind, that's fine. I'm also wondering if this is Yet Another, or if this will actually consolidate various of the far-flung bits. I echo David's concern that at the end of this we'll have N+1 bits to keep track of. 
I just opened https://github.com/redhat-openstack/website/issues/131 to try to put together a document that shows where all the bits are, and how they work with one another. It seems to get more complicated all the time, and I don't know that we have it well documented anywhere. If we do, I haven't seen it. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From shayne.alone at gmail.com Tue Oct 13 19:50:23 2015 From: shayne.alone at gmail.com (AliReza Taleghani) Date: Tue, 13 Oct 2015 19:50:23 +0000 Subject: [Rdo-list] Overcloud Horizon Message-ID: The overcloud has finally been deployed via the following: $ openstack overcloud deploy --compute-scale 4 --templates --compute-flavor compute --control-flavor control http://paste.ubuntu.com/12775291/ It seems I have missed something, because I wished to have Horizon at the end, but it doesn't appear to be enabled right now. Do I need to add any other templates, or better, how can I force my controller to serve the Horizon service, if that's possible... tnx -- Sincerely, Ali R. Taleghani -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Tue Oct 13 18:43:53 2015 From: hguemar at fedoraproject.org (Haïkel) Date: Tue, 13 Oct 2015 20:43:53 +0200 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: <561CCF58.30704@redhat.com> References: <561CCF58.30704@redhat.com> Message-ID: As David said, our infrastructure is scattered and has been a constantly moving target, so I'd consider this a step toward the right direction. The advantages of this proposal being: 1. stable infrastructure => less time spent on fixing the infrastructure 2. customizable workflow thanks to zuul which could only result in improving quality 3. close the gap between upstream and downstream infrastructure 4. self-hosted on RDO => that's a very important one I don't see any real negative points, so unless someone has a different proposal, I suggest that we start with a PoC. As the Liberty cycle is about to finish, we'll get some bandwidth to work on our infrastructure so this is the best time to discuss this. Consolidating our infrastructure is a primary goal to open up RDO governance further. Regards, H. From dsneddon at redhat.com Tue Oct 13 22:54:25 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 13 Oct 2015 15:54:25 -0700 Subject: [Rdo-list] Overcloud Horizon In-Reply-To: References: Message-ID: <561D8BA1.8030004@redhat.com> On 10/13/2015 12:50 PM, AliReza Taleghani wrote: > The overcloud has finally been deployed via the following: > $ openstack overcloud deploy --compute-scale 4 --templates > --compute-flavor compute --control-flavor control > http://paste.ubuntu.com/12775291/ > > It seems I have missed something, because I wished to have Horizon at > the end, but it doesn't appear to be enabled right now. > > Do I need to add any other templates, or better, how can I force my > controller to serve the Horizon service, if that's possible... > > tnx > -- > Sincerely, > Ali R. Taleghani > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > Are you sure that Horizon isn't listening somewhere? You can run "keystone endpoint-list | grep dashboard" against the overcloud (source overcloudrc on the Undercloud, for instance).
You should have something like: | a4b435f0917c42e9b84184c1502e4327 | regionOne | http://10.0.0.4:80/dashboard/ | http://10.0.0.4:80/dashboard/ | http://10.0.0.4:80/dashboard/admin | cce915f019684d17a601254437ab59ee | It's possible that Horizon is listening on a different IP than you are expecting. If so, you can use SSH tunnels to connect to the external interface and have SSH port forward to the real IP/port. Something like: ssh -L 8080:10.0.0.4:80 heat-admin at controller-external-IP Then you can connect to http://localhost:8080/dashboard to connect to Horizon. If you want more control over where the dashboard is listening, then you need to use the Advanced Configuration instructions for Network Isolation. Please report back if you don't find that Horizon is listening on any IP/port. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From outbackdingo at gmail.com Wed Oct 14 00:33:14 2015 From: outbackdingo at gmail.com (Outback Dingo) Date: Wed, 14 Oct 2015 11:33:14 +1100 Subject: [Rdo-list] Best known working OS for RDO packstack Message-ID: ok so whats the current best known working iso for RDO packstack... Ive got a couple blades here id like to do an all-in-one on one blade then join a secondary compute only node., thoughts and input appreciated, as i dont want to jump through hoops like last time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkassawara at gmail.com Wed Oct 14 01:00:53 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Tue, 13 Oct 2015 19:00:53 -0600 Subject: [Rdo-list] Liberty package availability Message-ID: Hi, The OpenStack documentation team is updating the installation guide for RHEL/CentOS/Fedora on docs.openstack.org and came across the following issues: 1) The MySQL database library changed from MySQL-python to PyMySQL [1], but we cannot find PyMySQL packages for RHEL or CentOS 7. 2) The usual place for packages [2] contains nothing for any of these distributions. The trunk directory [3] provides a redirect to packages built from master (Mitaka) instead of Liberty. The Liberty Test Day 1 site [4] mentions Fedora, but the packages are built from master instead of Liberty. The Liberty Test Day 2 site [5] appears to lack Fedora packages. We cannot publish the installation guide for any of these distributions without sufficient packages. Can anyone help? Thanks, Matt [1] https://review.openstack.org/#/c/184392/ [2] https://repos.fedorapeople.org/repos/openstack/openstack-liberty/ [3] https://repos.fedorapeople.org/repos/openstack/openstack-trunk/00README.txt [4] http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ [5] http://beta.rdoproject.org/testday/rdo-test-day-liberty-02/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Wed Oct 14 00:53:40 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 13 Oct 2015 20:53:40 -0400 Subject: [Rdo-list] Best known working OS for RDO packstack In-Reply-To: References: Message-ID: i do believe centos/rhel 7.1 is the latest and greatest and all development kilo onwards has been on the 7 series - if that is your question On Tue, Oct 13, 2015 at 8:33 PM, Outback Dingo wrote: > ok so whats the current best known working iso for RDO packstack... 
I've > got a couple blades here, > I'd like to do an all-in-one on one blade, then join a secondary compute > only node. > > Thoughts and input appreciated, as I don't want to jump through hoops like > last time. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- 805010942448935 GR750055912MA Link to me on LinkedIn -------------- next part -------------- An HTML attachment was scrubbed... URL: From sasha at redhat.com Wed Oct 14 02:07:45 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Tue, 13 Oct 2015 22:07:45 -0400 (EDT) Subject: [Rdo-list] Overcloud deploy stuck for a long time In-Reply-To: References: <561C1722.9050608@redhat.com> Message-ID: <985767218.56816371.1444788465718.JavaMail.zimbra@redhat.com> I hit the same (or similar) issue on my BM environment, though I managed to complete the 1+1 deployment on VM successfully. I see it's reported already: https://bugzilla.redhat.com/show_bug.cgi?id=1271289 Ran a deployment with: openstack overcloud deploy --templates --timeout 90 --compute-scale 3 --control-scale 1 The deployment fails, and I see that "all minus one" overcloud nodes are still in BUILD status.

[stack at undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| b15f499e-79ed-46b2-b990-878dbe6310b1 | overcloud-controller-0  | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.23 |
| 4877d14a-e34e-406b-8005-dad3d79f5bab | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 0fd1a7ed-367e-448e-8602-8564bf087e92 | overcloud-novacompute-1 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.21 |
| 51630a7d-c140-47b9-a071-1f2fdb45f4b4 | overcloud-novacompute-2 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.22 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

Will try to investigate further tomorrow. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Tzach Shefi" > To: "Dan Sneddon" > Cc: rdo-list at redhat.com > Sent: Tuesday, October 13, 2015 6:01:48 AM > Subject: Re: [Rdo-list] Overcloud deploy stuck for a long time > > So gave it a few more hours, on heat resource nothing is failed only > create_complete and some init_complete.
> > Nova show > | 61aaed37-4993-4165-93a7-3c9bf6b10a21 | overcloud-controller-0 | ACTIVE | - > | | Running | ctlplane=192.0.2.8 | > | 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 | overcloud-novacompute-0 | BUILD | > | spawning | NOSTATE | ctlplane=192.0.2.9 | > > > nova show 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 > +--------------------------------------+----------------------------------------------------------+ > | Property | Value | > +--------------------------------------+----------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | nova | > | OS-EXT-SRV-ATTR:host | instack.localdomain | > | OS-EXT-SRV-ATTR:hypervisor_hostname | 4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 > | | > | OS-EXT-SRV-ATTR:instance_name | instance-00000002 | > | OS-EXT-STS:power_state | 0 | > | OS-EXT-STS:task_state | spawning | > | OS-EXT-STS:vm_state | building | > > Checking nova log this is what I see: > > nova-compute.log:{"nodes": [{"target_power_state": null, "links": [{"href": " > http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 ", > "rel": "self"}, {"href": " > http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 ", "rel": > "bookmark"}], "extra": {}, "last_error": " Failed to change power state to > 'power on'. Error: Failed to execute command via SSH : LC_ALL=C > /usr/bin/virsh --connect qemu:///system start baremetalbrbm_1.", > "updated_at": "2015-10-12T14:36:08+00:00", "maintenance_reason": null, > "provision_state": "deploying", "clean_step": {}, "uuid": > "4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", "console_enabled": false, > "target_provision_state": "active", "provision_updated_at": > "2015-10-12T14:35:18+00:00", "power_state": "power off", > "inspection_started_at": null, "inspection_finished_at": null, > "maintenance": false, "driver": "pxe_ssh", "reservation": null, > "properties": {"memory_mb": "4096", "cpu_arch": "x86_64", "local_gb": "40", > "cpus": "1", "capabilities": "boot_option:local"}, "instance_uuid": > "7f9f4f52-3ee6-42d9-9275-ff88582dd6e7", "name": null, "driver_info": > {"ssh_username": "root", "deploy_kernel": > "94cc528d-d91f-4ca7-876e-2d8cbec66f1b", "deploy_ramdisk": > "057d3b42-002a-4c24-bb3f-2032b8086108", "ssh_key_contents": "-----BEGIN( I > removed key..)END RSA PRIVATE KEY-----", "ssh_virt_type": "virsh", > "ssh_address": "192.168.122.1"}, "created_at": "2015-10-12T14:26:30+00:00", > "ports": [{"href": " > http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports ", > "rel": "self"}, {"href": " > http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports ", > "rel": "bookmark"}], "driver_internal_info": {"clean_steps": null, > "root_uuid_or_disk_id": "9ff90423-9d18-4dd1-ae96-a4466b52d9d9", > "is_whole_disk_image": false}, "instance_info": {"ramdisk": > "82639516-289d-4603-bf0e-8131fa75ec46", "kernel": > "665ffcb0-2afe-4e04-8910-45b92826e328", "root_gb": "40", "display_name": > "overcloud-novacompute-0", "image_source": > "d99f460e-c6d9-4803-99e4-51347413f348", "capabilities": "{\"boot_option\": > \"local\"}", "memory_mb": "4096", "vcpus": "1", "deploy_key": > "BI0FRWDTD4VGHII9JK2BYDDFR8WB1WUG", "local_gb": "40", "configdrive": > 
"H4sICGDEG1YC/3RtcHpwcWlpZQDt3WuT29iZ2HH02Bl7Fe/G5UxSqS3vLtyesaSl2CR4p1zyhk2Ct+ateScdVxcIgiR4A5sAr95xxa/iVOUz7EfJx8m7rXyE5IDslro1mpbGox15Zv6/lrpJ4AAHN/LBwXMIShIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADhJpvx+5UQq5EqNtvzldGs+MIfewJeNv53f/7n354F6xT/3v/TjH0v/chz0L5+8Gv2f3V+n0s+Pz34u/dj982PJfvSTvxFVfXQ7vfyBlRfGvOZo+kQuWWtNVgJn/jO/d6kHzvrGWlHOjGn0TDfmjmXL30kZtZSrlXPFREaVxQM5Hon4fdl0TU7nCmqtU6urRTlZVRP1clV+knwqK/F4UFbPOuVGKZNKFNTbgVFvwO+PyPmzipqo1solX/6slszmCuKozBzKuKPdMlE5ma > > > Any ideas on how to resolve a stuck spawning compute node, it's stuck hasn't > changed for a few hours now. > > Tzach > > Tzach > > > On Mon, Oct 12, 2015 at 11:25 PM, Dan Sneddon < dsneddon at redhat.com > wrote: > > > > On 10/12/2015 08:10 AM, Tzach Shefi wrote: > > Hi, > > > > Server running centos 7.1, vm running for undercloud got up to > > overcloud deploy stage. > > It looks like its stuck nothing advancing for a while. > > Ideas, what to check? > > > > [stack at instack ~]$ openstack overcloud deploy --templates > > Deploying templates in the directory > > /usr/share/openstack-tripleo-heat-templates > > [91665.696658] device vnet2 entered promiscuous mode > > [91665.781346] device vnet3 entered promiscuous mode > > [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff > > [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff > > [91767.799404] kvm: zapping shadow pages for mmio generation wraparound > > [91767.880480] kvm: zapping shadow pages for mmio generation wraparound > > [91768.957761] device vnet2 left promiscuous mode > > [91769.799446] device vnet3 left promiscuous mode > > [91771.223273] device vnet3 entered promiscuous mode > > [91771.232996] device vnet2 entered promiscuous mode > > [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff > > [91801.270510] device vnet2 left promiscuous mode > > > > > > Thanks > > Tzach > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > You're going to need a more complete command line than "openstack > overcloud deploy --templates". For instance, if you are using VMs for > your overcloud nodes, you will need to include "--libvirt-type qemu". > There are probably a couple of other parameters that you will need. 
> > You can watch the deployment using this command, which will show you > the progress: > > watch "heat resource-list -n 5 | grep -v COMPLETE" > > You can also explore which resources have failed: > > heat resource-list [-n 5]| grep FAILED > > And then look more closely at the failed resources: > > heat resource-show overcloud > > There are some more complete troubleshooting instructions here: > > http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > -- > Tzach Shefi > Quality Engineer, Redhat OSP > +972-54-4701080 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From shayne.alone at gmail.com Wed Oct 14 06:07:20 2015 From: shayne.alone at gmail.com (AliReza Taleghani) Date: Wed, 14 Oct 2015 09:37:20 +0330 Subject: [Rdo-list] Cinder API Call error Message-ID: Hi; When I Lunch new instance there is an error ins logs as following: ##### Oct 14 06:00:25 overcloud-controller-0 cinder-api[27185]: 2015-10-14 06:00:25.332 27232 ERROR cinder.api.middleware.fault [req-e0e7a1f4-6caf-422e-ace1-381d87a85b11 9664b863bbba4ff4a1bf5936ce2202c2 a1572260d6f14c4a8f0e1a209eeeb7b4 - - -] Caught error: Authorization failed: Unable to establish connection to http://localhost:5000/v3/auth/tokens ###### instance disk image can't be attached so the instance don't get boot from disk... I change glance api version on cinder.conf into 1 and restart all cinder services but do not help to overcome the problem. I have deployed overcloud via TripleO good known Trunk undercloud.... Sincerely, Ali R. Taleghani @linkedIn -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Wed Oct 14 06:58:09 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 14 Oct 2015 08:58:09 +0200 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: References: <561CCF58.30704@redhat.com> Message-ID: > As David said, our infrastructure is scattered and as been a > constantly moving target so I'd consider this a step toward the right direction. Agreed but we also started consolidation around CentOS CloudSIG using CentOS provided infra and I think we should keep that direction. IMHO it would make sense to include this available hardware and this new RDO based cloud to ci.centos.org so we can continue already choosen consolidation direction. Cheers, Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tshefi at redhat.com Wed Oct 14 08:16:46 2015 From: tshefi at redhat.com (Tzach Shefi) Date: Wed, 14 Oct 2015 11:16:46 +0300 Subject: [Rdo-list] Overcloud deploy stuck for a long time In-Reply-To: <985767218.56816371.1444788465718.JavaMail.zimbra@redhat.com> References: <561C1722.9050608@redhat.com> <985767218.56816371.1444788465718.JavaMail.zimbra@redhat.com> Message-ID: Hi Sasha\Dan, Yep that's my bug I opened yesterday about this. 
sshd and firewall rules look OK having tested below: I can ssh into the virt host from my laptop with root user, checking 10.X.X.X net Can also ssh from instack vm to virt host, checking 192.168.122.X net. Unless I should check ssh with other user, if so which ? I doubt ssh user/firewall caused the problem as controller was installed successfully and it too uses same procedure ssh virt power-on method. Deployment is still up & stuck if any one ones to take a look contact me for access details in private. Will review/use virt console, virt journal and timeout tips on next deployment. Thanks Tzach On Wed, Oct 14, 2015 at 5:07 AM, Sasha Chuzhoy wrote: > I hit the same (or similar) issue on my BM environment, though I manage to > complete the 1+1 deployment on VM successfully. > I see it's reported already: > https://bugzilla.redhat.com/show_bug.cgi?id=1271289 > > Ran a deployment with: openstack overcloud deploy --templates --timeout > 90 --compute-scale 3 --control-scale 1 > The deployment fails, and I see that "all minus one" overcloud nodes are > still in BUILD status. > > [stack at undercloud ~]$ nova list > > +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ > | ID | Name | Status > | Task State | Power State | Networks | > > +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ > | b15f499e-79ed-46b2-b990-878dbe6310b1 | overcloud-controller-0 | BUILD > | spawning | NOSTATE | ctlplane=192.0.2.23 | > | 4877d14a-e34e-406b-8005-dad3d79f5bab | overcloud-novacompute-0 | ACTIVE > | - | Running | ctlplane=192.0.2.9 | > | 0fd1a7ed-367e-448e-8602-8564bf087e92 | overcloud-novacompute-1 | BUILD > | spawning | NOSTATE | ctlplane=192.0.2.21 | > | 51630a7d-c140-47b9-a071-1f2fdb45f4b4 | overcloud-novacompute-2 | BUILD > | spawning | NOSTATE | ctlplane=192.0.2.22 | > > > Will try to investigate further tomorrow. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Tzach Shefi" > > To: "Dan Sneddon" > > Cc: rdo-list at redhat.com > > Sent: Tuesday, October 13, 2015 6:01:48 AM > > Subject: Re: [Rdo-list] Overcloud deploy stuck for a long time > > > > So gave it a few more hours, on heat resource nothing is failed only > > create_complete and some init_complete. 
> > > > Nova show > > | 61aaed37-4993-4165-93a7-3c9bf6b10a21 | overcloud-controller-0 | ACTIVE > | - > > | | Running | ctlplane=192.0.2.8 | > > | 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 | overcloud-novacompute-0 | BUILD > | > > | spawning | NOSTATE | ctlplane=192.0.2.9 | > > > > > > nova show 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 > > > +--------------------------------------+----------------------------------------------------------+ > > | Property | Value | > > > +--------------------------------------+----------------------------------------------------------+ > > | OS-DCF:diskConfig | MANUAL | > > | OS-EXT-AZ:availability_zone | nova | > > | OS-EXT-SRV-ATTR:host | instack.localdomain | > > | OS-EXT-SRV-ATTR:hypervisor_hostname | > 4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 > > | | > > | OS-EXT-SRV-ATTR:instance_name | instance-00000002 | > > | OS-EXT-STS:power_state | 0 | > > | OS-EXT-STS:task_state | spawning | > > | OS-EXT-STS:vm_state | building | > > > > Checking nova log this is what I see: > > > > nova-compute.log:{"nodes": [{"target_power_state": null, "links": > [{"href": " > > http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 ", > > "rel": "self"}, {"href": " > > http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 ", > "rel": > > "bookmark"}], "extra": {}, "last_error": " Failed to change power state > to > > 'power on'. Error: Failed to execute command via SSH : LC_ALL=C > > /usr/bin/virsh --connect qemu:///system start baremetalbrbm_1.", > > "updated_at": "2015-10-12T14:36:08+00:00", "maintenance_reason": null, > > "provision_state": "deploying", "clean_step": {}, "uuid": > > "4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", "console_enabled": false, > > "target_provision_state": "active", "provision_updated_at": > > "2015-10-12T14:35:18+00:00", "power_state": "power off", > > "inspection_started_at": null, "inspection_finished_at": null, > > "maintenance": false, "driver": "pxe_ssh", "reservation": null, > > "properties": {"memory_mb": "4096", "cpu_arch": "x86_64", "local_gb": > "40", > > "cpus": "1", "capabilities": "boot_option:local"}, "instance_uuid": > > "7f9f4f52-3ee6-42d9-9275-ff88582dd6e7", "name": null, "driver_info": > > {"ssh_username": "root", "deploy_kernel": > > "94cc528d-d91f-4ca7-876e-2d8cbec66f1b", "deploy_ramdisk": > > "057d3b42-002a-4c24-bb3f-2032b8086108", "ssh_key_contents": "-----BEGIN( > I > > removed key..)END RSA PRIVATE KEY-----", "ssh_virt_type": "virsh", > > "ssh_address": "192.168.122.1"}, "created_at": > "2015-10-12T14:26:30+00:00", > > "ports": [{"href": " > > > http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports > ", > > "rel": "self"}, {"href": " > > http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports > ", > > "rel": "bookmark"}], "driver_internal_info": {"clean_steps": null, > > "root_uuid_or_disk_id": "9ff90423-9d18-4dd1-ae96-a4466b52d9d9", > > "is_whole_disk_image": false}, "instance_info": {"ramdisk": > > "82639516-289d-4603-bf0e-8131fa75ec46", "kernel": > > "665ffcb0-2afe-4e04-8910-45b92826e328", "root_gb": "40", "display_name": > > "overcloud-novacompute-0", "image_source": > > "d99f460e-c6d9-4803-99e4-51347413f348", "capabilities": > "{\"boot_option\": > > \"local\"}", "memory_mb": "4096", "vcpus": "1", "deploy_key": > > "BI0FRWDTD4VGHII9JK2BYDDFR8WB1WUG", "local_gb": "40", "configdrive": > > > 
"H4sICGDEG1YC/3RtcHpwcWlpZQDt3WuT29iZ2HH02Bl7Fe/G5UxSqS3vLtyesaSl2CR4p1zyhk2Ct+ateScdVxcIgiR4A5sAr95xxa/iVOUz7EfJx8m7rXyE5IDslro1mpbGox15Zv6/lrpJ4AAHN/LBwXMIShIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADhJpvx+5UQq5EqNtvzldGs+MIfewJeNv53f/7n354F6xT/3v/TjH0v/chz0L5+8Gv2f3V+n0s+Pz34u/dj982PJfvSTvxFVfXQ7vfyBlRfGvOZo+kQuWWtNVgJn/jO/d6kHzvrGWlHOjGn0TDfmjmXL30kZtZSrlXPFREaVxQM5Hon4fdl0TU7nCmqtU6urRTlZVRP1clV+knwqK/F4UFbPOuVGKZNKFNTbgVFvwO+PyPmzipqo1solX/6slszmCuKozBzKuKPdMlE5ma > > > > > > Any ideas on how to resolve a stuck spawning compute node, it's stuck > hasn't > > changed for a few hours now. > > > > Tzach > > > > Tzach > > > > > > On Mon, Oct 12, 2015 at 11:25 PM, Dan Sneddon < dsneddon at redhat.com > > wrote: > > > > > > > > On 10/12/2015 08:10 AM, Tzach Shefi wrote: > > > Hi, > > > > > > Server running centos 7.1, vm running for undercloud got up to > > > overcloud deploy stage. > > > It looks like its stuck nothing advancing for a while. > > > Ideas, what to check? > > > > > > [stack at instack ~]$ openstack overcloud deploy --templates > > > Deploying templates in the directory > > > /usr/share/openstack-tripleo-heat-templates > > > [91665.696658] device vnet2 entered promiscuous mode > > > [91665.781346] device vnet3 entered promiscuous mode > > > [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 data > 0xffff > > > [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 data > 0xffff > > > [91767.799404] kvm: zapping shadow pages for mmio generation wraparound > > > [91767.880480] kvm: zapping shadow pages for mmio generation wraparound > > > [91768.957761] device vnet2 left promiscuous mode > > > [91769.799446] device vnet3 left promiscuous mode > > > [91771.223273] device vnet3 entered promiscuous mode > > > [91771.232996] device vnet2 entered promiscuous mode > > > [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 data > 0xffff > > > [91801.270510] device vnet2 left promiscuous mode > > > > > > > > > Thanks > > > Tzach > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > You're going to need a more complete command line than "openstack > > overcloud deploy --templates". For instance, if you are using VMs for > > your overcloud nodes, you will need to include "--libvirt-type qemu". > > There are probably a couple of other parameters that you will need. 
> > > > You can watch the deployment using this command, which will show you > > the progress: > > > > watch "heat resource-list -n 5 | grep -v COMPLETE" > > > > You can also explore which resources have failed: > > > > heat resource-list [-n 5]| grep FAILED > > > > And then look more closely at the failed resources: > > > > heat resource-show overcloud > > > > There are some more complete troubleshooting instructions here: > > > > > http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html > > > > -- > > Dan Sneddon | Principal OpenStack Engineer > > dsneddon at redhat.com | redhat.com/openstack > > 650.254.4025 | dsneddon:irc @dxs:twitter > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > -- > > Tzach Shefi > > Quality Engineer, Redhat OSP > > +972-54-4701080 > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *Tzach Shefi* Quality Engineer, Redhat OSP +972-54-4701080 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Wed Oct 14 08:40:34 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 14 Oct 2015 10:40:34 +0200 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: References: <561CCF58.30704@redhat.com> Message-ID: 2015-10-14 8:58 GMT+02:00 Alan Pevec : >> As David said, our infrastructure is scattered and as been a >> constantly moving target so I'd consider this a step toward the right >> direction. > > Agreed but we also started consolidation around CentOS CloudSIG using CentOS > provided infra and I think we should keep that direction. > IMHO it would make sense to include this available hardware and this new RDO > based cloud to ci.centos.org so we can continue already choosen > consolidation direction. > > Cheers, > Alan +2 CC'ing KB as he might be interested in hearing about Software Factory and this RDO based Cloud. http://softwarefactory.enovance.com/ Regards, H. From celik.esra at tubitak.gov.tr Wed Oct 14 08:49:01 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Wed, 14 Oct 2015 11:49:01 +0300 (EEST) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <278705446.41253505.1444760174635.JavaMail.zimbra@redhat.com> References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com> <1316900159.3778086.1444748529044.JavaMail.zimbra@tubitak.gov.tr> <278705446.41253505.1444760174635.JavaMail.zimbra@redhat.com> Message-ID: <637795359.4087130.1444812541830.JavaMail.zimbra@tubitak.gov.tr> Well today I started with re-installing the OS and nothing seems wrong with undercloud installation, then; I see an error during image build [stack at undercloud ~]$ openstack overcloud image build --all ... a lot of log ... 
++ cat /etc/dib_dracut_drivers + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net virtio_blk target_core_mod iscsi_target_mod target_core_iblock target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk cat: write error: Broken pipe + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel + chmod o+r /tmp/kernel + trap EXIT + target_tag=99-build-dracut-ramdisk + date +%s.%N + output '99-build-dracut-ramdisk completed' ... a lot of log ... Then, during introspection stage I see ironic-python-agent errors on nodes (screenshot attached) and the following warnings [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error" Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from group "pxe" is deprecated. Use option "http_url" from group "deploy". Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from group "pxe" is deprecated. Use option "http_root" from group "deploy". Before deployment ironic node-list: [stack at undercloud ~]$ ironic node-list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | available | False | | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | available | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ During deployment I get following errors [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error" Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status"for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while running command. Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command. Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI call failed: power status.. Thanks a lot ----- Orijinal Mesaj ----- Kimden: "Marius Cornea" Kime: "Esra Celik" Kk: "Ignacio Bravo" , rdo-list at redhat.com G?nderilenler: 13 Ekim Sal? 
2015 21:16:14 Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" ----- Original Message ----- > From: "Esra Celik" > To: "Marius Cornea" > Cc: "Ignacio Bravo" , rdo-list at redhat.com > Sent: Tuesday, October 13, 2015 5:02:09 PM > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" > > During deployment they are powering on and deploying the images. I see lot of > connection error messages about ironic-python-agent but ignore them as > mentioned here > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) That was referring to the introspection stage. From what I can tell you are experiencing issues during deployment as it fails to provision the nova instances, can you check if during that stage the nodes get powered on? Make sure that before overcloud deploy the ironic nodes are available for provisioning (ironic node-list and check the provisioning state column). Also check that you didn't miss any step in the docs in regards to kernel and ramdisk assignment, introspection, flavor creation(so it matches the nodes resources) https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > In instackenv.json file I do not need to add the undercloud node, or do I? No, the nodes details should be enough. > And which log files should I watch during deployment? You can check the openstack-ironic-conductor logs(journalctl -fl -u openstack-ironic-conductor.service) and the logs in /var/log/nova. > Thanks > Esra > > > ----- Orijinal Mesaj -----Kimden: Marius Cornea Kime: > Esra Celik Kk: Ignacio Bravo > , rdo-list at redhat.comGönderilenler: Tue, 13 Oct > 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy fails with > error "No valid host was found" > > ----- Original Message -----> From: "Esra Celik" > > To: "Ignacio Bravo" > Cc: rdo-list at redhat.com> Sent: > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] OverCloud > deploy fails with error "No valid host was found"> > > > Actually I > re-installed the OS for Undercloud before deploying. However I did> not > re-install the OS in Compute and Controller nodes.. I will reinstall> basic > OS for them too, and retry.. > > You don't need to reinstall the OS on the controller and compute, they will > get the image served by the undercloud. I'd recommend that during deployment > you watch the servers console and make sure they get powered on, pxe boot, > and actually get the image deployed. > > Thanks > > > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: "Ignacio > > Bravo" > Kime: "Esra Celik" > > > Kk: rdo-list at redhat.com> Gönderilenler: > > 13 Ekim Sal? 
2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy fails > > with error "No valid host was> found"> > Esra,> > I encountered the same > > problem after deleting the stack and re-deploying.> > It turns out that > > 'heat stack-delete overcloud’ does remove the nodes from> > > ‘nova list’ and one would assume that the baremetal servers > > are now ready to> be used for the next stack, but when redeploying, I get > > the same message of> not enough hosts available.> > You can look into the > > nova logs and it mentions something about ‘node xxx is> already > > associated with UUID yyyy’ and ‘I tried 3 times and I’m > > giving up’.> The issue is that the UUID yyyy belonged to a prior > > unsuccessful deployment.> > I’m now redeploying the basic OS to > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG Federal, Inc> > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, 2015, at 9:25 > > AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > > > OverCloud deploy fails with error "No valid host was found"> > > > [stack at undercloud ~]$ openstack overcloud deploy --templates> Deploying > > templates in the directory> /usr/share/openstack-tripleo-heat-templates> > > Stack failed with status: Resource CREATE failed: resources.Compute:> > > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR> > > due to "Message: No valid host was found. There are not enough hosts> > > available., Code: 500"> Heat Stack create failed.> > Here are some logs:> > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue Oct > > 13> 16:18:17 2015> > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > | resource_name | physical_resource_id | resource_type | resource_status > > |> | updated_time | stack_name |> > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | Controller | > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | | > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> | > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | > > CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |> | > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server |> | > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute | > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | CREATE_FAILED > > | 2015-10-13T10:20:56 |> | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > |> > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > [stack at 
undercloud ~]$ heat resource-show overcloud Compute> > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > | Property | Value |> > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > | attributes | { |> | | "attributes": null, |> | | "refs": null |> | | } > > |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | links |> > > | > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > | (self) |> | | > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > | | (stack) |> | | > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > | | (nested) |> | logical_resource_id | Compute |> | physical_resource_id > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | | > > ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment |> | | > > AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | Compute |> > > | resource_status | CREATE_FAILED |> | resource_status_reason | > > resources.Compute: ResourceInError:> | resources[0].resources.NovaCompute: > > Went to status ERROR due to "Message:> | No valid host was found. There > > are not enough hosts available., Code: 500"> | |> | resource_type | > > OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 |> > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > This is my instackenv.json for 1 compute and 1 control node to be > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> "mac":[> > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> "disk":"10",> > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> "mac":[> > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> "disk":"100",> > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? Thanks in advance> > > > > Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> > > celik.esra at tubitak.gov.tr> > > > _______________________________________________> Rdo-list mailing list> > > Rdo-list at redhat.com> https://www.redhat.com/mailman/listinfo/rdo-list> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > > > _______________________________________________> Rdo-list mailing list> > > Rdo-list at redhat.com> https://www.redhat.com/mailman/listinfo/rdo-list> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: rdo-screenshot.png Type: image/png Size: 138529 bytes Desc: not available URL: From bderzhavets at hotmail.com Wed Oct 14 09:18:57 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 14 Oct 2015 05:18:57 -0400 Subject: [Rdo-list] Best known working OS for RDO packstack In-Reply-To: References: Message-ID: 1. Best OS is CentOS 7.1 ( RHEL 7.1 ) 2. In general , (Controller/Network) node is not supposed to run VMs. Traffic coming outside && between VMs via AIO host might drop performance too much. Packstack does allow to set up configs like:- a) (Controller/Network) + (Compute) b) (Controller)+(Network)+(Compute) VXLAN tunnels between nodes seems to be a standard solution for RDO. Would you need answer file for 2 Node deployment it would be submitted Boris. From: outbackdingo at gmail.com Date: Wed, 14 Oct 2015 11:33:14 +1100 To: rdo-list at redhat.com Subject: [Rdo-list] Best known working OS for RDO packstack ok so whats the current best known working iso for RDO packstack... Ive got a couple blades hereid like to do an all-in-one on one blade then join a secondary compute only node., thoughts and input appreciated, as i dont want to jump through hoops like last time. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From frederic.lepied at redhat.com Wed Oct 14 09:24:51 2015 From: frederic.lepied at redhat.com (=?UTF-8?B?RnLDqWTDqXJpYyBMZXBpZWQ=?=) Date: Wed, 14 Oct 2015 11:24:51 +0200 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: References: <561CCF58.30704@redhat.com> Message-ID: <561E1F63.30104@redhat.com> On 10/14/2015 10:40 AM, Ha?kel wrote: > 2015-10-14 8:58 GMT+02:00 Alan Pevec : >>> As David said, our infrastructure is scattered and as been a >>> constantly moving target so I'd consider this a step toward the right >>> direction. >> Agreed but we also started consolidation around CentOS CloudSIG using CentOS >> provided infra and I think we should keep that direction. >> IMHO it would make sense to include this available hardware and this new RDO >> based cloud to ci.centos.org so we can continue already choosen >> consolidation direction. >> >> Cheers, >> Alan > +2 > CC'ing KB as he might be interested in hearing about Software Factory > and this RDO based Cloud. > http://softwarefactory.enovance.com/ > The right URL is now: http://softwarefactory-project.io/ -- Fred - May the Source be with you From mail-lists at karan.org Wed Oct 14 09:30:41 2015 From: mail-lists at karan.org (Karanbir Singh) Date: Wed, 14 Oct 2015 10:30:41 +0100 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: References: <561CCF58.30704@redhat.com> Message-ID: <561E20C1.6020702@karan.org> On 13/10/15 19:43, Ha?kel wrote: > As David said, our infrastructure is scattered and as been a > constantly moving target so I'd consider this a step toward the right > direction. > > The advantages of this proposal being: > 1. stable infrastructure => less time spent on fixing the infrastructure > 2. customizable workflow thanks to zuul which could only result in > improving quality > 3. close the gap between upstream and downstream infrastructure > 4. self-hosted on RDO => that's a very important one > > I don't see any real negative points, so unless someone has a > different proposal, I suggest that we start with a PoC. 
> As the Liberty cycle is about to finish, we'll get some bandwidth to
> work on our infrastructure so this is the best time to discuss this.
>
> Consolidating our infrastructure is a primary goal to open up RDO
> governance further.

One thing that I don't understand here is what value this adds over the
CentOS Build Services. We can integrate with an existing source control
setup (or use git.c.o) and we can do fairly extensive test hosting and
a release cadence built on that.

Or is the intention here to host Software Factory as an RDO-specific UI
backed by the CentOS pipeline?

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc

From mangelajo at redhat.com Wed Oct 14 10:44:07 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Wed, 14 Oct 2015 12:44:07 +0200
Subject: [Rdo-list] Best known working OS for RDO packstack
In-Reply-To: References: Message-ID: <561E31F7.3030907@redhat.com>

Boris Derzhavets wrote:
> 1. Best OS is CentOS 7.1 (RHEL 7.1).
> 2. In general, the (Controller/Network) node is not supposed to run VMs.
> Traffic coming from outside and between VMs via an AIO host might degrade
> performance too much.
>
> Packstack does allow to set up configs like:
> a) (Controller/Network) + (Compute)
> b) (Controller) + (Network) + (Compute)

Is (b) still available in packstack? I thought it was unified to only
support (a), but I could be wrong.

> VXLAN tunnels between nodes seem to be a standard solution for RDO.
> Should you need an answer file for a 2-node deployment, it can be submitted.

Cheers,
Miguel Ángel.

> Boris.
> From: outbackdingo at gmail.com
> Date: Wed, 14 Oct 2015 11:33:14 +1100
> To: rdo-list at redhat.com
> Subject: [Rdo-list] Best known working OS for RDO packstack
>
> ok so what's the current best known working ISO for RDO packstack... I've got a couple of blades here; I'd like to do an all-in-one on one blade, then join a secondary compute-only node.
> Thoughts and input appreciated, as I don't want to jump through hoops like last time.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From ggillies at redhat.com Wed Oct 14 11:25:15 2015
From: ggillies at redhat.com (Graeme Gillies)
Date: Wed, 14 Oct 2015 21:25:15 +1000
Subject: [Rdo-list] Software Factory for RDO experiment
In-Reply-To: <561E20C1.6020702@karan.org>
References: <561CCF58.30704@redhat.com> <561E20C1.6020702@karan.org>
Message-ID: <561E3B9B.9080200@redhat.com>

On 10/14/2015 07:30 PM, Karanbir Singh wrote:
> On 13/10/15 19:43, Haïkel wrote:
>> As David said, our infrastructure is scattered and has been a
>> constantly moving target so I'd consider this a step toward the right
>> direction.
>>
>> The advantages of this proposal being:
>> 1. stable infrastructure => less time spent on fixing the infrastructure
>> 2. customizable workflow thanks to zuul which could only result in
>> improving quality
>> 3. close the gap between upstream and downstream infrastructure
>> 4. self-hosted on RDO => that's a very important one
>>
>> I don't see any real negative points, so unless someone has a
>> different proposal, I suggest that we start with a PoC.
>> As the Liberty cycle is about to finish, we'll get some bandwidth to
>> work on our infrastructure so this is the best time to discuss this.
>>
>> Consolidating our infrastructure is a primary goal to open up RDO
>> governance further.
>
> One thing that I don't understand here is what value this adds over the
> CentOS Build Services. We can integrate with an existing source control
> setup (or use git.c.o) and we can do fairly extensive test hosting and
> a release cadence built on that.
>
> Or is the intention here to host Software Factory as an RDO-specific UI
> backed by the CentOS pipeline?

I think it might also be worthwhile making something a bit clearer as well. There are potentially two separate efforts in the works:

1) To unify the hosting infrastructure and platform that all services that are part of RDO run on. The current proposal is for a new OpenStack installation based on RDO itself to be deployed, in such a way that it's deployed and maintained transparently to the community (and indeed, conducive to community involvement). This has the potential to be useful not only for the RDO project, but potentially the CentOS project as well. The details of this are still being worked out, and I hope to speak more about this when I have something more concrete.

2) The potential to run an instance of the Software Factory software to leverage it as a workflow control tool for the development and shipping process of RDO itself. This discussion, I think, was the original purpose of this thread, and is not tied to the final outcome of 1).

Hope this helps.

Regards,

Graeme

--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From ukalifon at redhat.com Wed Oct 14 11:40:23 2015
From: ukalifon at redhat.com (Udi Kalifon)
Date: Wed, 14 Oct 2015 14:40:23 +0300
Subject: [Rdo-list] Overcloud deploy stuck for a long time
In-Reply-To: References: <561C1722.9050608@redhat.com> <985767218.56816371.1444788465718.JavaMail.zimbra@redhat.com>
Message-ID:

My overcloud deployment also hangs for 4 hours and then fails. This is what I got on the 1st run:

[stack@instack ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
ERROR: Authentication failed. Please try again with option --include-password or export HEAT_INCLUDE_PASSWORD=1
Authentication required

I am assuming the authentication error is due to the expiration of the token after 4 hours, and not because I forgot the rc file. I tried to run the deployment again and it failed after another 4 hours with a different error:

[stack@instack ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Stack failed with status: resources.Controller: resources[0]: ResourceInError: resources.Controller: Went to status ERROR due to "Message: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 9eedda9e-f381-47d4-a883-0fe40db0eb5e. Last exception: [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1, Code: 500"
Heat Stack update failed.
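For the authentication failure on the first run above, a minimal workaround sketch, assuming the cause is the token expiry Udi suspects (stackrc is the undercloud credentials file written by the undercloud install, and HEAT_INCLUDE_PASSWORD is the variable named in the error message itself):

source ~/stackrc
# resend the username/password with each heat call instead of relying only on
# the keystone token, which can expire during a long-running deploy
export HEAT_INCLUDE_PASSWORD=1
openstack overcloud deploy --templates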
The failed resources are:

heat resource-list -n 5 overcloud | egrep -v COMPLETE

| resource_name | physical_resource_id                 | resource_type           | resource_status | updated_time        | stack_name |
| Compute       | aee2604f-2580-44c9-bc38-45046970fd63 | OS::Heat::ResourceGroup | UPDATE_FAILED   | 2015-10-14T06:32:34 | overcloud |
| 0             | 2199c1c6-60ca-42a4-927c-8bf0fb8763b7 | OS::TripleO::Compute    | UPDATE_FAILED   | 2015-10-14T06:32:36 | overcloud-Compute-dq426vplp2nu |
| Controller    | 2ae19a5f-f88c-4d8b-98ec-952657b70cd6 | OS::Heat::ResourceGroup | UPDATE_FAILED   | 2015-10-14T06:32:36 | overcloud |
| 0             | 2fc3ed0c-da5c-45e4-a255-4b4a8ef58dd7 | OS::TripleO::Controller | UPDATE_FAILED   | 2015-10-14T06:32:38 | overcloud-Controller-ktbqsolaqm4u |
| NovaCompute   | 7938bbe0-ab97-499f-8859-15f903e7c09b | OS::Nova::Server        | CREATE_FAILED   | 2015-10-14T06:32:55 | overcloud-Compute-dq426vplp2nu-0-4acm6pstctor |
| Controller    | c1cd6b72-ec0d-4c13-b21c-10d0f6c45788 | OS::Nova::Server        | CREATE_FAILED   | 2015-10-14T06:32:58 | overcloud-Controller-ktbqsolaqm4u-0-d76rtersrtyt |

I was unable to run resource-show or deployment-show on the failed resources; it kept complaining that those resources are not found.

Thanks,
Udi.

On Wed, Oct 14, 2015 at 11:16 AM, Tzach Shefi wrote:

> Hi Sasha\Dan,
> Yep, that's my bug I opened yesterday about this.
>
> sshd and firewall rules look OK, having tested below:
> I can ssh into the virt host from my laptop with the root user, checking the 10.X.X.X net.
> Can also ssh from the instack VM to the virt host, checking the 192.168.122.X net.
>
> Unless I should check ssh with another user; if so, which?
> I doubt ssh user/firewall caused the problem, as the controller was installed
> successfully and it too uses the same ssh virt power-on procedure.
>
> Deployment is still up & stuck; if anyone wants to take a look, contact me
> for access details in private.
>
> Will review/use virt console, virt journal and timeout tips on next
> deployment.
>
> Thanks
> Tzach
>
> On Wed, Oct 14, 2015 at 5:07 AM, Sasha Chuzhoy wrote:
>
>> I hit the same (or similar) issue on my BM environment, though I managed
>> to complete the 1+1 deployment on VM successfully.
>> I see it's reported already:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1271289
>>
>> Ran a deployment with: openstack overcloud deploy --templates --timeout
>> 90 --compute-scale 3 --control-scale 1
>> The deployment fails, and I see that "all minus one" overcloud nodes are
>> still in BUILD status.
>> >> [stack at undercloud ~]$ nova list >> >> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >> | ID | Name | Status >> | Task State | Power State | Networks | >> >> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >> | b15f499e-79ed-46b2-b990-878dbe6310b1 | overcloud-controller-0 | BUILD >> | spawning | NOSTATE | ctlplane=192.0.2.23 | >> | 4877d14a-e34e-406b-8005-dad3d79f5bab | overcloud-novacompute-0 | ACTIVE >> | - | Running | ctlplane=192.0.2.9 | >> | 0fd1a7ed-367e-448e-8602-8564bf087e92 | overcloud-novacompute-1 | BUILD >> | spawning | NOSTATE | ctlplane=192.0.2.21 | >> | 51630a7d-c140-47b9-a071-1f2fdb45f4b4 | overcloud-novacompute-2 | BUILD >> | spawning | NOSTATE | ctlplane=192.0.2.22 | >> >> >> Will try to investigate further tomorrow. >> >> Best regards, >> Sasha Chuzhoy. >> >> ----- Original Message ----- >> > From: "Tzach Shefi" >> > To: "Dan Sneddon" >> > Cc: rdo-list at redhat.com >> > Sent: Tuesday, October 13, 2015 6:01:48 AM >> > Subject: Re: [Rdo-list] Overcloud deploy stuck for a long time >> > >> > So gave it a few more hours, on heat resource nothing is failed only >> > create_complete and some init_complete. >> > >> > Nova show >> > | 61aaed37-4993-4165-93a7-3c9bf6b10a21 | overcloud-controller-0 | >> ACTIVE | - >> > | | Running | ctlplane=192.0.2.8 | >> > | 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 | overcloud-novacompute-0 | >> BUILD | >> > | spawning | NOSTATE | ctlplane=192.0.2.9 | >> > >> > >> > nova show 7f9f4f52-3ee6-42d9-9275-ff88582dd6e7 >> > >> +--------------------------------------+----------------------------------------------------------+ >> > | Property | Value | >> > >> +--------------------------------------+----------------------------------------------------------+ >> > | OS-DCF:diskConfig | MANUAL | >> > | OS-EXT-AZ:availability_zone | nova | >> > | OS-EXT-SRV-ATTR:host | instack.localdomain | >> > | OS-EXT-SRV-ATTR:hypervisor_hostname | >> 4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 >> > | | >> > | OS-EXT-SRV-ATTR:instance_name | instance-00000002 | >> > | OS-EXT-STS:power_state | 0 | >> > | OS-EXT-STS:task_state | spawning | >> > | OS-EXT-STS:vm_state | building | >> > >> > Checking nova log this is what I see: >> > >> > nova-compute.log:{"nodes": [{"target_power_state": null, "links": >> [{"href": " >> > http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 ", >> > "rel": "self"}, {"href": " >> > http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6 ", >> "rel": >> > "bookmark"}], "extra": {}, "last_error": " Failed to change power state >> to >> > 'power on'. 
Error: Failed to execute command via SSH : LC_ALL=C >> > /usr/bin/virsh --connect qemu:///system start baremetalbrbm_1.", >> > "updated_at": "2015-10-12T14:36:08+00:00", "maintenance_reason": null, >> > "provision_state": "deploying", "clean_step": {}, "uuid": >> > "4626bf90-7f95-4bd7-8bee-5f5b0a0981c6", "console_enabled": false, >> > "target_provision_state": "active", "provision_updated_at": >> > "2015-10-12T14:35:18+00:00", "power_state": "power off", >> > "inspection_started_at": null, "inspection_finished_at": null, >> > "maintenance": false, "driver": "pxe_ssh", "reservation": null, >> > "properties": {"memory_mb": "4096", "cpu_arch": "x86_64", "local_gb": >> "40", >> > "cpus": "1", "capabilities": "boot_option:local"}, "instance_uuid": >> > "7f9f4f52-3ee6-42d9-9275-ff88582dd6e7", "name": null, "driver_info": >> > {"ssh_username": "root", "deploy_kernel": >> > "94cc528d-d91f-4ca7-876e-2d8cbec66f1b", "deploy_ramdisk": >> > "057d3b42-002a-4c24-bb3f-2032b8086108", "ssh_key_contents": >> "-----BEGIN( I >> > removed key..)END RSA PRIVATE KEY-----", "ssh_virt_type": "virsh", >> > "ssh_address": "192.168.122.1"}, "created_at": >> "2015-10-12T14:26:30+00:00", >> > "ports": [{"href": " >> > >> http://192.0.2.1:6385/v1/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports >> ", >> > "rel": "self"}, {"href": " >> > http://192.0.2.1:6385/nodes/4626bf90-7f95-4bd7-8bee-5f5b0a0981c6/ports >> ", >> > "rel": "bookmark"}], "driver_internal_info": {"clean_steps": null, >> > "root_uuid_or_disk_id": "9ff90423-9d18-4dd1-ae96-a4466b52d9d9", >> > "is_whole_disk_image": false}, "instance_info": {"ramdisk": >> > "82639516-289d-4603-bf0e-8131fa75ec46", "kernel": >> > "665ffcb0-2afe-4e04-8910-45b92826e328", "root_gb": "40", "display_name": >> > "overcloud-novacompute-0", "image_source": >> > "d99f460e-c6d9-4803-99e4-51347413f348", "capabilities": >> "{\"boot_option\": >> > \"local\"}", "memory_mb": "4096", "vcpus": "1", "deploy_key": >> > "BI0FRWDTD4VGHII9JK2BYDDFR8WB1WUG", "local_gb": "40", "configdrive": >> > >> "H4sICGDEG1YC/3RtcHpwcWlpZQDt3WuT29iZ2HH02Bl7Fe/G5UxSqS3vLtyesaSl2CR4p1zyhk2Ct+ateScdVxcIgiR4A5sAr95xxa/iVOUz7EfJx8m7rXyE5IDslro1mpbGox15Zv6/lrpJ4AAHN/LBwXMIShIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADhJpvx+5UQq5EqNtvzldGs+MIfewJeNv53f/7n354F6xT/3v/TjH0v/chz0L5+8Gv2f3V+n0s+Pz34u/dj982PJfvSTvxFVfXQ7vfyBlRfGvOZo+kQuWWtNVgJn/jO/d6kHzvrGWlHOjGn0TDfmjmXL30kZtZSrlXPFREaVxQM5Hon4fdl0TU7nCmqtU6urRTlZVRP1clV+knwqK/F4UFbPOuVGKZNKFNTbgVFvwO+PyPmzipqo1solX/6slszmCuKozBzKuKPdMlE5ma >> > >> > >> > Any ideas on how to resolve a stuck spawning compute node, it's stuck >> hasn't >> > changed for a few hours now. >> > >> > Tzach >> > >> > Tzach >> > >> > >> > On Mon, Oct 12, 2015 at 11:25 PM, Dan Sneddon < dsneddon at redhat.com > >> wrote: >> > >> > >> > >> > On 10/12/2015 08:10 AM, Tzach Shefi wrote: >> > > Hi, >> > > >> > > Server running centos 7.1, vm running for undercloud got up to >> > > overcloud deploy stage. >> > > It looks like its stuck nothing advancing for a while. >> > > Ideas, what to check? 
>> > > [stack@instack ~]$ openstack overcloud deploy --templates
>> > > Deploying templates in the directory
>> > > /usr/share/openstack-tripleo-heat-templates
>> > > [91665.696658] device vnet2 entered promiscuous mode
>> > > [91665.781346] device vnet3 entered promiscuous mode
>> > > [91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff
>> > > [91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff
>> > > [91767.799404] kvm: zapping shadow pages for mmio generation wraparound
>> > > [91767.880480] kvm: zapping shadow pages for mmio generation wraparound
>> > > [91768.957761] device vnet2 left promiscuous mode
>> > > [91769.799446] device vnet3 left promiscuous mode
>> > > [91771.223273] device vnet3 entered promiscuous mode
>> > > [91771.232996] device vnet2 entered promiscuous mode
>> > > [91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff
>> > > [91801.270510] device vnet2 left promiscuous mode
>> > >
>> > > Thanks
>> > > Tzach
>> > >
>> > > _______________________________________________
>> > > Rdo-list mailing list
>> > > Rdo-list at redhat.com
>> > > https://www.redhat.com/mailman/listinfo/rdo-list
>> > >
>> > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>> >
>> > You're going to need a more complete command line than "openstack
>> > overcloud deploy --templates". For instance, if you are using VMs for
>> > your overcloud nodes, you will need to include "--libvirt-type qemu".
>> > There are probably a couple of other parameters that you will need.
>> >
>> > You can watch the deployment using this command, which will show you
>> > the progress:
>> >
>> > watch "heat resource-list -n 5 | grep -v COMPLETE"
>> >
>> > You can also explore which resources have failed:
>> >
>> > heat resource-list [-n 5] | grep FAILED
>> >
>> > And then look more closely at the failed resources:
>> >
>> > heat resource-show overcloud <failed resource>
>> >
>> > There are some more complete troubleshooting instructions here:
>> >
>> > http://docs.openstack.org/developer/tripleo-docs/troubleshooting/troubleshooting-overcloud.html
>> >
>> > --
>> > Dan Sneddon | Principal OpenStack Engineer
>> > dsneddon at redhat.com | redhat.com/openstack
>> > 650.254.4025 | dsneddon:irc @dxs:twitter
>> >
>> > _______________________________________________
>> > Rdo-list mailing list
>> > Rdo-list at redhat.com
>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >
>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>> >
>> > --
>> > Tzach Shefi
>> > Quality Engineer, Redhat OSP
>> > +972-54-4701080
>> >
>> > _______________________________________________
>> > Rdo-list mailing list
>> > Rdo-list at redhat.com
>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >
>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> --
> Tzach Shefi
> Quality Engineer, Redhat OSP
> +972-54-4701080
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From rbowen at redhat.com Wed Oct 14 11:49:07 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 14 Oct 2015 07:49:07 -0400 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: <561DF4B5.7040303@lanabrindley.com> References: <561DF4B5.7040303@lanabrindley.com> Message-ID: <561E4133.8060804@redhat.com> I wanted to be certain that everyone has seen this message to OpenStack-docs, and the subsequent conversation at http://lists.openstack.org/pipermail/openstack-docs/2015-October/007622.html This is quite serious, as Lana is basically saying that RDO isn't a viable way to deploy OpenStack in Liberty, and so it's being removed from the docs. It would be helpful if someone closer to Liberty packages, and Delorean, could participate there in a constructive way to bring this to a happy conclusion before the release tomorrow. Thanks. --Rich -------- Forwarded Message -------- Subject: [OpenStack-docs] [install-guide] Status of RDO Date: Wed, 14 Oct 2015 16:22:45 +1000 From: Lana Brindley To: openstack-docs at lists.openstack.org -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi everyone, We've been unable to obtain good pre-release packages from Red Hat for the Fedora and Red Hat/CentOS repos, despite our best efforts. This has left the RDO Install Guide in a largely untested state, so I don't feel confident publishing it at this stage. As far as we can tell, Fedora are no longer planning on having pre-release packages available, so this might be a permanent change for that OS. For Red Hat/CentOS, it seems to be a temporary problem, so hopefully we can get the packages, complete testing, and publish the book soon. The patch to remove RDO is here, for anyone who cares to comment: https://review.openstack.org/#/c/234584/ Lana - -- Lana Brindley Technical Writer Rackspace Cloud Builders Australia http://lanabrindley.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCAAGBQJWHfS1AAoJELppzVb4+KUyM7cH/ii5Ekz5vjTe3dTykXBUbWGt bR2XJTAbS/mFB+xayecNNPLvgejI6Nxvk8msSFNnN7/ZyDNwr+eceQw7ftMKuJnR h7qKBb6o5iayLJxgNRK3Kjo13NjGdaiXwfLTbB5br/aiP2HHsrDRexAcLteUCKGt eHbZUEYqg4VADUvodxNpbZ+7fHuXrIRZoH4aDQ4+o1p0dCdw+vkjzF/MzPSgZFar Rq9L94rpofDat9ymuW48c+SgUeOnmTvxwEN8ExTENNMXo4nUOJwcUS65J6XURO9K RUGvjPmSmm7ZaQGE+koKyGZSzF/Oqoa+vBUwxdeQqmtr2tWo//jlUVV/PDc8QV0= =rQp4 -----END PGP SIGNATURE----- _______________________________________________ OpenStack-docs mailing list OpenStack-docs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs From bderzhavets at hotmail.com Wed Oct 14 11:48:18 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 14 Oct 2015 07:48:18 -0400 Subject: [Rdo-list] Best known working OS for RDO packstack In-Reply-To: <561E31F7.3030907@redhat.com> References: , <561E31F7.3030907@redhat.com> Message-ID: > Date: Wed, 14 Oct 2015 12:44:07 +0200 > From: mangelajo at redhat.com > To: bderzhavets at hotmail.com > CC: outbackdingo at gmail.com; rdo-list at redhat.com > Subject: Re: [Rdo-list] Best known working OS for RDO packstack > > > > Boris Derzhavets wrote: > > 1. Best OS is CentOS 7.1 ( RHEL 7.1 ) > > 2. In general , (Controller/Network) node is not supposed to run VMs. > > Traffic coming outside&& between VMs via AIO host might drop performance > > too much. 
> > Packstack does allow to set up configs like:
> > a) (Controller/Network) + (Compute)
> > b) (Controller) + (Network) + (Compute)
>
> Is (b) still available in packstack? I thought it was unified to only
> support (a), but I could be wrong.

Please, see http://beta.rdoproject.org/blog/2015/10/rdo-blog-roundup-week-of-october-12/

Two of my blog entries mentioned in the link above are written for VMs, to make a reproducible POC. Actually, it works on bare-metal landscapes deployed via packstack.

I do understand RDO Manager's strength and importance; unfortunately, I don't have hardware for testing and learning it. Also, it seems to me that RDO Manager is not ready for production right now.

> > VXLAN tunnels between nodes seem to be a standard solution for RDO.
> > Should you need an answer file for a 2-node deployment, it can be submitted.
>
> Cheers,
> Miguel Ángel.
>
> > Boris.
> > From: outbackdingo at gmail.com
> > Date: Wed, 14 Oct 2015 11:33:14 +1100
> > To: rdo-list at redhat.com
> > Subject: [Rdo-list] Best known working OS for RDO packstack
> >
> > ok so what's the current best known working ISO for RDO packstack... I've got a couple of blades here; I'd like to do an all-in-one on one blade, then join a secondary compute-only node.
> > Thoughts and input appreciated, as I don't want to jump through hoops like last time.
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed.

From mcornea at redhat.com Wed Oct 14 11:59:30 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Wed, 14 Oct 2015 07:59:30 -0400 (EDT)
Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <637795359.4087130.1444812541830.JavaMail.zimbra@tubitak.gov.tr>
References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com> <1316900159.3778086.1444748529044.JavaMail.zimbra@tubitak.gov.tr> <278705446.41253505.1444760174635.JavaMail.zimbra@redhat.com> <637795359.4087130.1444812541830.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <1587469518.41803328.1444823970839.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo" , rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 10:49:01 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Well, today I started with re-installing the OS, and nothing seems wrong with
> the undercloud installation. Then:
>
> I see an error during image build
>
> [stack@undercloud ~]$ openstack overcloud image build --all
> ...
> a lot of log
> ...
> ++ cat /etc/dib_dracut_drivers > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug > rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net > virtio_blk target_core_mod iscsi_target_mod target_core_iblock > target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk > cat: write error: Broken pipe > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > + chmod o+r /tmp/kernel > + trap EXIT > + target_tag=99-build-dracut-ramdisk > + date +%s.%N > + output '99-build-dracut-ramdisk completed' > ... > a lot of log > ... You can ignore that afaik, if you end up having all the required images it should be ok. > > Then, during introspection stage I see ironic-python-agent errors on nodes > (screenshot attached) and the following warnings > That looks odd. Is it showing up in the early stage of the introspection? At some point it should receive an address by DHCP and the Network is unreachable error should disappear. Does the introspection complete and the nodes are turned off? > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | > grep -i "warning\|error" > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > Option "http_url" from group "pxe" is deprecated. Use option "http_url" from > group "deploy". > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > Option "http_root" from group "pxe" is deprecated. Use option "http_root" > from group "deploy". > > > Before deployment ironic node-list: > This is odd too as I'm expecting the nodes to be powered off before running deployment. > > > [stack at undercloud ~]$ ironic node-list > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power State | Provisioning State | > | Maintenance | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | available | > | False | > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | available | > | False | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > During deployment I get following errors > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | > grep -i "warning\|error" > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f > /tmp/tmpSCKHIv power status"for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. > Error: Unexpected error while running command. > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while > running command. 
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI call failed: power status.

This looks like an IPMI error. Can you try to manually run commands using ipmitool and see if you get any success? (A minimal sketch follows at the end of this message.) It's also worth filing a bug with details such as the ipmitool version, server model, and DRAC firmware version.

> Thanks a lot
>
> ----- Original Message -----
> From: "Marius Cornea"
> To: "Esra Celik"
> Cc: "Ignacio Bravo" , rdo-list at redhat.com
> Sent: Tuesday, 13 October 2015 21:16:14
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> ----- Original Message -----
> > From: "Esra Celik"
> > To: "Marius Cornea"
> > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > During deployment they are powering on and deploying the images. I see a lot
> > of connection error messages about ironic-python-agent, but ignore them as
> > mentioned here
> > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
>
> That was referring to the introspection stage. From what I can tell, you are
> experiencing issues during deployment, as it fails to provision the nova
> instances; can you check whether the nodes get powered on during that stage?
>
> Make sure that before the overcloud deploy the ironic nodes are available for
> provisioning (ironic node-list, and check the provisioning state column).
> Also check that you didn't miss any step in the docs in regards to kernel
> and ramdisk assignment, introspection, and flavor creation (so it matches the
> nodes' resources):
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
>
> > In the instackenv.json file I do not need to add the undercloud node, or do I?
>
> No, the node details should be enough.
>
> > And which log files should I watch during deployment?
>
> You can check the openstack-ironic-conductor logs (journalctl -fl -u
> openstack-ironic-conductor.service) and the logs in /var/log/nova.
>
> > Thanks
> > Esra
> >
> > ----- Original Message -----
> > From: Marius Cornea
> > To: Esra Celik
> > Cc: Ignacio Bravo , rdo-list at redhat.com
> > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > ----- Original Message -----
> > > From: "Esra Celik"
> > > To: "Ignacio Bravo"
> > > Cc: rdo-list at redhat.com
> > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Actually, I re-installed the OS for the Undercloud before deploying. However, I did
> > > not re-install the OS on the Compute and Controller nodes. I will reinstall the
> > > basic OS for them too, and retry.
> >
> > You don't need to reinstall the OS on the controller and compute; they will
> > get the image served by the undercloud. I'd recommend that during deployment
> > you watch the servers' consoles and make sure they get powered on, PXE boot,
> > and actually get the image deployed.
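A minimal manual check, mirroring the ipmitool invocation visible in the conductor log above; the addresses, user, and password here are taken from the instackenv.json posted earlier in this thread, so adjust them to your own nodes:

ipmitool -I lanplus -H 192.168.0.18 -U root -P calvin power status
ipmitool -I lanplus -H 192.168.0.19 -U root -P calvin power status
# a chassis query is another quick sanity check of the IPMI credentials
ipmitool -I lanplus -H 192.168.0.19 -U root -P calvin chassis status

If these fail in the same way, the problem is between the undercloud and the BMC (network, credentials, or firmware), not in ironic itself.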
> > > Thanks
>
> > > Thanks
> > >
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > >
> > > From: "Ignacio Bravo"
> > > To: "Esra Celik"
> > > Cc: rdo-list at redhat.com
> > > Sent: Tuesday, 13 October 2015 16:36:06
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Esra,
> > >
> > > I encountered the same problem after deleting the stack and re-deploying.
> > >
> > > It turns out that 'heat stack-delete overcloud' does remove the nodes from
> > > 'nova list', and one would assume that the baremetal servers are now ready
> > > to be used for the next stack, but when redeploying, I get the same message
> > > of not enough hosts available.
> > >
> > > You can look into the nova logs and it mentions something about 'node xxx is
> > > already associated with UUID yyyy' and 'I tried 3 times and I'm giving up'.
> > > The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.
> > >
> > > I'm now redeploying the basic OS to start from scratch again.
> > >
> > > IB
> > >
> > > __
> > > Ignacio Bravo
> > > LTG Federal, Inc
> > > www.ltgfederal.com
> > > Office: (703) 951-7760
> > >
> > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > >
> > > Hi all,
> > >
> > > OverCloud deploy fails with error "No valid host was found"
> > >
> > > [stack@undercloud ~]$ openstack overcloud deploy --templates
> > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR
> > > due to "Message: No valid host was found. There are not enough hosts
> > > available., Code: 500"
> > > Heat Stack create failed.
> > >
> > > Here are some logs:
> > >
> > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE   Tue Oct 13 16:18:17 2015
> > >
> > > | resource_name | physical_resource_id                 | resource_type           | resource_status    | updated_time        | stack_name |
> > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud |
> > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud |
> > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs |
> > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute    | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |
> > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server        | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server        | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef |
> > >
> > > [ heat resource-show overcloud Compute output and the instackenv.json for
> > > the two nodes snipped; identical to the ones quoted earlier in the digest ]
> > >
> > > Any ideas?
> > > Thanks in advance
> > >
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > >
> > > _______________________________________________
> > > Rdo-list mailing list
> > > Rdo-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rdo-list
> > >
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From erming at ualberta.ca Tue Oct 13 22:59:33 2015
From: erming at ualberta.ca (Erming Pei)
Date: Tue, 13 Oct 2015 16:59:33 -0600
Subject: [Rdo-list] error in doing deployment with RDO-Manager
Message-ID: <561D8CD5.9010009@ualberta.ca>

Hi,

I am trying to deploy OpenStack with RDO Manager, but am having an issue executing the "openstack overcloud deploy --templates" command (I am just following the user guide for a basic deployment, without changing/creating any template yet):

[stack@gcloudcon-3 ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
ERROR: openstack Heat Stack create failed.

[stack@gcloudcon-3 ~]$ heat stack-list
| id                                   | stack_name | stack_status  | creation_time        |
| 34eb7053-e504-4183-b39b-e87d0d3f7b4c | overcloud  | CREATE_FAILED | 2015-10-09T17:40:17Z |

[stack@gcloudcon-3 ~]$ ironic node-list
| UUID                                 | Name | Instance UUID                        | Power State | Provision State | Maintenance |
| 248f2695-c43b-4d13-8aca-a3f5732f72ac | None | 4971f77b-d233-4431-a8ee-b29d18262394 | power off   | error           | False       |
| 3cdc8f0e-eb3f-47df-b4f0-bc68b671e23f | None | 6388c30a-97f3-4141-b02f-b53d36782cbd | power off   | error           | False       |

[stack@gcloudcon-3 ~]$ ironic node-show 248f2695-c43b-4d13-8aca-a3f5732f72ac
| Property               | Value |
| target_power_state     | None |
| extra                  | {u'newly_discovered': u'true', u'block_devices': {u'serials': [u'600605b0016ae53012feea5d1b60cdb9', u'600605b0016ae53012ff5378180b6c6f']}, u'hardware_swift_object': u'extra_hardware-248f2695-c43b-4d13-8aca-a3f5732f72ac'} |
| last_error             | Failed to tear down. Error: [Errno 13] Permission denied: '/tftpboot/master_images' |
| updated_at             | 2015-10-13T21:19:14+00:00 |
| maintenance_reason     | None |
| provision_state        | error |
| uuid                   | 248f2695-c43b-4d13-8aca-a3f5732f72ac |
| console_enabled        | False |
| target_provision_state | available |
| maintenance            | False |
| inspection_started_at  | None |
| inspection_finished_at | None |
| power_state            | power off |
| driver                 | pxe_ipmitool |
| reservation            | None |
| properties             | {u'memory_mb': u'131072', u'cpu_arch': u'x86_64', u'local_gb': u'463', u'cpus': u'8', u'capabilities': u'boot_option:local'} |
| instance_uuid          | 4971f77b-d233-4431-a8ee-b29d18262394 |
| name                   | None |
| driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'10.0.8.30', u'ipmi_username': u'USERID', u'deploy_kernel': u'9e82182f-c1a0-420c-a7dc-b532c36892ca', u'deploy_ramdisk': u'982008b4-2d53-41db-803c-3d97405a2e0a'} |
| created_at             | 2015-09-02T20:10:39+00:00 |
| driver_internal_info   | {u'clean_steps': None, u'is_whole_disk_image': False} |
| chassis_uuid           | |
| instance_info          | {} |

Is there any hint for me?

Thanks,

Erming

--
---------------------------------------------
Erming Pei, Ph.D
Senior System Analyst; Grid/Cloud Specialist

Research Computing Group
Information Services & Technology
University of Alberta, Canada

Tel: +1 7804929914 Fax: +1 7804921729
---------------------------------------------

From apevec at gmail.com Wed Oct 14 10:24:24 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 14 Oct 2015 12:24:24 +0200
Subject: [Rdo-list] Software Factory for RDO experiment
In-Reply-To: <561E20C1.6020702@karan.org>
References: <561CCF58.30704@redhat.com> <561E20C1.6020702@karan.org>
Message-ID:

2015-10-14 11:30 GMT+02:00 Karanbir Singh :
> One thing that I don't understand here is what value this adds over the
> CentOS Build Services.. we can integrate with an existing source control
> setup (or use git.c.o) and we can do fairly extensive test hosting and
> a release cadence built on that.
>
> Or is the intention here to host Software Factory as an RDO-specific UI
> backed by the CentOS pipeline?

We would keep CBS for final, production builds.
I'd like to imagine SF as an additional service provided in the CentOS
community for all projects to use, not just the RDO/Cloud SIG!
The value add, AFAICT, is that it combines the tools in one nice UI and enables
automated workflows, e.g. I would like to see a bot proposing gerrit
changes to bump versions in spec Requires: when upstream
global-requirements are changed.
I'm discovering what SF can do myself, but it looks promising.

Cheers,
Alan

From hguemar at fedoraproject.org Wed Oct 14 12:08:02 2015
From: hguemar at fedoraproject.org (Haïkel)
Date: Wed, 14 Oct 2015 14:08:02 +0200
Subject: [Rdo-list] Software Factory for RDO experiment
In-Reply-To: <561E3B9B.9080200@redhat.com>
References: <561CCF58.30704@redhat.com> <561E20C1.6020702@karan.org> <561E3B9B.9080200@redhat.com>
Message-ID:

2015-10-14 13:25 GMT+02:00 Graeme Gillies :
> On 10/14/2015 07:30 PM, Karanbir Singh wrote:
>> On 13/10/15 19:43, Haïkel wrote:
>>> As David said, our infrastructure is scattered and has been a
>>> constantly moving target so I'd consider this a step toward the right
>>> direction.
>>>
>>> The advantages of this proposal being:
>>> 1. stable infrastructure => less time spent on fixing the infrastructure
>>> 2.
customizable workflow thanks to zuul which could only result in >>> improving quality >>> 3. close the gap between upstream and downstream infrastructure >>> 4. self-hosted on RDO => that's a very important one >>> >>> I don't see any real negative points, so unless someone has a >>> different proposal, I suggest that we start with a PoC. >>> As the Liberty cycle is about to finish, we'll get some bandwidth to >>> work on our infrastructure so this is the best time to discuss this. >>> >>> Consolidating our infrastructure is a primary goal to open up RDO >>> governance further. >>> >> >> One thing that I dont understand here is what value this adds over the >> CentOS Build Services.. we can integrate with an existing source control >> setup ( or use git.c.o ) and we can do fairly extensive test hosting and >> a release cadence built on that. >> >> Or is the intention here to host software-factory as a RDO specific UI >> backed by the CentOS pipeline ? >> >> >> > > I think it might also be worthwhile making something a bit clearer as > well. There is potentially two separate efforts in the works > > 1) To unify the hosting infrastructure and platform that all services > that are part of RDO run on. The current proposal is for a new Openstack > installation based on RDO itself to be deployed, in such a way that it's > deployed and maintained transparent to the community (and indeed, > conductive to community involvement). This has the potential to not only > be useful for the RDO project, but potentially the CentOS project as > well. The details of this are still being worked out, and I hope to > speak more about this when I have something more concrete. > Yup, that's why I wanted to raise KB attention on this proposal, as I hope that we could leverage a) the new RDO-based cloud for CentOS cloud images testing b) Software Factory in CentOS as project gating and automation for SIGs as it could be a real multiplier. > 2) The potential to run an instance of the Software Factory software to > leverage it as a workflow control tool for the development shipping > process of RDO itself. This discussion I think was the original purpose > of this thread, and is not tied to the final outcome of 1) > Exactly, bits of infrastructure (build system, jenkins, repositories) that already moved to CentOS will remain there. If we follow that path, we'll look after integrating CentOS infrastructure with Software Factory. Regards, H. > Hope this helps. > > Regards, > > Graeme > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dms at redhat.com Wed Oct 14 12:17:36 2015 From: dms at redhat.com (David Moreau Simard) Date: Wed, 14 Oct 2015 08:17:36 -0400 Subject: [Rdo-list] Software Factory for RDO experiment In-Reply-To: References: <561CCF58.30704@redhat.com> <561E20C1.6020702@karan.org> Message-ID: Not making Software factory exclusive to RDO would be nice but potentially multiplies the amount effort involved. It would definitely be great if we can afford it. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Oct 14, 2015 8:13 AM, "Alan Pevec" wrote: > 2015-10-14 11:30 GMT+02:00 Karanbir Singh : > > One thing that I dont understand here is what value this adds over the > > CentOS Build Services.. 
we can integrate with an existing source control > > setup ( or use git.c.o ) and we can do fairly extensive test hosting and > > a release cadence built on that. > > > > Or is the intention here to host software-factory as a RDO specific UI > > backed by the CentOS pipeline ? > > We would keep CBS for final, production builds. > I'd like to imagine SF as an additional service provided in the CentOS > community for all projects to use, not just RDO/Cloud SIG! > Value add AFAICT is that it combines tools in one nice UI and enables > automated workflows e.g. I would like to see bot proposing gerrit > changes to bump versions in spec Requires: when upstream > global-requirements are changed. > I'm discovering what SF can do myself, but it looks promising. > > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Wed Oct 14 12:28:54 2015 From: mcornea at redhat.com (Marius Cornea) Date: Wed, 14 Oct 2015 08:28:54 -0400 (EDT) Subject: [Rdo-list] error in doing deployment with RDO-Manager In-Reply-To: <561D8CD5.9010009@ualberta.ca> References: <561D8CD5.9010009@ualberta.ca> Message-ID: <167118591.41822844.1444825733999.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Erming Pei" > To: rdo-list at redhat.com > Sent: Wednesday, October 14, 2015 12:59:33 AM > Subject: [Rdo-list] error in doing deployment with RDO-Manager > > Hi, > > I am trying with deploying Openstack with RDO Manager, but am > having an issue for now with executing "openstack overcloud deploy > --templates" command (I am just following the user guide for a basic > deployment without changing/creating any template yet): > > [stack at gcloudcon-3 ~]$ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ERROR: openstack Heat Stack create failed. 
> [Erming's heat stack-list, ironic node-list, and ironic node-show output were
> quoted here in full; see his original message earlier in the digest]
>
> Is there any hint for me?

Try to watch nova list during deployment. Also, logs in /var/log/nova and journalctl -u openstack-ironic-conductor.service can be useful.
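A minimal sketch of that, to run on the undercloud in separate terminals while the deploy is in progress (assuming the stackrc credentials file is sourced; the watch interval is arbitrary):

source ~/stackrc
watch -n 10 "nova list; ironic node-list"
# and, in another terminal, follow the conductor for power/PXE errors:
sudo journalctl -fl -u openstack-ironic-conductor.service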
The following docs cover some debugging steps: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/troubleshooting/troubleshooting-overcloud.html > Thanks, > > Erming > > -- > --------------------------------------------- > Erming Pei, Ph.D > Senior System Analyst; Grid/Cloud Specialist > > Research Computing Group > Information Services & Technology > University of Alberta, Canada > > Tel: +1 7804929914 Fax: +1 7804921729 > --------------------------------------------- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From sgordon at redhat.com Wed Oct 14 12:32:37 2015 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 14 Oct 2015 08:32:37 -0400 (EDT) Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: <561E184B.9050800@lanabrindley.com> References: <561DF4B5.7040303@lanabrindley.com> <561E05FE.8030506@berendt.io> <561E184B.9050800@lanabrindley.com> Message-ID: <344168114.71975365.1444825957117.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Lana Brindley" > To: openstack-docs at lists.openstack.org > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 14/10/15 17:36, Christian Berendt wrote: > > On 10/14/2015 08:22 AM, Lana Brindley wrote: > >> We've been unable to obtain good pre-release packages from Red Hat for the > >> Fedora and Red Hat/CentOS repos, despite our best efforts. > > > > We tested with the Delorean repository. Why does this not work? > > > > The Delorean repo is a pretty hilarious combination of old, out of date > config files, and a few Mitaka packages thrown in for good measure. Red Hat > have confirmed that Delorean is the only pre-release packages repo available > to us as of Liberty, but because the packages aren't tied to a release it > makes it virtually impossible to test against. > > The Red Hat packages, on the other hand, are missing quite a few crucial > deps, including the PyMySQL deps. Are there other examples of missing deps? My understanding is that the package name is python-mysql in Fedora etc.: https://www.redhat.com/archives/rdo-list/2015-October/msg00004.html -Steve > Right now, the Fedora testing situation is slightly better than the Red > Hat/CentOS one, thanks to Delorean being in slightly better shape, and > thanks to Brian Moss's dogged determination in getting it working. But we're > not confident enough in any of the RDO work right now to want to release > this. We really need to wait for the packages so we can test properly and > release. > > We've spoken to a few Red Hat contacts today to try and get a better > understanding of what's going on, but at the moment, that's all we have. > > It's very disappointing, but I'm hoping we can test and publish this very > soon. 
> > Lana > > - -- > Lana Brindley > Technical Writer > Rackspace Cloud Builders Australia > http://lanabrindley.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ > > iQEcBAEBCAAGBQJWHhhLAAoJELppzVb4+KUyGjcIAMlUdrL4gRXEvrEjjUrQUjHq > frMVIyLfoyPhrvvRTGXduWMt9HqX6HROqpvsfXuPmOaQzfQ+nniAZ9m0uF6qYolG > qc5a96V+Emhz0InIcHcMxO9hDsVAWpf/7rC+IBhHvwt/NBOmWgu7pmAxRDSXdwoh > klxwzPtvnFmShj6Xtiit0MVukgKoBTbtfZkXZ30765xbZd/uOzyiyUBUon9aiD/Q > BQC9LVu391vBRXEqHioPMlL9wE5oG71BuYnlNF7A/4q+drqsgwhBJIoxYOtiOO4z > 37nen8kMQ4YeqwT+cZQdZpJwfwMu6+/uy7ZCYCKz/KrOd2PRjybUiRnaox9kddg= > =RcW7 > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-docs mailing list > OpenStack-docs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs > -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From dtantsur at redhat.com Wed Oct 14 12:54:01 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 14 Oct 2015 14:54:01 +0200 Subject: [Rdo-list] error in doing deployment with RDO-Manager In-Reply-To: <561D8CD5.9010009@ualberta.ca> References: <561D8CD5.9010009@ualberta.ca> Message-ID: <561E5069.7000201@redhat.com> On 10/14/2015 12:59 AM, Erming Pei wrote: > Hi, > > I am trying with deploying Openstack with RDO Manager, but am > having an issue for now with executing "openstack overcloud deploy > --templates" command (I am just following the user guide for a basic > deployment without changing/creating any template yet): > > [stack at gcloudcon-3 ~]$ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ERROR: openstack Heat Stack create failed. 
> > > > [stack at gcloudcon-3 ~]$ heat stack-list > +--------------------------------------+------------+---------------+----------------------+ > > | id | stack_name | stack_status | > creation_time | > +--------------------------------------+------------+---------------+----------------------+ > > | 34eb7053-e504-4183-b39b-e87d0d3f7b4c | overcloud | CREATE_FAILED | > 2015-10-09T17:40:17Z | > +--------------------------------------+------------+---------------+----------------------+ > > > > [stack at gcloudcon-3 ~]$ ironic node-list > +--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+ > > | UUID | Name | Instance > UUID | Power State | Provision State | Maintenance | > +--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+ > > | 248f2695-c43b-4d13-8aca-a3f5732f72ac | None | > 4971f77b-d233-4431-a8ee-b29d18262394 | power off | error | False | > | 3cdc8f0e-eb3f-47df-b4f0-bc68b671e23f | None | > 6388c30a-97f3-4141-b02f-b53d36782cbd | power off | error | False | > +--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+ > > > > [stack at gcloudcon-3 ~]$ ironic node-show > 248f2695-c43b-4d13-8aca-a3f5732f72ac > +------------------------+------------------------------------------------------------------------+ > > | Property | Value | > +------------------------+------------------------------------------------------------------------+ > > | target_power_state | None | > | extra | {u'newly_discovered': u'true', > u'block_devices': {u'serials': | > | | [u'600605b0016ae53012feea5d1b60cdb9', | > | | u'600605b0016ae53012ff5378180b6c6f']}, > u'hardware_swift_object': u | > | | > 'extra_hardware-248f2695-c43b-4d13-8aca-a3f5732f72ac'} | > | last_error | Failed to tear down. Error: [Errno 13] > Permission denied: | > | | '/tftpboot/master_images' | Seems like a permission problem. Make sure that /tftpboot and its subdirectories are writable by ironic and have correct selinux attributes. > | updated_at | 2015-10-13T21:19:14+00:00 | > | maintenance_reason | None | > | provision_state | error | > | uuid | 248f2695-c43b-4d13-8aca-a3f5732f72ac | > | console_enabled | False | > | target_provision_state | available | > | maintenance | False | > | inspection_started_at | None | > | inspection_finished_at | None | > | power_state | power > off | > | driver | pxe_ipmitool | > | reservation | None | > | properties | {u'memory_mb': u'131072', u'cpu_arch': > u'x86_64', u'local_gb': u'463', | > | | u'cpus': u'8', u'capabilities': > u'boot_option:local'} | > | instance_uuid | 4971f77b-d233-4431-a8ee-b29d18262394 | > | name | None | > | driver_info | {u'ipmi_password': u'******', > u'ipmi_address': u'10.0.8.30', | > | | u'ipmi_username': u'USERID', > u'deploy_kernel': u'9e82182f-c1a0-420c- | > | | a7dc-b532c36892ca', u'deploy_ramdisk': > u'982008b4-2d53-41db-803c- | > | | 3d97405a2e0a'} | > | created_at | 2015-09-02T20:10:39+00:00 | > | driver_internal_info | {u'clean_steps': None, > u'is_whole_disk_image': False} | > | chassis_uuid | | > | instance_info | {} | > +------------------------+------------------------------------------------------------------------+ > > > > Is there any hint for me? 
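For example, to check and fix the ownership and labels mentioned above
(assuming the conductor runs as the default 'ironic' service user; verify the
actual owner before chowning anything):

# show owner and SELinux context of the tftp tree
$ sudo ls -ldZ /tftpboot /tftpboot/master_images
# hand the tree to the ironic user and restore the expected labels
$ sudo chown -R ironic:ironic /tftpboot
$ sudo restorecon -Rv /tftpboot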
> Thanks,
> Erming
>

From mcornea at redhat.com  Wed Oct 14 13:10:12 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Wed, 14 Oct 2015 09:10:12 -0400 (EDT)
Subject: [Rdo-list] Overcloud Horizon
In-Reply-To: <561D8BA1.8030004@redhat.com>
References: <561D8BA1.8030004@redhat.com>
Message-ID: <748435605.41853669.1444828212952.JavaMail.zimbra@redhat.com>

FWIW overcloud Horizon is failing to load for me:
https://bugzilla.redhat.com/show_bug.cgi?id=1271433

----- Original Message -----
> From: "Dan Sneddon"
> To: rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 12:54:25 AM
> Subject: Re: [Rdo-list] Overcloud Horizon
>
> On 10/13/2015 12:50 PM, AliReza Taleghani wrote:
> > The overcloud has been finally deployed via the following:
> > $ openstack overcloud deploy --compute-scale 4 --templates
> > --compute-flavor compute --control-flavor control
> > http://paste.ubuntu.com/12775291/
> >
> > It seems I have missed something, as I wished to have Horizon at the
> > end, but it's not there right now.
> >
> > Do I need to add any other templates, or better, how can I force my
> > controller to serve the Horizon service, if that's possible...
> >
> > tnx
> > --
> > Sincerely,
> > Ali R. Taleghani
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> Are you sure that Horizon isn't listening somewhere? You can run
> "keystone endpoint-list | grep dashboard" against the overcloud (source
> overcloudrc on the Undercloud, for instance). You should have something
> like:
>
> | a4b435f0917c42e9b84184c1502e4327 | regionOne |
> http://10.0.0.4:80/dashboard/ |
> http://10.0.0.4:80/dashboard/ |
> http://10.0.0.4:80/dashboard/admin | cce915f019684d17a601254437ab59ee |
>
> It's possible that Horizon is listening on a different IP than you are
> expecting. If so, you can use SSH tunnels to connect to the external
> interface and have SSH port forward to the real IP/port. Something like:
>
> ssh -L 8080:10.0.0.4:80 heat-admin at controller-external-IP
>
> Then you can open http://localhost:8080/dashboard to reach Horizon.
>
> If you want more control over where the dashboard is listening, then
> you need to use the Advanced Configuration instructions for Network
> Isolation.
>
> Please report back if you don't find that Horizon is listening on any
> IP/port.
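As an additional quick check from the undercloud, you can hit the endpoint
directly (the IP below is just the example one from the endpoint listing
above):

$ curl -sI http://10.0.0.4:80/dashboard/ | head -n 1

A 200 or a 30x redirect on the first line means httpd is at least serving
Horizon there; a refused or timed-out connection points at a listener or
network isolation problem instead.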
> --
> Dan Sneddon | Principal OpenStack Engineer
> dsneddon at redhat.com | redhat.com/openstack
> 650.254.4025 | dsneddon:irc @dxs:twitter
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From hguemar at fedoraproject.org  Wed Oct 14 13:11:26 2015
From: hguemar at fedoraproject.org (Haïkel)
Date: Wed, 14 Oct 2015 15:11:26 +0200
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: <561E4133.8060804@redhat.com>
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
Message-ID: 

2015-10-14 13:49 GMT+02:00 Rich Bowen :
> I wanted to be certain that everyone has seen this message to
> OpenStack-docs, and the subsequent conversation at
> http://lists.openstack.org/pipermail/openstack-docs/2015-October/007622.html
>
> This is quite serious, as Lana is basically saying that RDO isn't a viable
> way to deploy OpenStack in Liberty, and so it's being removed from the docs.
>
> It would be helpful if someone closer to Liberty packages, and Delorean,
> could participate there in a constructive way to bring this to a happy
> conclusion before the release tomorrow.
>
> Thanks.
>
> --Rich
>

Message pending moderation, but I registered to that list and volunteered
to act as liaison. There are a lot of misunderstandings and a few issues
that were not reported. So if they encounter any issue, at the very least,
they should shoot me an email or CC me so I can sort it out ASAP.

Regards,
H.

> -------- Forwarded Message --------
> Subject: [OpenStack-docs] [install-guide] Status of RDO
> Date: Wed, 14 Oct 2015 16:22:45 +1000
> From: Lana Brindley
> To: openstack-docs at lists.openstack.org
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Hi everyone,
>
> We've been unable to obtain good pre-release packages from Red Hat for the
> Fedora and Red Hat/CentOS repos, despite our best efforts. This has left the
> RDO Install Guide in a largely untested state, so I don't feel confident
> publishing it at this stage.
>
> As far as we can tell, Fedora are no longer planning on having pre-release
> packages available, so this might be a permanent change for that OS. For Red
> Hat/CentOS, it seems to be a temporary problem, so hopefully we can get the
> packages, complete testing, and publish the book soon.
> > The patch to remove RDO is here, for anyone who cares to comment: > https://review.openstack.org/#/c/234584/ > > Lana > > - -- Lana Brindley > Technical Writer > Rackspace Cloud Builders Australia > http://lanabrindley.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ > > iQEcBAEBCAAGBQJWHfS1AAoJELppzVb4+KUyM7cH/ii5Ekz5vjTe3dTykXBUbWGt > bR2XJTAbS/mFB+xayecNNPLvgejI6Nxvk8msSFNnN7/ZyDNwr+eceQw7ftMKuJnR > h7qKBb6o5iayLJxgNRK3Kjo13NjGdaiXwfLTbB5br/aiP2HHsrDRexAcLteUCKGt > eHbZUEYqg4VADUvodxNpbZ+7fHuXrIRZoH4aDQ4+o1p0dCdw+vkjzF/MzPSgZFar > Rq9L94rpofDat9ymuW48c+SgUeOnmTvxwEN8ExTENNMXo4nUOJwcUS65J6XURO9K > RUGvjPmSmm7ZaQGE+koKyGZSzF/Oqoa+vBUwxdeQqmtr2tWo//jlUVV/PDc8QV0= > =rQp4 > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-docs mailing list > OpenStack-docs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From celik.esra at tubitak.gov.tr Wed Oct 14 13:22:20 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Wed, 14 Oct 2015 16:22:20 +0300 (EEST) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1587469518.41803328.1444823970839.JavaMail.zimbra@redhat.com> References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr> <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr> <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com> <1316900159.3778086.1444748529044.JavaMail.zimbra@tubitak.gov.tr> <278705446.41253505.1444760174635.JavaMail.zimbra@redhat.com> <637795359.4087130.1444812541830.JavaMail.zimbra@tubitak.gov.tr> <1587469518.41803328.1444823970839.JavaMail.zimbra@redhat.com> Message-ID: <883829133.4240052.1444828940563.JavaMail.zimbra@tubitak.gov.tr> Well in the early stage of the introspection I can see Client IP of nodes (screenshot attached). But then I see continuous ironic-python-agent errors (screenshot-2 attached). Errors repeat after time out.. And the nodes are not powered off. Seems like I am stuck in introspection stage.. 
I can use ipmitool command to successfully power on/off the nodes

[stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P power status
Chassis Power is on

[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
Chassis Power is on
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power off
Chassis Power Control: Down/Off
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
Chassis Power is off
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power on
Chassis Power Control: Up/On
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
Chassis Power is on

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Original Message -----

From: "Marius Cornea"
To: "Esra Celik"
Cc: "Ignacio Bravo" , rdo-list at redhat.com
Sent: Wednesday, 14 October 2015 14:59:30
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo" , rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 10:49:01 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Well today I started with re-installing the OS and nothing seems wrong with
> undercloud installation, then;
>
> I see an error during image build
>
> [stack at undercloud ~]$ openstack overcloud image build --all
> ...
> a lot of log
> ...
> ++ cat /etc/dib_dracut_drivers
> + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig
> cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug
> rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ /
> --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net
> virtio_blk target_core_mod iscsi_target_mod target_core_iblock
> target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> cat: write error: Broken pipe
> + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> + chmod o+r /tmp/kernel
> + trap EXIT
> + target_tag=99-build-dracut-ramdisk
> + date +%s.%N
> + output '99-build-dracut-ramdisk completed'
> ...
> a lot of log
> ...

You can ignore that afaik, if you end up having all the required images it
should be ok.

> Then, during introspection stage I see ironic-python-agent errors on nodes
> (screenshot attached) and the following warnings

That looks odd. Is it showing up in the early stage of the introspection?
At some point it should receive an address by DHCP and the Network is
unreachable error should disappear. Does the introspection complete and
the nodes are turned off?

> [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> grep -i "warning\|error"
> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> Option "http_url" from group "pxe" is deprecated. Use option "http_url" from
> group "deploy".
> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> Option "http_root" from group "pxe" is deprecated. Use option "http_root"
> from group "deploy".
> Before deployment ironic node-list:

This is odd too as I'm expecting the nodes to be powered off before running
deployment.

> [stack at undercloud ~]$ ironic node-list
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
> During deployment I get the following errors
>
> [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> grep -i "warning\|error"
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting
> "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f
> /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553.
> Error: Unexpected error while running command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for
> node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while
> running command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740
> 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not
> get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of
> 3. Error: IPMI call failed: power status..

This looks like an ipmi error, can you try to manually run commands using
the ipmitool and see if you get any success? It's also worth filing a bug
with details such as the ipmitool version, server model, DRAC firmware
version.

> Thanks a lot
>
> ----- Original Message -----
> From: "Marius Cornea"
> To: "Esra Celik"
> Cc: "Ignacio Bravo" , rdo-list at redhat.com
> Sent: Tuesday, 13 October 2015 21:16:14
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> host was found"
>
> ----- Original Message -----
> > From: "Esra Celik"
> > To: "Marius Cornea"
> > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > host was found"
> >
> > During deployment they are powering on and deploying the images. I see a
> > lot of connection error messages about ironic-python-agent but ignore
> > them as mentioned here
> > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
>
> That was referring to the introspection stage. From what I can tell you are
> experiencing issues during deployment as it fails to provision the nova
> instances, can you check if during that stage the nodes get powered on?
>
> Make sure that before overcloud deploy the ironic nodes are available for
> provisioning (ironic node-list and check the provisioning state column).
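For example (the UUID is a placeholder for whatever ironic node-list prints
in its first column):

$ ironic node-list
$ ironic node-set-maintenance <node-uuid> false
$ ironic node-set-provision-state <node-uuid> provide

node-set-provision-state ... provide moves a node from manageable back to
available; the maintenance step is only needed if a failed run left the node
in maintenance mode.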
> Also check that you didn't miss any step in the docs in regards to kernel
> and ramdisk assignment, introspection, flavor creation (so it matches the
> nodes' resources)
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
>
> > In instackenv.json file I do not need to add the undercloud node, or do I?
>
> No, the nodes details should be enough.
>
> > And which log files should I watch during deployment?
>
> You can check the openstack-ironic-conductor logs (journalctl -fl -u
> openstack-ironic-conductor.service) and the logs in /var/log/nova.
>
> > Thanks
> > Esra
> >
> > ----- Original Message -----
> > From: Marius Cornea
> > To: Esra Celik
> > Cc: Ignacio Bravo , rdo-list at redhat.com
> > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > was found"
> >
> > ----- Original Message -----
> > > From: "Esra Celik"
> > > To: "Ignacio Bravo"
> > > Cc: rdo-list at redhat.com
> > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > > was found"
> > >
> > > Actually I re-installed the OS for Undercloud before deploying. However I
> > > did not re-install the OS in Compute and Controller nodes.. I will
> > > reinstall basic OS for them too, and retry..
> >
> > You don't need to reinstall the OS on the controller and compute, they will
> > get the image served by the undercloud. I'd recommend that during deployment
> > you watch the servers console and make sure they get powered on, pxe boot,
> > and actually get the image deployed.
> >
> > Thanks
> >
> > > Thanks
> > >
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > >
> > > ----- Original Message -----
> > > From: "Ignacio Bravo"
> > > To: "Esra Celik"
> > > Cc: rdo-list at redhat.com
> > > Sent: Tuesday, 13 October 2015 16:36:06
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > > was found"
> > >
> > > Esra,
> > >
> > > I encountered the same problem after deleting the stack and re-deploying.
> > >
> > > It turns out that 'heat stack-delete overcloud' does remove the nodes from
> > > 'nova list' and one would assume that the baremetal servers are now ready
> > > to be used for the next stack, but when redeploying, I get the same
> > > message of not enough hosts available.
> > >
> > > You can look into the nova logs and it mentions something about 'node xxx
> > > is already associated with UUID yyyy' and 'I tried 3 times and I'm giving
> > > up'. The issue is that the UUID yyyy belonged to a prior unsuccessful
> > > deployment.
> > >
> > > I'm now redeploying the basic OS to start from scratch again.
> > >
> > > IB
> > >
> > > __
> > > Ignacio Bravo
> > > LTG Federal, Inc
> > > www.ltgfederal.com
> > > Office: (703) 951-7760
> > >
> > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > > >
> > > > Hi all,
> > > >
> > > > OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > Deploying templates in the directory
> > > > /usr/share/openstack-tripleo-heat-templates
> > > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR
> > > > due to "Message: No valid host was found. There are not enough hosts
> > > > available., Code: 500"
> > > > Heat Stack create failed.
> > > >
> > > > Here are some logs:
> > > >
> > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE   Tue Oct 13 16:18:17 2015
> > > >
> > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > | resource_name | physical_resource_id                 | resource_type            | resource_status    | updated_time        | stack_name                                       |
> > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller  | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> > > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute     | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> > > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server         | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server         | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
> > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > >
> > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > | attributes             | { "attributes": null, "refs": null }               |
> > > > | creation_time          | 2015-10-13T10:20:36                                |
> > > > | description            |                                                    |
> > > > | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > | logical_resource_id    | Compute                                            |
> > > > | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393               |
> > > > | required_by            | ComputeAllNodesDeployment                          |
> > > > |                        | ComputeNodesPostDeployment                         |
> > > > |                        | ComputeCephDeployment                              |
> > > > |                        | ComputeAllNodesValidationDeployment                |
> > > > |                        | AllNodesExtraConfig                                |
> > > > |                        | allNodesConfig                                     |
> > > > | resource_name          | Compute                                            |
> > > > | resource_status        | CREATE_FAILED                                      |
> > > > | resource_status_reason | resources.Compute: ResourceInError:                |
> > > > |                        | resources[0].resources.NovaCompute: Went to status |
> > > > |                        | ERROR due to "Message: No valid host was found.    |
> > > > |                        | There are not enough hosts available., Code: 500"  |
> > > > | resource_type          | OS::Heat::ResourceGroup                            |
> > > > | updated_time           | 2015-10-13T10:20:36                                |
> > > >
> > > > This is my instackenv.json for 1 compute and 1 control node to be
> > > > deployed.
> > > >
> > > > {
> > > >   "nodes": [
> > > >     {
> > > >       "pm_type":"pxe_ipmitool",
> > > >       "mac":[ "08:9E:01:58:CC:A1" ],
> > > >       "cpu":"4",
> > > >       "memory":"8192",
> > > >       "disk":"10",
> > > >       "arch":"x86_64",
> > > >       "pm_user":"root",
> > > >       "pm_password":"calvin",
> > > >       "pm_addr":"192.168.0.18"
> > > >     },
> > > >     {
> > > >       "pm_type":"pxe_ipmitool",
> > > >       "mac":[ "08:9E:01:58:D0:3D" ],
> > > >       "cpu":"4",
> > > >       "memory":"8192",
> > > >       "disk":"100",
> > > >       "arch":"x86_64",
> > > >       "pm_user":"root",
> > > >       "pm_password":"calvin",
> > > >       "pm_addr":"192.168.0.19"
> > > >     }
> > > >   ]
> > > > }
> > > >
> > > > Any ideas? Thanks in advance
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > >
> > > > _______________________________________________
> > > > Rdo-list mailing list
> > > > Rdo-list at redhat.com
> > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > >
> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > >
> > > _______________________________________________
> > > Rdo-list mailing list
> > > Rdo-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rdo-list
> > >
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: rdo-introspection-screenshot-2.png
Type: image/png
Size: 146063 bytes
Desc: not available
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: rdo-introspection-screenshot.png
Type: image/png
Size: 96977 bytes
Desc: not available
URL: 

From rbowen at redhat.com  Wed Oct 14 14:40:56 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 14 Oct 2015 10:40:56 -0400
Subject: [Rdo-list] RDO test day - Thanks!
Message-ID: <561E6978.1050902@redhat.com>

A big thanks to everyone that participated in the RDO Test Day over the
last 48 hours[1].

29 people wrote 74 email messages to this list. (Of course, they weren't
all about the test day, but many of them were.) 42 participants sent a
collective 1074 messages to the IRC channel. (Likewise, not all about
test day.) We had 23 new issues opened in the ticketing system.

Some people have put their day's experience in the etherpad [2] and some
have updated the Tested Scenarios page[3]. If you didn't put your notes
anywhere, please do update one of those in the coming day or two, so that
we don't lose what we learned.

So, thanks to all of you. Particular thanks to trown for getting things
rolling and herding the cats.
[1] http://beta.rdoproject.org/testday/rdo-test-day-liberty-02/ [2] https://etherpad.openstack.org/p/rdo_test_day_oct_2015 [3] http://beta.rdoproject.org/testday/testedsetups-liberty-02/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From chkumar246 at gmail.com Wed Oct 14 16:01:37 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 14 Oct 2015 21:31:37 +0530 Subject: [Rdo-list] [Meeting] RDO meeting (2015-10-14) Message-ID: ============================== #rdo: RDO meeting (2015-10-14) ============================== Meeting started by chandankumar at 15:01:48 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-10-14/rdo.2015-10-14-15.01.log.html . Meeting summary --------------- * Updates on RDO test day (chandankumar, 15:04:24) * LINK: https://www.rdoproject.org/forum/discussion/1043/rdo-test-day-thanks (chandankumar, 15:05:11) * RDO Liberty GA blockers (chandankumar, 15:07:13) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1271002 could be a blocker if it is supposed to work (trown, 15:08:34) * LINK: https://bugs.centos.org/view.php?id=9606 (apevec, 15:11:38) * LINK: http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo (jschlueter, 15:11:40) * Kilo 2015.1.2 rebases (chandankumar, 15:14:32) * RDO doc hack day (chandankumar, 15:17:34) * LINK: https://github.com/redhat-openstack/website/issues (rbowen, 15:19:05) * LINK: https://github.com/redhat-openstack/website/issues (jschlueter, 15:19:16) * Package Version Bump (chandankumar, 15:26:43) * Please do NOT update Fedora master to any Mitaka release just yet! (apevec, 15:27:00) * updates on verwatch (chandankumar, 15:37:42) * LINK: https://github.com/yac/verwatch (chandankumar, 15:38:20) * LINK: http://versiontracker.dmsimard.com/compare/liberty (dmsimard, 15:40:42) * OCT RDO bug triage Day. (chandankumar, 15:45:59) * LINK: http://sched.co/4MYy (rbowen, 15:49:09) * RDO community meetup at openstack summit (rbowen, 15:49:19) * EOL for Juno? (chandankumar, 15:51:21) * LINK: https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fjuno_releases_.2812_months.29 (apevec, 15:52:12) * LINK: https://wiki.openstack.org/wiki/Releases (rbowen, 15:52:13) * chair for next meeting (chandankumar, 15:53:58) * ACTION: trown to chair for next meeting (chandankumar, 15:55:21) * open floor (chandankumar, 15:55:31) Meeting ended at 15:58:29 UTC. Action Items ------------ * trown to chair for next meeting Action Items, by person ----------------------- * trown * trown to chair for next meeting * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * chandankumar (76) * apevec (71) * rbowen (43) * number80 (23) * garrett (23) * dmsimard (20) * trown (17) * zodbot (17) * alphacc (7) * sasha2 (3) * jpena (3) * jschlueter (3) * eggmaster (2) * coolsvap (1) * Humbedooh (1) * jruzicka (1) * social (1) * elmiko (1) * egafford (0) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Wed Oct 14 16:35:57 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 14 Oct 2015 16:35:57 +0000 Subject: [Rdo-list] openstack-app-catalog-ui package/RDO process Message-ID: <1A3C52DFCD06494D8528644858247BF01B7D96E4@EX10MBOX06.pnnl.gov> I've been trying to contribute the openstack-app-catalog-ui package for months now. I'm sure I'm doing something wrong. I really could use some help. 
We recently got the same package into Debian, and it took less than a week
from start to finish.

Not trying to be mean or anything here, just trying to identify obstacles
that we can take down to make the process smoother, to bring contributing
to RDO in line with other distros. With the big tent being a thing, I
believe lowering that bar will become increasingly important to RDO's
success.

What can we do to make this better?

Some links:
https://github.com/kfox1111/app-catalog-ui
https://bugzilla.redhat.com/show_bug.cgi?id=1264072
https://bugzilla.redhat.com/show_bug.cgi?id=1268372

Thanks,
Kevin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mcornea at redhat.com  Wed Oct 14 16:40:07 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Wed, 14 Oct 2015 12:40:07 -0400 (EDT)
Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <883829133.4240052.1444828940563.JavaMail.zimbra@tubitak.gov.tr>
References: <2055486532.3740589.1444742748758.JavaMail.zimbra@tubitak.gov.tr>
 <1834381527.3752816.1444744077063.JavaMail.zimbra@tubitak.gov.tr>
 <136417348.41059871.1444746300472.JavaMail.zimbra@redhat.com>
 <1316900159.3778086.1444748529044.JavaMail.zimbra@tubitak.gov.tr>
 <278705446.41253505.1444760174635.JavaMail.zimbra@redhat.com>
 <637795359.4087130.1444812541830.JavaMail.zimbra@tubitak.gov.tr>
 <1587469518.41803328.1444823970839.JavaMail.zimbra@redhat.com>
 <883829133.4240052.1444828940563.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <1468495656.42006088.1444840807316.JavaMail.zimbra@redhat.com>

Can you do ironic node-show for your ironic nodes and post the results?
Also check the following suggestion if you're experiencing the same issue:
https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html

----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo" , rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 3:22:20 PM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Well in the early stage of the introspection I can see Client IP of nodes
> (screenshot attached). But then I see continuous ironic-python-agent errors
> (screenshot-2 attached). Errors repeat after time out.. And the nodes are
> not powered off.
>
> Seems like I am stuck in introspection stage..
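For example, one way to dump them all at once (the UUIDs come from the first
column of ironic node-list):

$ for uuid in $(ironic node-list | awk '/^\| [0-9a-f]/ {print $2}'); do ironic node-show $uuid; done

That prints driver_info, the power/provision states and last_error for every
registered node in one go.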
> I can use ipmitool command to successfully power on/off the nodes
>
> [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P power status
> Chassis Power is on
>
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> Chassis Power is on
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power off
> Chassis Power Control: Down/Off
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> Chassis Power is off
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power on
> Chassis Power Control: Up/On
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> Chassis Power is on
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> [...]
_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From chkumar246 at gmail.com  Wed Oct 14 14:22:02 2015
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 14 Oct 2015 19:52:02 +0530
Subject: [Rdo-list] Bug statistics for 2015-10-14
Message-ID: 

# RDO Bugs on 2015-10-14

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla
database at . To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 310
- Fixed (MODIFIED, POST, ON_QA): 184

## Number of open bugs by component

diskimage-builder            [  4] ++
distribution                 [ 14] ++++++++++
dnsmasq                      [  1]
instack                      [  4] ++
instack-undercloud           [ 28] ++++++++++++++++++++
iproute                      [  1]
openstack-ceilometer         [ 12] ++++++++
openstack-cinder             [ 14] ++++++++++
openstack-foreman-inst...    [  3] ++
openstack-glance             [  2] +
openstack-heat               [  3] ++
openstack-horizon            [  1]
openstack-ironic             [  1]
openstack-ironic-disco...    [  2] +
openstack-keystone           [  7] +++++
openstack-manila             [  1]
openstack-neutron            [  7] +++++
openstack-nova               [ 18] +++++++++++++
openstack-packstack          [ 55] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules     [ 11] ++++++++
openstack-selinux            [ 13] +++++++++
openstack-swift              [  2] +
openstack-tripleo            [ 24] +++++++++++++++++
openstack-tripleo-heat...    [  5] +++
openstack-tripleo-imag...    [  2] +
openstack-trove              [  1]
openstack-tuskar             [  3] ++
openstack-utils              [  4] ++
openvswitch                  [  1]
python-glanceclient          [  3] ++
python-keystonemiddleware    [  1]
python-neutronclient         [  2] +
python-novaclient            [  1]
python-openstackclient       [  5] +++
python-oslo-config           [  1]
rdo-manager                  [ 42] ++++++++++++++++++++++++++++++
rdo-manager-cli              [  6] ++++
rdopkg                       [  1]
RFEs                         [  3] ++
tempest                      [  1]

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state
NEW, ASSIGNED, ON_DEV and has not yet been fixed.
### diskimage-builder (4 bugs)

[1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change
[1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos
[1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently
[1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version

### distribution (14 bugs)

[1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording
[1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance
[1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0
[1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-10-07 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages
[1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts
[1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup
[1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site-packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path
[1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories
[1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in
[1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support
[1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI
[1264072 ] http://bugzilla.redhat.com/1264072 (NEW) Component: distribution Last change: 2015-10-02 Summary: app-catalog-ui new package
[1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto
[1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work

### dnsmasq (1 bug)

[1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network)

### instack (4 bugs)

[1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta'
[1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug
[1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files
[1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code

### instack-undercloud (28 bugs)

[1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key
[1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build-images script
[1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout
[1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-13 Summary: Overcloud images contain Kilo repos
[1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started
[1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify
[1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology
[1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager-oscplugin'.
[1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails
[1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six."
[1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files
[1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import
[1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output
[1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-12 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS
[1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment
[1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)"
[1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7
[1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images .
[1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image
[1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')"
[1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment.
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors
[1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python-flask-babel package: "Error: Package: openstack-tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo-management) Requires: python-flask-babel"
[1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud
[1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which
[1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install
[1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf)
[1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0

### iproute (1 bug)

[1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage

### openstack-ceilometer (12 bugs)

[1265708 ] http://bugzilla.redhat.com/1265708 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: Ceilometer requires pymongo>=3.0.2
[1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing
[1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call
[1265721 ] http://bugzilla.redhat.com/1265721 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: FIle /etc/ceilometer/meters.yaml missing
[1263839 ] http://bugzilla.redhat.com/1263839 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: openstack-ceilometer should requires python-oslo-policy in kilo
[1265746 ] http://bugzilla.redhat.com/1265746 (NEW) Component: openstack-ceilometer Last change: 2015-09-23 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf
[1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions
[1265818 ] http://bugzilla.redhat.com/1265818 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-28 Summary: ceilometer polling agent does not start
[1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library
[1271002 ] http://bugzilla.redhat.com/1271002 (NEW) Component: openstack-ceilometer Last change: 2015-10-13 Summary: Ceilometer dbsync failing during HA deployment
[1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone
[1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field

### openstack-cinder (14 bugs)

[1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos
[1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000]
[1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume
[1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available'
[1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage
[1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume
[1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri
[1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume
[1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type"
[1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install
[1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching
[1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder
[1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend
[1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf

### openstack-foreman-installer (3 bugs)

[1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time
[1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon
[1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter

### openstack-glance (2 bugs)

[1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry
[1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance-common: python-glance

### openstack-heat (3 bugs)

[1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message
[1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted
[1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null

### openstack-horizon (1 bug)

[1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable

### openstack-ironic (1 bug)

[1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409)

### openstack-ironic-discoverd (2 bugs)

[1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour
[1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery

### openstack-keystone (7 bugs)

[1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack-keystone RPM
[1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi-keystone.conf
[1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption
[1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package
[1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo]
[1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists
[1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non-existent class

### openstack-manila (1 bug)

[1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-manila Last change: 2015-10-13 Summary: puppet module for manila should include service type - shareV2

### openstack-neutron (7 bugs)

[1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false
[1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy
[1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files
[1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken".
[1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-10-13 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5
[1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks
[1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https

### openstack-nova (18 bugs)

[1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file
[1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM
[1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking
[1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version
[1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe"
[1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though its disabled.
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl
[1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly
[1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero
[1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesnt write to log
[1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files
[1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-13 Summary: nova.conf.sample is out of date
[1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given)
[1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes
[1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allow get object after date exires
[1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova
[1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime
[1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages

### openstack-packstack (55 bugs)

[1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server]
[1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant
[1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail
[1269158 ] http://bugzilla.redhat.com/1269158 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Sahara configuration should be affected by heat availability (broken by default right now)
[1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel
[1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api
[1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest
[1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server
[1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver
[1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm)
[982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way
[1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt
[1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required
[1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network-scripts/ifcfg-br-ex
[1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack
[1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails
[1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service
[1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver
[953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon
[1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation
[1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start.
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21
[1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config
[1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands'
[1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error
[1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail
[1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start
[1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created.
[1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty
[1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance
[1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module"
[1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link
[1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node
[1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd
[1269255 ] http://bugzilla.redhat.com/1269255 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Failed to start RabbitMQ broker.
[1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest
[1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user
[1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd]
[1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service
[1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp
[1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tmp/packstack/b77f37620d9f4794b6f38730442962b6/manifests/xxx.xxx.xxx.xxx_swift.pp:90
[1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found
[1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image
[1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting
[1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend.
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced
[1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image
[1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service
[1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service"
[1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list
[1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?.
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on
[1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters

### openstack-puppet-modules (11 bugs)

[1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start
[1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering
[1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm
[1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API.
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled
[1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication
[1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size
[1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm
[1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log
[1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing

### openstack-selinux (13 bugs)

[1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional
[1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied
[1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail
[1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux
[1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value)
[1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux
[1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages
[1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000
[1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal;
[1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC
[1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-02 Summary: Nova rootwrap-daemon requires a selinux exception
[1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues
[1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux

### openstack-swift (2 bugs)

[1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files
[1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations

### openstack-tripleo (24 bugs)

[1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints
[1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack"
[1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA
[1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf
[1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos
[1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever
[1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO
[1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever
[1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix
[1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted
[1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted
[1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6
[1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful
[1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect
[1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes
[1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created
[1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack-build-images
[1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter

### openstack-tripleo-heat-templates (5 bugs)

[1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty
[1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1
[1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api
[1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template

### openstack-tripleo-image-elements (2 bugs)

[1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux
[1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist

### openstack-trove (1 bug)

[1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info

### openstack-tuskar (3 bugs)

[1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails
[1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates
[1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state

### openstack-utils (4 bugs)

[1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service
[1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable
[1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb
[1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs-cleanup.service

### openvswitch (1 bug)

[1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity

### python-glanceclient (3 bugs)

[1271474 ] http://bugzilla.redhat.com/1271474 (NEW) Component: python-glanceclient Last change: 2015-10-14 Summary: Running glance image-list fails with 'Expected endpoint'
[1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-09-17 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py
[1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0

### python-keystonemiddleware (1 bug)

[1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3

### python-neutronclient (2 bugs)

[1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility
[1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long

### python-novaclient (1 bug)

[1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six

### python-openstackclient (5 bugs)

[1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal
[1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input
[1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp
[1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user
[1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement

### python-oslo-config (1 bug)

[1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config

### rdo-manager (42 bugs)

[1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment
[1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable
[1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support configuration of default subnet pools
[1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud
[1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built
[1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS.
[1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support deploying VPNaaS
[1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Enable configuration of OVS ARP Responder
[1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Support IPv6
[1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles
[1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-12 Summary: Unexpected exception in background introspection thread
[1269610 ] http://bugzilla.redhat.com/1269610 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state
[1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP)
[1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment
[1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-09 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:]
[1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers.
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists
[1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-12 Summary: Glance client returning 'Expected endpoint'
[1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Support explicit configuration of L2 population
[1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start
[1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install-packages install
[1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone
[1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection
[1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500"
[1271433 ] http://bugzilla.redhat.com/1271433 (NEW) Component: rdo-manager Last change: 2015-10-14 Summary: Horizon fails to load
[1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD
[1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc
[1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates
[1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree"
[1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon
[1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack
[1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment.
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] support override of API and RPC worker counts
[1271389 ] http://bugzilla.redhat.com/1271389 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: neutron-server fails to start when using short name (ml2) for core_plugin
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-10-14 Summary: overcloud-novacompute stuck in spawning state
[1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images
[1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure
[1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments
[1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url"
[1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images
[1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device-mapper*
[1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci

### rdo-manager-cli (6 bugs)

[1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step
[1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable
[1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value
[1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL)
[1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command
[1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling

### rdopkg (1 bug)

[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (3 bugs)

[1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot
[1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool
[1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition

### tempest (1 bug)

[1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is in state
MODIFIED, POST, ON_QA and has been fixed. You can help out by testing
the fix to make sure it works as intended. (184 bugs)

### diskimage-builder (1 bug)

[1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding

### distribution (6 bugs)

[1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack-neutron-*aas
[1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10
[1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance
[1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api
[1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO

### instack-undercloud (2 bugs)

[1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six"
[1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined

### openstack-ceilometer (1 bug)

[1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency

### openstack-cinder (5 bugs)

[1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0]
[1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume)
[1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
[1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO)
[994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo]

### openstack-glance (4 bugs)

[1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests
[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue
[1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency
[1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files

### openstack-heat (3 bugs)

[1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build
[1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack-heat
[1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE

### openstack-horizon (1 bug)

[1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing

### openstack-ironic-discoverd (1 bug)

[1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery

### openstack-keystone (1 bug)

[1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3

### openstack-neutron (14 bugs)

[1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network
[1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron-plugin-vmware
[1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron-dist.conf
[1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron
[1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials
[1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore
[1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini
[1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd
[1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev
[1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack-neutron-openvswitch installed
[1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service
[1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id'
[1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini
[1270325 ] http://bugzilla.redhat.com/1270325 (POST) Component: openstack-neutron Last change: 2015-10-14 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration

### openstack-nova (5 bugs)

[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail
[1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all
[1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options
[1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python-psutil is missing after installing with packstack
[958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence.

### openstack-packstack (59 bugs)

[1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution
[1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db.
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 
(MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) ### openstack-puppet-modules (19 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by 
packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (1 bug) [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (12 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" 
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (1 bug) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### Package Review (1 bug) [1243550 ] http://bugzilla.redhat.com/1243550 (ON_QA) Component: Package Review Last change: 2015-10-09 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 
Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (8 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager 
Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1268992 ] http://bugzilla.redhat.com/1268992 (MODIFIED) Component: rdo-manager Last change: 2015-10-08 Summary: [RDO-Manager][Liberty] : openstack baremetal introspection bulk start causes "Internal server error" ( introspection fails) . [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack-ironic-deployment --discover-nodes": ERROR: Data pre-processing failed ### rdo-manager-cli (8 bugs) [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository

Thanks,
Chandan Kumar

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
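[Editorial note] The listings in these reports are plain Bugzilla queries grouped by component. For readers who want to reproduce or slice the data themselves, here is a minimal sketch assuming the python-bugzilla library; the state lists come from the report text itself, while the attribute names and the 40-character bar scaling are inferred for illustration and are not taken from whatever script actually generated these emails:

    # Sketch: query open RDO bugs and print a per-component histogram,
    # in the spirit of the "Number of open bugs by component" chart below.
    from collections import Counter

    import bugzilla  # python-bugzilla

    OPEN_STATES = ["NEW", "ASSIGNED", "ON_DEV"]
    FIXED_STATES = ["MODIFIED", "POST", "ON_QA"]  # swap in for the "fixed" list

    # Anonymous, read-only connection to the Red Hat Bugzilla instance.
    bzapi = bugzilla.Bugzilla("bugzilla.redhat.com")

    query = bzapi.build_query(product="RDO", status=OPEN_STATES)
    bugs = bzapi.query(query)

    # '+' bars scaled so the largest component gets a 40-character bar,
    # which matches the proportions of the chart in the report below.
    counts = Counter(bug.component for bug in bugs)
    widest = max(counts.values())
    for component, n in sorted(counts.items(), key=lambda kv: kv[0].lower()):
        print("%-25s [%3d] %s" % (component, n, "+" * (n * 40 // widest)))

No login is needed for read-only queries like this; credentials only matter when modifying bugs.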
From chkumar246 at gmail.com Wed Oct 14 16:05:47 2015
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 14 Oct 2015 21:35:47 +0530
Subject: [Rdo-list] RDO bug statistics (2015-10-14)
Message-ID: 

# RDO Bugs on 2015-10-14

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at http://bugzilla.redhat.com/. To report a new bug against RDO, file it there against the RDO product.

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 310
- Fixed (MODIFIED, POST, ON_QA): 184

## Number of open bugs by component

diskimage-builder [ 4] ++
distribution [ 14] ++++++++++
dnsmasq [ 1]
instack [ 4] ++
instack-undercloud [ 28] ++++++++++++++++++++
iproute [ 1]
openstack-ceilometer [ 12] ++++++++
openstack-cinder [ 14] ++++++++++
openstack-foreman-inst... [ 3] ++
openstack-glance [ 2] +
openstack-heat [ 3] ++
openstack-horizon [ 1]
openstack-ironic [ 1]
openstack-ironic-disco... [ 2] +
openstack-keystone [ 7] +++++
openstack-manila [ 1]
openstack-neutron [ 7] +++++
openstack-nova [ 18] +++++++++++++
openstack-packstack [ 55] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules [ 11] ++++++++
openstack-selinux [ 13] +++++++++
openstack-swift [ 2] +
openstack-tripleo [ 24] +++++++++++++++++
openstack-tripleo-heat... [ 5] +++
openstack-tripleo-imag... [ 2] +
openstack-trove [ 1]
openstack-tuskar [ 3] ++
openstack-utils [ 4] ++
openvswitch [ 1]
python-glanceclient [ 3] ++
python-keystonemiddleware [ 1]
python-neutronclient [ 2] +
python-novaclient [ 1]
python-openstackclient [ 5] +++
python-oslo-config [ 1]
rdo-manager [ 42] ++++++++++++++++++++++++++++++
rdo-manager-cli [ 6] ++++
rdopkg [ 1]
RFEs [ 3] ++
tempest [ 1]

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, or ON_DEV and has not yet been fixed. (310 bugs)

### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-10-07 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site-
packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1264072 ] http://bugzilla.redhat.com/1264072 (NEW) Component: distribution Last change: 2015-10-02 Summary: app-catalog-ui new package [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-13 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] 
http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-12 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (12 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1265721 ] http://bugzilla.redhat.com/1265721 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (NEW) Component: openstack-ceilometer Last change: 2015-09-23 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1265818 ] http://bugzilla.redhat.com/1265818 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-28 Summary: ceilometer polling agent does not start [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1271002 ] http://bugzilla.redhat.com/1271002 (NEW) Component: openstack-ceilometer Last change: 2015-10-13 Summary: Ceilometer dbsync failing during HA deployment [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: 
python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: 
Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (1 bug) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-manila (1 bug) [1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-manila Last change: 2015-10-13 Summary: puppet module for manila should include service type - shareV2 ### openstack-neutron (7 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack 
deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". [1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-10-13 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (18 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesnt write to log [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-13 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages ### openstack-packstack (55 bugs) [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1269158 ] http://bugzilla.redhat.com/1269158 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: 
openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. 
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21
[1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config
[1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands'
[1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error
[1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail
[1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start
[1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created.
[1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty
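
Bug 1266028 above is about the connection strings packstack writes. For reference, this is what a Liberty-style DSN looks like, set by hand with openstack-config from openstack-utils (host and password are placeholders):

    openstack-config --set /etc/nova/nova.conf database connection \
        mysql+pymysql://nova:secretpass@192.0.2.10/nova

The only change from the old MySQL-Python driver is the mysql+pymysql:// scheme.
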
[1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance
[1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module"
[1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link
[1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node
[1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in "failed" status when CONFIG_KEYSTONE_SERVICE_NAME=httpd
[1269255 ] http://bugzilla.redhat.com/1269255 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Failed to start RabbitMQ broker.
[1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest
[1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred during setup Ironic via packstack. Invalid parameter rabbit_user
[1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd]
[1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service
[1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp
[1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tmp/packstack/b77f37620d9f4794b6f38730442962b6/manifests/xxx.xxx.xxx.xxx_swift.pp:90
[1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found
[1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image
[1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting
[1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend.
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced
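
Bug 1249169 above traces FWaaS failures to an unsynced database. A manual sync sketch, assuming the usual RDO config paths (the exact set of --config-file arguments varies per deployment):

    neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
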
[1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image
[1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service
[1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service"
[1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list
[1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50, encounters an error "ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp"
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on
[1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters

### openstack-puppet-modules (11 bugs)
[1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start
[1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering
[1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm
[1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API.
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled
[1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication
[1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size
[1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm
[1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log
[1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing

### openstack-selinux (13 bugs)
[1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional
[1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all process raised avc denied
[1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail
[1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux
[1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value)
[1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux
[1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages
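
Several of the openstack-selinux entries above boil down to AVC denials. The standard triage for confirming or locally working around one looks like this (keystone is used as the example service, and keystone_local is just an illustrative module name):

    ausearch -m avc -ts recent | grep keystone
    grep keystone /var/log/audit/audit.log | audit2allow -M keystone_local
    semodule -i keystone_local.pp
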
[1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000
[1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal;
[1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC
[1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-02 Summary: Nova rootwrap-daemon requires a selinux exception
[1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues
[1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux

### openstack-swift (2 bugs)
[1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files
[1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations

### openstack-tripleo (24 bugs)
[1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints
[1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack"
[1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA
[1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf
[1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos
[1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever
[1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO
[1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever
[1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix
[1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted
[1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted
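
Since bug 1223672 above says registration fails silently on a malformed instackenv.json, a cheap pre-flight check is to let Python's JSON parser judge the file before registering anything:

    python -m json.tool instackenv.json > /dev/null \
        && echo "instackenv.json parses" \
        || echo "instackenv.json is malformed"
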
[1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6
[1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful
[1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect
[1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes
[1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created
[1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack-build-images
[1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter

### openstack-tripleo-heat-templates (5 bugs)
[1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty
[1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1
[1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api
[1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template

### openstack-tripleo-image-elements (2 bugs)
[1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux
[1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist

### openstack-trove (1 bug)
[1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info

### openstack-tuskar (3 bugs)
[1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails
[1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates
[1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state

### openstack-utils (4 bugs)
[1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service
[1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable
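
For bug 1161501 above, an untested workaround sketch: if openstack-service cannot re-enable a service it disabled, fall back to systemd directly (assuming the stock unit name):

    systemctl enable openstack-nova-api.service
    systemctl start openstack-nova-api.service
    openstack-status | head
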
[1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb
[1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs-cleanup.service

### openvswitch (1 bug)
[1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity

### python-glanceclient (3 bugs)
[1271474 ] http://bugzilla.redhat.com/1271474 (NEW) Component: python-glanceclient Last change: 2015-10-14 Summary: Running glance image-list fails with 'Expected endpoint'
[1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-09-17 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py
[1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0

### python-keystonemiddleware (1 bug)
[1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3

### python-neutronclient (2 bugs)
[1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility
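
To illustrate the incompatibility in bug 1221063 above, as I understand the client change (please verify against your neutronclient version): the old key=value spelling stopped parsing once the option became a flag with an optional boolean argument.

    neutron net-create ext-net --router:external=True   # old spelling, rejected by newer clients
    neutron net-create ext-net --router:external True   # newer spelling
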
[1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long

### python-novaclient (1 bug)
[1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six

### python-openstackclient (5 bugs)
[1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal
[1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input
[1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp
[1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user
[1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement

### python-oslo-config (1 bug)
[1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config
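
Bug 1258014 above ("oslo_config != oslo.config") is the namespace rename: the library moved from the oslo.config namespace package to a plain oslo_config module, and packaging that mixes the two names breaks imports. A quick check of what a host actually provides:

    python -c 'from oslo_config import cfg'   # new-style module
    python -c 'from oslo.config import cfg'   # legacy namespace package
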
### rdo-manager (42 bugs)
[1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment
[1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable
[1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support configuration of default subnet pools
[1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud
[1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built
[1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS.
[1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support deploying VPNaaS
[1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Enable configuration of OVS ARP Responder
[1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Support IPv6
[1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles
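
Bug 1214343 above asks for a command that derives flavors from real hardware; done by hand today it looks roughly like this (the numbers are examples, and capabilities:profile follows the rdo-manager profile-matching convention):

    openstack flavor create --ram 8192 --disk 40 --vcpus 4 baremetal
    openstack flavor set --property cpu_arch=x86_64 \
        --property capabilities:profile=compute baremetal
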
[1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-12 Summary: Unexpected exception in background introspection thread
[1269610 ] http://bugzilla.redhat.com/1269610 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state
[1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP)
[1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment
[1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-09 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:]
[1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers.
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists
[1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-12 Summary: Glance client returning 'Expected endpoint'
[1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Support explicit configuration of L2 population
[1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start
[1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install-packages install
[1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone
[1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection
[1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500"
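
For scheduler failures like bug 1233410 above, the first things worth checking with the Kilo/Liberty-era clients (the flavor name is an example): are the nodes actually available to ironic, and has nova picked up their resources yet?

    ironic node-list
    nova hypervisor-stats
    nova flavor-show baremetal
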
[1271433 ] http://bugzilla.redhat.com/1271433 (NEW) Component: rdo-manager Last change: 2015-10-14 Summary: Horizon fails to load
[1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD
[1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc
[1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates
[1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree"
[1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon
[1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack
[1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment.
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] support override of API and RPC worker counts
[1271389 ] http://bugzilla.redhat.com/1271389 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: neutron-server fails to start when using short name (ml2) for core_plugin
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-10-14 Summary: overcloud-novacompute stuck in spawning state
[1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images
[1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure
[1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments
[1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url"
[1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images
[1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device-mapper*
[1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci

### rdo-manager-cli (6 bugs)
[1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step
[1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the output of openstack management plan show --long command is not readable
[1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value
[1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL)
[1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command
[1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling

### rdopkg (1 bug)
[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (3 bugs)
[1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot
[1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool
[1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root partition

### tempest (1 bug)
[1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, or ON_QA and has a fix available. You can help out by testing the fix to make sure it works as intended. (184 bugs)

### diskimage-builder (1 bug)
[1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding
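
The shape of the fix being verified in bug 1228761 above: DIB_YUM_REPO_CONF must be a single space-separated string of repo files, for example:

    export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
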
### distribution (6 bugs)
[1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack-neutron-*aas
[1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10
[1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance
[1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api
[1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO

### instack-undercloud (2 bugs)
[1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six"
[1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined

### openstack-ceilometer (1 bug)
[1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency

### openstack-cinder (5 bugs)
[1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0]
[1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume)
[1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
[1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO)
[994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo]

### openstack-glance (4 bugs)
[1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests
[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue
[1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency
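
Bug 1268146 above is a missing systemd dependency; a generic way to test such a fix without editing the packaged unit is a drop-in (the After= target here is only an illustration):

    mkdir -p /etc/systemd/system/openstack-glance-registry.service.d
    cat > /etc/systemd/system/openstack-glance-registry.service.d/deps.conf <<'EOF'
    [Unit]
    After=network-online.target
    Wants=network-online.target
    EOF
    systemctl daemon-reload
    systemctl restart openstack-glance-registry.service
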
[1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files

### openstack-heat (3 bugs)
[1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build
[1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack-heat
[1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listing stacks with status DELETE_COMPLETE

### openstack-horizon (1 bug)
[1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing

### openstack-ironic-discoverd (1 bug)
[1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery

### openstack-keystone (1 bug)
[1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3

### openstack-neutron (14 bugs)
[1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network
[1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron-plugin-vmware
[1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron-dist.conf
[1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron
[1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials
[1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore
[1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini
[1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd
[1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev
[1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack-neutron-openvswitch installed
[1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service
[1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id'
[1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini
[1270325 ] http://bugzilla.redhat.com/1270325 (POST) Component: openstack-neutron Last change: 2015-10-14 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration

### openstack-nova (5 bugs)
[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail
[1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all
[1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options
[1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python-psutil is missing after installing with packstack
[958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence.

### openstack-packstack (59 bugs)
[1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution
[1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db.
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error
[1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/manifests/192.168.122.82_api_nova.pp:41:3
[976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local)
[1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed
[1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled
[964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user
[1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules
[1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7
[1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher
[1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email
[1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils
[1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes
[1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error
[958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails
[1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails
[1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway.
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack
[1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allinone answerfile will fail with qpidd user logged in
[1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning
[979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules
[1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo
[1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non-localhost connections to 'guest' user
[1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: packstack: mysql fails to restart on CentOS6.5
[957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios
[995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests
[1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1
[990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts
[1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required
[1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
[1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels
[1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed
[1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5
[1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl
[1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp
[1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value
[1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer
[1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available
[1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive
[1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip
[1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/Nova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required
[1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
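
An untested workaround sketch for the KeyError in bug 1080369 above: pin the key explicitly in a generated answer file instead of relying on the default (the float range is an example value):

    packstack --gen-answer-file=multi-node.txt
    sed -i 's|^CONFIG_PROVISION_DEMO_FLOATRANGE=.*|CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28|' multi-node.txt
    packstack --answer-file=multi-node.txt
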
[1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time
[956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond
[1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance
[1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty)
[1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7
[974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL
[1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default
[1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent
[1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface
[1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL
[1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32
[991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted
[1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7)

### openstack-puppet-modules (19 bugs)
[1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed
[1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables-services
[1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure"
[1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support
[1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect
[1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation
[1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI
[1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack
[1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external
[1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon'
[1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator
[1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno
[1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6
[1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error
[1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance
[1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance
[1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig/network-scripts/ifcfg-br-{int,tun}
[1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation

### openstack-sahara (1 bug)
[1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM

### openstack-selinux (12 bugs)
[1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7)
[1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicOps fails to launch instance w/ selinux enforcing
[1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications
[1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors
[1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp
[1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances
[1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service
[1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?"
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent
[1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later
[1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access
[1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors

### openstack-swift (1 bug)
[997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages

### openstack-tripleo-heat-templates (1 bug)
[1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account

### openstack-trove (1 bug)
[1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies

### openstack-tuskar (1 bug)
[1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template

### openstack-tuskar-ui (3 bugs)
[1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails
[1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py
[1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path

### openstack-utils (2 bugs)
[1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager
[1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances

### Package Review (1 bug)
[1243550 ] http://bugzilla.redhat.com/1243550 (ON_QA) Component: Package Review Last change: 2015-10-09 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming

### python-cinderclient (1 bug)
[1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run

### python-django-horizon (3 bugs)
[1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack-dashboard/static/dashboard/
[1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content
Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (8 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager 
Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1268992 ] http://bugzilla.redhat.com/1268992 (MODIFIED) Component: rdo-manager Last change: 2015-10-08 Summary: [RDO-Manager][Liberty] : openstack baremetal introspection bulk start causes "Internal server error" ( introspection fails) . [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (8 bugs) [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jrichar1 at ball.com Wed Oct 14 17:21:25 2015 From: jrichar1 at ball.com (Richards, Jeff) Date: Wed, 14 Oct 2015 17:21:25 +0000 Subject: [Rdo-list] [rdo-manager] Workaround for neutron server ml2 bug (Bug 1271389)? Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B468B0433@EX2010-DTN-03.AERO.BALL.com> Is there a workaround for the bug John logged yesterday? https://bugzilla.redhat.com/show_bug.cgi?id=1271389 I tried changing the neutron.conf on the instack undercloud, rebuilding the images (not sure if that was required), destroying the overcloud stack and redeploying but got the same error. Note: I am using current-passed-ci repo and have this bug... Jeff Richards This message and any enclosures are intended only for the addressee. Please notify the sender by email if you are not the intended recipient. If you are not the intended recipient, you may not use, copy, disclose, or distribute this message or its contents or enclosures to any other person and any such actions may be unlawful. Ball reserves the right to monitor and review all messages and enclosures sent to or from this email address. -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Wed Oct 14 17:32:32 2015 From: trown at redhat.com (John Trowbridge) Date: Wed, 14 Oct 2015 13:32:32 -0400 Subject: [Rdo-list] [rdo-manager] Workaround for neutron server ml2 bug (Bug 1271389)? In-Reply-To: <6D1DB475E9650E4EADE65C051EFBB98B468B0433@EX2010-DTN-03.AERO.BALL.com> References: <6D1DB475E9650E4EADE65C051EFBB98B468B0433@EX2010-DTN-03.AERO.BALL.com> Message-ID: <561E91B0.1050902@redhat.com> On 10/14/2015 01:21 PM, Richards, Jeff wrote: > Is there a workaround for the bug John logged yesterday? The best workaround would be to use the pre-built images in the tar files here: https://repos.fedorapeople.org/repos/openstack-m/rdo-images-centos-liberty/ The non-tar files are old images, but should also work. I plan to replace those with what is in the tar files later today. The actual fix is likely just a documentation issue, which I am also working on validating before putting in the doc patch.
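For anyone who wants to go that route, the steps are roughly as follows (a sketch only -- the tarball name below is a placeholder for whatever is actually listed in that directory):

$ mkdir ~/images && cd ~/images
$ curl -O https://repos.fedorapeople.org/repos/openstack-m/rdo-images-centos-liberty/overcloud-full.tar   # placeholder name, repeat for each tarball listed
$ tar -xf overcloud-full.tar
$ source ~/stackrc
$ openstack overcloud image upload --image-path ~/images

The upload step should be the same one the docs use for locally built images, it just points at the pre-built files instead.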
> > https://bugzilla.redhat.com/show_bug.cgi?id=1271389 > > I tried changing the neutron.conf on the instack undercloud, rebuilding the images (not sure if that was required), destroying the overcloud stack and redeploying but got the same error. > > Note: I am using current-passed-ci repo and have this bug... > > Jeff Richards > > > > This message and any enclosures are intended only for the addressee. Please > notify the sender by email if you are not the intended recipient. If you are > not the intended recipient, you may not use, copy, disclose, or distribute this > message or its contents or enclosures to any other person and any such actions > may be unlawful. Ball reserves the right to monitor and review all messages > and enclosures sent to or from this email address. > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From erming at ualberta.ca Wed Oct 14 17:36:02 2015 From: erming at ualberta.ca (Erming Pei) Date: Wed, 14 Oct 2015 11:36:02 -0600 Subject: [Rdo-list] [rdo-manager] baremetal, undercloud and instackenv.json Message-ID: <561E9282.1010207@ualberta.ca> Hi, I am wondering that when doing baremetal provisioning with RDO-manager, should the undercloud node info be included in the instackenv.json file? My understanding is that the instackenv.json file should only include the overcloud controller and compute nodes, etc, right? Thanks, Erming -- --------------------------------------------- Erming Pei, Ph.D Senior System Analyst; Grid/Cloud Specialist Research Computing Group Information Services & Technology University of Alberta, Canada Tel: +1 7804929914 Fax: +1 7804921729 --------------------------------------------- From ibravo at ltgfederal.com Wed Oct 14 17:43:12 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 14 Oct 2015 13:43:12 -0400 Subject: [Rdo-list] [rdo-manager] baremetal, undercloud and instackenv.json In-Reply-To: <561E9282.1010207@ualberta.ca> References: <561E9282.1010207@ualberta.ca> Message-ID: <965F7C45-BBE7-4D37-A5BE-A805FD54F79C@ltgfederal.com> The instackenv.json file should only include the overcloud computers. That file includes the nodes that will be used to deploy the overcloud into. Those nodes will be wiped out and provided with an image disk that is stored in the undercloud computer. So no, you don't want the undercloud computer included there.
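For illustration, a minimal instackenv.json for two overcloud nodes would look something like this (every value here is a made-up placeholder, not taken from any real environment):

{
  "nodes": [
    {
      "mac": ["bb:bb:bb:bb:bb:01"],
      "cpu": "4",
      "memory": "8192",
      "disk": "40",
      "arch": "x86_64",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "10.0.0.11"
    },
    {
      "mac": ["bb:bb:bb:bb:bb:02"],
      "cpu": "4",
      "memory": "8192",
      "disk": "40",
      "arch": "x86_64",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "10.0.0.12"
    }
  ]
}

One entry per node that will be provisioned, and no entry for the undercloud machine itself.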
__ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Oct 14, 2015, at 1:36 PM, Erming Pei wrote: > > Hi, > > I am wondering that when doing baremetal provisioning with RDO-manager, should the undercloud node info be included in the instackenv.json file? > My understanding is that the instackenv.json file should only include the overcloud controller and compute nodes, etc, right? > > Thanks, > > Erming > > > -- > --------------------------------------------- > Erming Pei, Ph.D > Senior System Analyst; Grid/Cloud Specialist > > Research Computing Group > Information Services & Technology > University of Alberta, Canada > > Tel: +1 7804929914 Fax: +1 7804921729 > --------------------------------------------- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Wed Oct 14 17:49:40 2015 From: mcornea at redhat.com (Marius Cornea) Date: Wed, 14 Oct 2015 13:49:40 -0400 (EDT) Subject: [Rdo-list] Undercloud UI In-Reply-To: <261419076.42053229.1444844835580.JavaMail.zimbra@redhat.com> Message-ID: <887825232.42054881.1444844980924.JavaMail.zimbra@redhat.com> Hi everyone, Do we have any undercloud UI at this point? When accessing the undercloud via HTTP on port 80 I get the default welcome page. Thanks, Marius From ibravo at ltgfederal.com Wed Oct 14 17:52:57 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 14 Oct 2015 13:52:57 -0400 Subject: [Rdo-list] Undercloud UI In-Reply-To: <887825232.42054881.1444844980924.JavaMail.zimbra@redhat.com> References: <887825232.42054881.1444844980924.JavaMail.zimbra@redhat.com> Message-ID: <7CE77DAD-289F-4E8A-A99F-7D20C213E313@ltgfederal.com> John mentioned that there was an issue upstream with TripleO that was causing this. Don't know the issue #. __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Oct 14, 2015, at 1:49 PM, Marius Cornea wrote: > > Hi everyone, > > Do we have any undercloud UI at this point? When accessing the undercloud via HTTP on port 80 I get the default welcome page. > > Thanks, > Marius > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Oct 14 20:43:40 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 14 Oct 2015 16:43:40 -0400 Subject: [Rdo-list] ask.openstack unanswered questions Message-ID: <561EBE7C.6070700@redhat.com> I've fallen behind on the unanswered ask.openstack.org RDO questions, and could use some help. Now that Liberty is ready, I'm sure that you all have tons of extra time ;-) so, if you could pick your favorite category and help out a little, that would be great: Keystone ======== https://ask.openstack.org/en/question/82689/getting-an-authentication-token-for-tenant-using-admin-credentials-devstack-works-rdo-fails/ Summary: Authentication differences between Devstack and RDO deployment General ======= https://ask.openstack.org/en/question/82477/cannot-createupdate-flavor-metadata-from-horizon/ Missing option (create/edit flavor metadata) in RDO Horizon https://ask.openstack.org/en/question/82205/cant-start-instances-after-upgradereboot/ Can't start instances after an upgrade. https://ask.openstack.org/en/question/81899/cloud-init-metadata-failure-on-kilo/ cloud-init metadata failure on Kilo https://ask.openstack.org/en/question/32319/metadata-service-not-working-in-multi-node/ Another metadata failure question. Packstack ========= https://ask.openstack.org/en/question/82473/installing-openstack-using-packstack-rdo-failed/ Failure running packstack on F21 Network ======= https://ask.openstack.org/en/question/82161/not-able-to-ssh-into-instance-created-on-second-compute-node/ Unable to ssh to instance. https://ask.openstack.org/en/question/81709/external-nfs-network-for-all-vms/ I don't understand what's being asked here https://ask.openstack.org/en/question/80556/missing-veth-pair-bond-and-wrongsuperfluous-physical-interface/ https://ask.openstack.org/en/question/79617/rdos-kilo-unable-to-create-vlan-external-network/ Very sparse on details, and hasn't answered requests for more info. Storage ======= https://ask.openstack.org/en/question/82031/cinder-lvm-iscsi-cant-attach/ Can't attach volumes to instances -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From erming at ualberta.ca Wed Oct 14 22:03:50 2015 From: erming at ualberta.ca (Erming Pei) Date: Wed, 14 Oct 2015 16:03:50 -0600 Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment Message-ID: <561ED146.2070606@ualberta.ca> Hi, I am deploying the overcloud in baremetal way and after a couple of hours, it showed:
$ openstack overcloud deploy --templates Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again with option --include-password or export HEAT_INCLUDE_PASSWORD=1 Authentication required But I checked the nodes are now running: [stack at gcloudcon-3 ~]$ nova list +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=10.0.6.60 | | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=10.0.6.61 | +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ 1. Should I re-deploy the nodes or there is a way to do update/makeup for the authentication issue? 2. I don't know how to access to the nodes. There is not an overcloudrc file produced. $ ls overcloud* overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 overcloud-full.vmlinuz overcloud-full.d: dib-manifests Is it via ssh key or password? Should I set the authentication method somewhere? Thanks, Erming From sasha at redhat.com Wed Oct 14 22:20:15 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Wed, 14 Oct 2015 18:20:15 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <561ED146.2070606@ualberta.ca> References: <561ED146.2070606@ualberta.ca> Message-ID: <100630902.57596166.1444861215110.JavaMail.zimbra@redhat.com> Hi, So by default, when things work as expected, you should be able to login to your overcloud nodes as heat-admin (i.e. ssh heat-admin@).
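For example, with the ctlplane addresses from the nova list above:

$ ssh heat-admin@10.0.6.60   # overcloud-controller-0
$ ssh heat-admin@10.0.6.61   # overcloud-novacompute-0

Access is via the SSH key that the undercloud injects during deployment, so no password should be needed.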
I haven't seen this error, did you source the /home/stack/stackrc file prior to attempting the deployment? I'd recommend you to remove the running/failed deployment and re-attempt to deploy again. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Erming Pei" > To: rdo-list at redhat.com > Sent: Wednesday, October 14, 2015 6:03:50 PM > Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment > > Hi, > > I am deploying the overcloud in baremetal way and after a couple of > hours, it showed: > > $ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again > with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required > > > But I checked the nodes are now running: > > [stack at gcloudcon-3 ~]$ nova list > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > | ID | Name | > Status | Task State | Power State | Networks | > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | > ACTIVE | - | Running | ctlplane=10.0.6.60 | > | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | > ACTIVE | - | Running | ctlplane=10.0.6.61 | > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > > 1. Should I re-deploy the nodes or there is a way to do update/makeup > for the authentication issue? > > 2. > I don't know how to access to the nodes. > There is not an overcloudrc file produced. > > $ ls overcloud* > overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 > overcloud-full.vmlinuz > > overcloud-full.d: > dib-manifests > > Is it via ssh key or password? Should I set the authentication method > somewhere? > > > > Thanks, > > Erming > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From sasha at redhat.com Wed Oct 14 22:41:26 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Wed, 14 Oct 2015 18:41:26 -0400 (EDT) Subject: [Rdo-list] Deployed 1+1 on bare metal. In-Reply-To: <970726028.57597526.1444861840515.JavaMail.zimbra@redhat.com> Message-ID: <865488986.57601204.1444862486458.JavaMail.zimbra@redhat.com> Hi all, I was finally able to deploy a nonHA deployment (1 controller + 1 compute) on bare metal. https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c6 Thanks. Best regards, Sasha Chuzhoy. From erming at ualberta.ca Wed Oct 14 22:43:11 2015 From: erming at ualberta.ca (Erming Pei) Date: Wed, 14 Oct 2015 16:43:11 -0600 Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <100630902.57596166.1444861215110.JavaMail.zimbra@redhat.com> References: <561ED146.2070606@ualberta.ca> <100630902.57596166.1444861215110.JavaMail.zimbra@redhat.com> Message-ID: <561EDA7F.90802@ualberta.ca> Hi Sasha, Thanks for the useful information. I can access the nodes in the way you indicated. (Just in case you have any comment on this) Before that I tried to verify the overcloud as shown in the guide (step after deployment): $openstack overcloud validate --overcloud-auth-url $OS_AUTH_URL --overcloud-admin-password $OS_PASSWORD --network-id fe427999-d1ee-4bc1-b765-8cb91dbb4db7 All I got are the "Invalid credentials" errors in each test: / setUpClass (tempest.thirdparty.boto.test_s3_ec2_images.S3ImagesTest) -------------------------------------------------------------------- Captured traceback: ~~~~~~~~~~~~~~~~~~~ Traceback (most recent call last): File "/home/stack/tempest/tempest/test.py", line 272, in setUpClass six.reraise(etype, value, trace) File "/home/stack/tempest/tempest/test.py", line 260, in setUpClass cls.setup_credentials() File "/home/stack/tempest/tempest/test.py", line 351, in setup_credentials credential_type=credentials_type) File "/home/stack/tempest/tempest/test.py", line 474, in get_client_manager cred_provider = cls._get_credentials_provider() File "/home/stack/tempest/tempest/test.py", line 452, in _get_credentials_provider identity_version=identity_version) File "/home/stack/tempest/tempest/common/credentials.py", line 39, in get_isolated_credentials identity_version=identity_version) File "/home/stack/tempest/tempest/common/isolated_creds.py", line 149, in __init__ identity_version=self.identity_version) File "/home/stack/tempest/tempest/common/cred_provider.py", line 67, in get_configured_credentials identity_version=identity_version, **params) File "/home/stack/tempest/tempest/common/cred_provider.py", line 96, in get_credentials **params) File "/usr/lib/python2.7/site-packages/tempest_lib/auth.py", line 481, in get_credentials ca_certs=ca_certs, trace_requests=trace_requests) File "/usr/lib/python2.7/site-packages/tempest_lib/auth.py", line 182, in __init__ super(KeystoneAuthProvider, self).__init__(credentials) File "/usr/lib/python2.7/site-packages/tempest_lib/auth.py", line 45, in __init__ raise
TypeError("Invalid credentials") TypeError: Invalid credentials/ Besides, I logged into each node and checked the services but most of them are not running. Not sure if it's normal. But I will try to re-deploy again as you suggested with adding the HEAT_INCLUDE_PASSWORD=1 option. Thanks, Erming On 10/14/15, 4:20 PM, Sasha Chuzhoy wrote: > Hi, > So by default, when things work as expected, you should be able to login to your overcloud nodes as heat-admin (i.e. ssh heat-admin@). > > I haven't seen this error, did you source the /home/stack/stackrc file prior to attempting the deployment? > > I'd recomment you to remove the running/failed deployment and re-attempt to deploy again. > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- >> From: "Erming Pei" >> To: rdo-list at redhat.com >> Sent: Wednesday, October 14, 2015 6:03:50 PM >> Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment >> >> Hi, >> >> I am deploying the overcloud in baremetal way and after a couple of >> hours, it showed: >> >> $ openstack overcloud deploy --templates >> Deploying templates in the directory >> /usr/share/openstack-tripleo-heat-templates >> ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again >> with option --include-password or export HEAT_INCLUDE_PASSWORD=1 >> Authentication required >> >> >> But I checked the nodes are now running: >> >> [stack at gcloudcon-3 ~]$ nova list >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ >> | ID | Name | >> Status | Task State | Power State | Networks | >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ >> | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | >> ACTIVE | - | Running | ctlplane=10.0.6.60 | >> | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | >> ACTIVE | - | Running | ctlplane=10.0.6.61 | >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ >> >> 1. Should I re-deploy the nodes or there is a way to do update/makeup >> for the authentication issue? >> >> 2. >> I don't know how to access to the nodes. >> There is not an overcloudrc file produced. >> >> $ ls overcloud* >> overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 >> overcloud-full.vmlinuz >> >> overcloud-full.d: >> dib-manifests >> >> Is it via ssh key or password? Should I set the authentication method >> somewhere? >> >> >> >> Thanks, >> >> Erming >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> -- --------------------------------------------- Erming Pei, Ph.D Senior System Analyst; Grid/Cloud Specialist Research Computing Group Information Services & Technology University of Alberta, Canada Tel: +1 7804929914 Fax: +1 7804921729 --------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsneddon at redhat.com Wed Oct 14 23:23:52 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 14 Oct 2015 16:23:52 -0700 Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <561ED146.2070606@ualberta.ca> References: <561ED146.2070606@ualberta.ca> Message-ID: <561EE408.6030302@redhat.com> On 10/14/2015 03:03 PM, Erming Pei wrote: > Hi, > > I am deploying the overcloud in baremetal way and after a couple of > hours, it showed: > > $ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again > with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required > > > But I checked the nodes are now running: > > [stack at gcloudcon-3 ~]$ nova list > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > > | ID | Name | > Status | Task State | Power State | Networks | > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > > | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | > ACTIVE | - | Running | ctlplane=10.0.6.60 | > | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | > ACTIVE | - | Running | ctlplane=10.0.6.61 | > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > > > 1. Should I re-deploy the nodes or there is a way to do update/makeup > for the authentication issue? > > 2. > I don't know how to access to the nodes. > There is not an overcloudrc file produced. > > $ ls overcloud* > overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 > overcloud-full.vmlinuz > > overcloud-full.d: > dib-manifests > > Is it via ssh key or password? Should I set the authentication method > somewhere? > > > > Thanks, > > Erming > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com This error generally means that something in the deployment got stuck, and the deployment hung until the token expired after 4 hours. When that happens, there is no overcloudrc generated (because there is not a working overcloud). You won't be able to recover with a stack update, you'll need to perform a stack-delete and redeploy once you know what went wrong. Generally a deployment shouldn't take anywhere near that long, a bare metal deployment with 6 hosts takes me less than an hour, and less than 2 including a Ceph deployment. In fact, I usually set a timeout using the --timeout option, because if it hasn't finished after, say 90 minutes (depending on how complicated the deployment is), then I want it to bomb out so I can diagnose what went wrong and redeploy. Often when a deployment times out it is because there were connectivity issues between the nodes. 
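As a rough example of what the timeout mentioned above looks like on the command line (90 is just an illustrative value, the option takes minutes):

$ openstack overcloud deploy --templates --timeout 90

That way a stuck deployment errors out while the token is still valid, instead of hanging until it expires.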
Since you can log in to the hosts, you might want to do some basic network troubleshooting, such as: $ ip address # check to see that all the interfaces are there, and that the IP addresses have been assigned $ sudo ovs-vsctl show # make sure that the bridges have the proper interfaces, vlans, and that all the expected bridges show up $ ping # you can try this on all VLANs to make sure that any VLAN trunks are working properly $ sudo ovs-appctl bond/show # if running bonding, check to see the bond status $ sudo os-net-config --debug -c /etc/os-net-config/config.json # run the network configuration script again to make sure that it is able to configure the interfaces without error. WARNING, MAY BE DISRUPTIVE as this will reset the network interfaces, run on console if possible. However, I want to first double-check that you had a valid command line. You only show "openstack deploy overcloud --templates" in your original email. You did have a full command-line, right? Refer to the official installation guide for the right parameters. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From sasha at redhat.com Wed Oct 14 23:26:05 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Wed, 14 Oct 2015 19:26:05 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <561EDA7F.90802@ualberta.ca> References: <561ED146.2070606@ualberta.ca> <100630902.57596166.1444861215110.JavaMail.zimbra@redhat.com> <561EDA7F.90802@ualberta.ca> Message-ID: <957853273.57608147.1444865165744.JavaMail.zimbra@redhat.com> So, successful deployment creates the ~/overcloudrc file. Once you source that file, the OS_AUTH_URL and OS_PASSWORD variables are initialized with the right values. What you see is most probably because either the file wasn't sourced or the last deployment didn't complete as expected.
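A quick way to check, for example:

$ source ~/overcloudrc
$ env | grep ^OS_   # OS_AUTH_URL, OS_PASSWORD etc. should now point at the overcloud

If ~/overcloudrc does not exist at all, the deployment never actually completed.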
Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Erming Pei" > To: "Sasha Chuzhoy" > Cc: rdo-list at redhat.com > Sent: Wednesday, October 14, 2015 6:43:11 PM > Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment > > Hi Sasha, > > Thanks for the useful information. I can access the nodes in the way > you indicated. > > (Just in case you have any comment on this) > Before that I tried to verify the overcloud > as shown in the guide > (step after deployment): > > $openstack overcloud validate --overcloud-auth-url $OS_AUTH_URL > --overcloud-admin-password $OS_PASSWORD --network-id > fe427999-d1ee-4bc1-b765-8cb91dbb4db7 > > All I got are the "Invalid credentials" errors in each test: > / > setUpClass (tempest.thirdparty.boto.test_s3_ec2_images.S3ImagesTest) > -------------------------------------------------------------------- > > Captured traceback: > ~~~~~~~~~~~~~~~~~~~ > Traceback (most recent call last): > File "/home/stack/tempest/tempest/test.py", line 272, in setUpClass > six.reraise(etype, value, trace) > File "/home/stack/tempest/tempest/test.py", line 260, in setUpClass > cls.setup_credentials() > File "/home/stack/tempest/tempest/test.py", line 351, in > setup_credentials > credential_type=credentials_type) > File "/home/stack/tempest/tempest/test.py", line 474, in > get_client_manager > cred_provider = cls._get_credentials_provider() > File "/home/stack/tempest/tempest/test.py", line 452, in > _get_credentials_provider > identity_version=identity_version) > File "/home/stack/tempest/tempest/common/credentials.py", line > 39, in get_isolated_credentials > identity_version=identity_version) > File "/home/stack/tempest/tempest/common/isolated_creds.py", line > 149, in __init__ > identity_version=self.identity_version) > File "/home/stack/tempest/tempest/common/cred_provider.py", line > 67, in get_configured_credentials > identity_version=identity_version, **params) > File "/home/stack/tempest/tempest/common/cred_provider.py", line > 96, in get_credentials > **params) > File "/usr/lib/python2.7/site-packages/tempest_lib/auth.py", line > 481, in get_credentials > ca_certs=ca_certs, trace_requests=trace_requests) > File "/usr/lib/python2.7/site-packages/tempest_lib/auth.py", line > 182, in __init__ > super(KeystoneAuthProvider, self).__init__(credentials) > File "/usr/lib/python2.7/site-packages/tempest_lib/auth.py", line > 45, in __init__ > raise TypeError("Invalid credentials") > TypeError: Invalid credentials/ > > > Besides, I logged into each node and checked the services but most of > them are not running. > Not sure if it's normal. > > > > But I will try to re-deploy again as you suggested with adding the > HEAT_INCLUDE_PASSWORD=1 option. > > > > Thanks, > > Erming > > On 10/14/15, 4:20 PM, Sasha Chuzhoy wrote: > > Hi, > > So by default, when things work as expected, you should be able to login to > > your overcloud nodes as heat-admin (i.e. ssh heat-admin@). > > > > I haven't seen this error, did you source the /home/stack/stackrc file > > prior to attempting the deployment? > > > > I'd recommend you to remove the running/failed deployment and re-attempt to > > deploy again. > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > >> From: "Erming Pei" > >> To: rdo-list at redhat.com > >> Sent: Wednesday, October 14, 2015 6:03:50 PM > >> Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud > >> deployment > >> > >> Hi, > >> > >> I am deploying the overcloud in baremetal way and after a couple of > >> hours, it showed: > >> > >> $ openstack overcloud deploy --templates > >> Deploying templates in the directory > >> /usr/share/openstack-tripleo-heat-templates > >> ^[[A^[[BERROR: openstack ERROR: Authentication failed.
Please try again > >> with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > >> Authentication required > >> > >> > >> But I checked the nodes are now running: > >> > >> [stack at gcloudcon-3 ~]$ nova list > >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > >> | ID | Name | > >> Status | Task State | Power State | Networks | > >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > >> | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | > >> ACTIVE | - | Running | ctlplane=10.0.6.60 | > >> | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | > >> ACTIVE | - | Running | ctlplane=10.0.6.61 | > >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > >> > >> 1. Should I re-deploy the nodes or there is a way to do update/makeup > >> for the authentication issue? > >> > >> 2. > >> I don't know how to access to the nodes. > >> There is not an overcloudrc file produced. > >> > >> $ ls overcloud* > >> overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 > >> overcloud-full.vmlinuz > >> > >> overcloud-full.d: > >> dib-manifests > >> > >> Is it via ssh key or password? Should I set the authentication method > >> somewhere? > >> > >> > >> > >> Thanks, > >> > >> Erming > >> > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > -- > --------------------------------------------- > Erming Pei, Ph.D > Senior System Analyst; Grid/Cloud Specialist > > Research Computing Group > Information Services & Technology > University of Alberta, Canada > > Tel: +1 7804929914 Fax: +1 7804921729 > --------------------------------------------- > > From robin at tune.com Wed Oct 14 22:22:52 2015 From: robin at tune.com (Robin Yamaguchi) Date: Wed, 14 Oct 2015 15:22:52 -0700 Subject: [Rdo-list] Missing ironic-discoverd-ramdisk Message-ID: Greetings, I have provisioned an undercloud on centos 7 in a virtualized environment, using the instructions hosted here: http://docs.openstack.org/developer/tripleo-docs/ I am attempting to build my images based on these instructions: http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#get-images However, I continue to get this error: ++ which ironic-discoverd-ramdisk which: no ironic-discoverd-ramdisk in (/usr/lib64/ccache:/usr/lib/ccache:$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin) ++ echo '' + _LOCATION= + '[' -z '' ']' + echo 'ironic-discoverd-ramdisk is not found in PATH. Please ensure your elements install it' ironic-discoverd-ramdisk is not found in PATH. Please ensure your elements install it + exit 1 Command 'ramdisk-image-create -a amd64 -o discovery-ramdisk --ramdisk-element dracut-ramdisk centos7 ironic-discoverd-ramdisk-instack centos-cr selinux-permissive centos-cloud-repo element-manifest network-gateway epel rdo-release undercloud-package-install pip-and-virtualenv-override 2>&1 | tee dib-discovery.log' returned non-zero exit status 1 "openstack overcloud image build --all" however did successfully build the overcloud-full and deploy-ramdisk images as expected. Running "openstack overcloud image build --type discovery-ramdisk" gives the same error as above. 
Searching my yum repos for "ironic-discoverd-ramdisk" doesn't yield anything, and its not clear to me how i'd go about supplying this file. Any suggestions would be greatly appreciated. thank you, Robin Yamaguchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Thu Oct 15 01:24:09 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 14 Oct 2015 21:24:09 -0400 Subject: [Rdo-list] Missing ironic-discoverd-ramdisk In-Reply-To: References: Message-ID: a few days ago i had an issue with building the images and found the resolution on this mailing list, i just opened https://bugzilla.redhat.com/show_bug.cgi?id=1271888 for it i am not sure this is the solution to your problem though hope that helps On Wed, Oct 14, 2015 at 6:22 PM, Robin Yamaguchi wrote: > Greetings, > > I have provisioned an undercloud on centos 7 in a virtualized environment, > using the instructions hosted here: > http://docs.openstack.org/developer/tripleo-docs/ > > I am attempting to build my images based on these instructions: > http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#get-images > > However, I continue to get this error: > > ++ which ironic-discoverd-ramdisk > which: no ironic-discoverd-ramdisk in > (/usr/lib64/ccache:/usr/lib/ccache:$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin) > ++ echo '' > + _LOCATION= > + '[' -z '' ']' > + echo 'ironic-discoverd-ramdisk is not found in PATH. Please ensure your > elements install it' > ironic-discoverd-ramdisk is not found in PATH. Please ensure your elements > install it > + exit 1 > Command 'ramdisk-image-create -a amd64 -o discovery-ramdisk > --ramdisk-element dracut-ramdisk centos7 ironic-discoverd-ramdisk-instack > centos-cr selinux-permissive centos-cloud-repo element-manifest > network-gateway epel rdo-release undercloud-package-install > pip-and-virtualenv-override 2>&1 | tee dib-discovery.log' returned non-zero > exit status 1 > > > > "openstack overcloud image build --all" however did successfully build the > overcloud-full and deploy-ramdisk images as expected. Running "openstack > overcloud image build --type discovery-ramdisk" gives the same error as > above. > > Searching my yum repos for "ironic-discoverd-ramdisk" doesn't yield > anything, and its not clear to me how i'd go about supplying this file. > Any suggestions would be greatly appreciated. > > thank you, > Robin Yamaguchi > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Thu Oct 15 03:06:04 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 15 Oct 2015 05:06:04 +0200 Subject: [Rdo-list] [rdo-manager] liberty missing ironic user for undercloud Message-ID: Hello I am attempting to deploy liberty via https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html the undercloud installation hasnt progressed much, i have this from the /var/log/messages Oct 15 05:00:36 rdo ironic-inspector: 2015-10-15 05:00:36.717 20874 ERROR ironic_inspector.main Unauthorized: Could not find user: ironic (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-f833e8e3-7acd-409d-8abc-f744565af798) Oct 15 05:00:36 rdo ironic-inspector: 2015-10-15 05:00:36.717 20874 ERROR ironic_inspector.main i wonder whats missing. any hints? -- *805010942448935* * * *GR750055912MA* *Link to me on LinkedIn * From celik.esra at tubitak.gov.tr Thu Oct 15 08:40:46 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Thu, 15 Oct 2015 11:40:46 +0300 (EEST) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1468495656.42006088.1444840807316.JavaMail.zimbra@redhat.com> Message-ID: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> Sorry for the late reply ironic node-show results are below. I have my nodes power on after introspection bulk start. And I get the following warning Introspection didn't finish for nodes 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 Doesn't seem to be the same issue with https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html [stack at undercloud ~]$ ironic node-list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | available | False | | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | available | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed +------------------------+-------------------------------------------------------------------------+ | Property | Value | +------------------------+-------------------------------------------------------------------------+ | target_power_state | None | | extra | {} | | last_error | None | | updated_at | 2015-10-15T08:26:42+00:00 | | maintenance_reason | None | | provision_state | available | | clean_step | {} | | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | | console_enabled | False | | target_provision_state | None | | provision_updated_at | 2015-10-15T08:26:42+00:00 | | maintenance | False | | inspection_started_at | None | | inspection_finished_at | None | | power_state | power on | | driver | pxe_ipmitool | | reservation | None | | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10', | | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | | instance_uuid | None | | name | None | | driver_info | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18', | | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- | | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | | | 0d88-4632-af98-8defb05ca6e2'} | | created_at | 2015-10-15T07:49:08+00:00 | | driver_internal_info | {u'clean_steps': None} | | chassis_uuid | | | instance_info | {} | +------------------------+-------------------------------------------------------------------------+ [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 +------------------------+-------------------------------------------------------------------------+ | Property | Value | +------------------------+-------------------------------------------------------------------------+ | target_power_state | None | | extra | {} | | last_error | None | | updated_at | 
2015-10-15T08:26:42+00:00 | | maintenance_reason | None | | provision_state | available | | clean_step | {} | | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | | console_enabled | False | | target_provision_state | None | | provision_updated_at | 2015-10-15T08:26:42+00:00 | | maintenance | False | | inspection_started_at | None | | inspection_finished_at | None | | power_state | power on | | driver | pxe_ipmitool | | reservation | None | | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100', | | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | | instance_uuid | None | | name | None | | driver_info | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19', | | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- | | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | | | 0d88-4632-af98-8defb05ca6e2'} | | created_at | 2015-10-15T07:49:08+00:00 | | driver_internal_info | {u'clean_steps': None} | | chassis_uuid | | | instance_info | {} | +------------------------+-------------------------------------------------------------------------+ [stack at undercloud ~]$ And below I added my history for the stack user. I don't think I am doing anything other than what is in the https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty doc 1 vi instackenv.json 2 sudo yum -y install epel-release 3 sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo 6 sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules EOF" 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo 8 sudo yum -y install yum-plugin-priorities 9 sudo yum install -y python-tripleoclient 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf 11 vi undercloud.conf 12 export DIB_INSTALLTYPE_puppet_modules=source 13 openstack undercloud install 14 source stackrc 15 export NODE_DIST=centos7 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" 17 export DIB_INSTALLTYPE_puppet_modules=source 18 openstack overcloud image build --all 19 ls 20 openstack overcloud image upload 21 openstack baremetal import --json instackenv.json 22 openstack baremetal configure boot 23 ironic node-list 24 openstack baremetal introspection bulk start 25 ironic node-list 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 28 history Thanks Esra ÇELİK TÜBİTAK BİLGEM www.bilgem.tubitak.gov.tr celik.esra at tubitak.gov.tr ----- Orijinal Mesaj ----- Kimden: "Marius Cornea" Kime: "Esra Celik" Kk: "Ignacio Bravo" , rdo-list at redhat.com Gönderilenler: 14 Ekim Çarşamba 2015 19:40:07 Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" Can you do ironic node-show for your ironic nodes and post the results?
Also check the following suggestion if you're experiencing the same issue: https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html ----- Original Message ----- > From: "Esra Celik" > To: "Marius Cornea" > Cc: "Ignacio Bravo" , rdo-list at redhat.com > Sent: Wednesday, October 14, 2015 3:22:20 PM > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" > > > > Well in the early stage of the introspection I can see Client IP of nodes > (screenshot attached). But then I see continuous ironic-python-agent errors > (screenshot-2 attached). Errors repeat after time out.. And the nodes are > not powered off. > > Seems like I am stuck in introspection stage.. > > I can use ipmitool command to successfully power on/off the nodes > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U > root -R 3 -N 5 -P power status > Chassis Power is on > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > chassis power status > Chassis Power is on > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > chassis power off > Chassis Power Control: Down/Off > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > chassis power status > Chassis Power is off > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > chassis power on > Chassis Power Control: Up/On > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > chassis power status > Chassis Power is on > > > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > > ----- Orijinal Mesaj ----- > > Kimden: "Marius Cornea" > Kime: "Esra Celik" > Kk: "Ignacio Bravo" , rdo-list at redhat.com > G?nderilenler: 14 Ekim ?ar?amba 2015 14:59:30 > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > found" > > > ----- Original Message ----- > > From: "Esra Celik" > > To: "Marius Cornea" > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > was found" > > > > > > Well today I started with re-installing the OS and nothing seems wrong with > > undercloud installation, then; > > > > > > > > > > > > > > I see an error during image build > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > ... > > a lot of log > > ... > > ++ cat /etc/dib_dracut_drivers > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk > > ifconfig > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug > > rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock > > target_core_file target_core_pscsi configfs' -o 'dash plymouth' > > /tmp/ramdisk > > cat: write error: Broken pipe > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > + chmod o+r /tmp/kernel > > + trap EXIT > > + target_tag=99-build-dracut-ramdisk > > + date +%s.%N > > + output '99-build-dracut-ramdisk completed' > > ... > > a lot of log > > ... > > You can ignore that afaik, if you end up having all the required images it > should be ok. > > > > > Then, during introspection stage I see ironic-python-agent errors on nodes > > (screenshot attached) and the following warnings > > > > That looks odd. Is it showing up in the early stage of the introspection? 
At > some point it should receive an address by DHCP and the Network is > unreachable error should disappear. Does the introspection complete and the > nodes are turned off? > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | > > grep -i "warning\|error" > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > 10:30:12.119 > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > > Option "http_url" from group "pxe" is deprecated. Use option "http_url" > > from > > group "deploy". > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > 10:30:12.119 > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > > Option "http_root" from group "pxe" is deprecated. Use option "http_root" > > from group "deploy". > > > > > > Before deployment ironic node-list: > > > > This is odd too as I'm expecting the nodes to be powered off before running > deployment. > > > > > > > [stack at undercloud ~]$ ironic node-list > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > | Maintenance | > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | available > > | | > > | False | > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | available > > | | > > | False | > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > During deployment I get following errors > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | > > grep -i "warning\|error" > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > 11:29:01.739 > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f > > /tmp/tmpSCKHIv power status"for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. > > Error: Unexpected error while running command. > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > 11:29:01.739 > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed > > for > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error > > while > > running command. > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > 11:29:01.740 > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of > > 3. Error: IPMI call failed: power status.. > > > > This looks like an ipmi error, can you try to manually run commands using the > ipmitool and see if you get any success? It's also worth filing a bug with > details such as the ipmitool version, server model, drac firmware version. > > > > > > > > > > > > > Thanks a lot > > > > > > > > ----- Orijinal Mesaj ----- > > > > Kimden: "Marius Cornea" > > Kime: "Esra Celik" > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > G?nderilenler: 13 Ekim Sal? 
2015 21:16:14 > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > host was found" > > > > > > ----- Original Message ----- > > > From: "Esra Celik" > > > To: "Marius Cornea" > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > Sent: Tuesday, October 13, 2015 5:02:09 PM > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > host was found" > > > > > > During deployment they are powering on and deploying the images. I see > > > lot > > > of > > > connection error messages about ironic-python-agent but ignore them as > > > mentioned here > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) > > > > That was referring to the introspection stage. From what I can tell you are > > experiencing issues during deployment as it fails to provision the nova > > instances, can you check if during that stage the nodes get powered on? > > > > Make sure that before overcloud deploy the ironic nodes are available for > > provisioning (ironic node-list and check the provisioning state column). > > Also check that you didn't miss any step in the docs in regards to kernel > > and ramdisk assignment, introspection, flavor creation(so it matches the > > nodes resources) > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > > > > > > In instackenv.json file I do not need to add the undercloud node, or do > > > I? > > > > No, the nodes details should be enough. > > > > > And which log files should I watch during deployment? > > > > You can check the openstack-ironic-conductor logs(journalctl -fl -u > > openstack-ironic-conductor.service) and the logs in /var/log/nova. > > > > > Thanks > > > Esra > > > > > > > > > ----- Orijinal Mesaj -----Kimden: Marius Cornea Kime: > > > Esra Celik Kk: Ignacio Bravo > > > , rdo-list at redhat.comGönderilenler: Tue, 13 > > > Oct > > > 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy fails > > > with > > > error "No valid host was found" > > > > > > ----- Original Message -----> From: "Esra Celik" > > > > > > > To: "Ignacio Bravo" > Cc: rdo-list at redhat.com> > > > Sent: > > > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] OverCloud > > > deploy fails with error "No valid host was found"> > > > Actually I > > > re-installed the OS for Undercloud before deploying. However I did> not > > > re-install the OS in Compute and Controller nodes.. I will reinstall> > > > basic > > > OS for them too, and retry.. > > > > > > You don't need to reinstall the OS on the controller and compute, they > > > will > > > get the image served by the undercloud. I'd recommend that during > > > deployment > > > you watch the servers console and make sure they get powered on, pxe > > > boot, > > > and actually get the image deployed. > > > > > > Thanks > > > > > > > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: > > > > "Ignacio > > > > Bravo" > Kime: "Esra Celik" > > > > > Kk: rdo-list at redhat.com> > > > > Gönderilenler: > > > > 13 Ekim Sal? 
2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy fails > > > > with error "No valid host was> found"> > Esra,> > I encountered the > > > > same > > > > problem after deleting the stack and re-deploying.> > It turns out that > > > > 'heat stack-delete overcloud’ does remove the nodes from> > > > > ‘nova list’ and one would assume that the baremetal servers > > > > are now ready to> be used for the next stack, but when redeploying, I > > > > get > > > > the same message of> not enough hosts available.> > You can look into > > > > the > > > > nova logs and it mentions something about ‘node xxx is> already > > > > associated with UUID yyyy’ and ‘I tried 3 times and > > > > I’m > > > > giving up’.> The issue is that the UUID yyyy belonged to a prior > > > > unsuccessful deployment.> > I’m now redeploying the basic OS to > > > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG Federal, Inc> > > > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, 2015, at > > > > 9:25 > > > > AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > > > > > OverCloud deploy fails with error "No valid host was found"> > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates> Deploying > > > > templates in the directory> > > > > /usr/share/openstack-tripleo-heat-templates> > > > > Stack failed with status: Resource CREATE failed: resources.Compute:> > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status > > > > ERROR> > > > > due to "Message: No valid host was found. There are not enough hosts> > > > > available., Code: 500"> Heat Stack create failed.> > Here are some > > > > logs:> > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE Tue > > > > > Oct > > > > 13> 16:18:17 2015> > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > | resource_name | physical_resource_id | resource_type | > > > > | resource_status > > > > |> | updated_time | stack_name |> > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | > > > > | OS::Heat::ResourceGroup > > > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | Controller | > > > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | | > > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> | > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | > > > > CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |> > > > > | > > > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server |> > > > > | > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute | > > > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | > > > > CREATE_FAILED > > > > | 2015-10-13T10:20:56 |> | > > > > | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > > > |> > > > > 
+-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute> > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > | Property | Value |> > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > | attributes | { |> | | "attributes": null, |> | | "refs": null |> | | > > > > | } > > > > |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | links > > > > |> | |> > > > > | > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > | (self) |> | | > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > | | (stack) |> | | > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > | | (nested) |> | logical_resource_id | Compute |> | > > > > | | physical_resource_id > > > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | | > > > > ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment |> | | > > > > AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | Compute > > > > |> > > > > | resource_status | CREATE_FAILED |> | resource_status_reason | > > > > resources.Compute: ResourceInError:> | > > > > resources[0].resources.NovaCompute: > > > > Went to status ERROR due to "Message:> | No valid host was found. There > > > > are not enough hosts available., Code: 500"> | |> | resource_type | > > > > OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 |> > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > This is my instackenv.json for 1 compute and 1 control node to be > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> "mac":[> > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> "disk":"10",> > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> "mac":[> > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> "disk":"100",> > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? 
Thanks in advance
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mcornea at redhat.com  Thu Oct 15 10:40:32 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Thu, 15 Oct 2015 06:40:32 -0400 (EDT)
Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was
 found"
In-Reply-To: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <370767486.42538011.1444905632463.JavaMail.zimbra@redhat.com>

Dmitry, any recommendation for this kind of scenario? It looks like
introspection is stuck and the nodes are kept powered on. It would be great
for debugging purposes to get shell access via the console and check what
went wrong. Is this possible at this time?

Thanks,
Marius

----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo" , rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 10:40:46 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was
> found"
>
> Sorry for the late reply
>
> ironic node-show results are below. I have my nodes powered on after
> introspection bulk start.
> And I get the following warning
> Introspection didn't finish for nodes
> 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
>
> Doesn't seem to be the same issue with
> https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
>
> [stack at undercloud ~]$ ironic node-list
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
> | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
> [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> +------------------------+-------------------------------------------------------------------------+
> | Property               | Value                                                                   |
> +------------------------+-------------------------------------------------------------------------+
> | target_power_state     | None                                                                    |
> | extra                  | {}                                                                      |
> | last_error             | None                                                                    |
> | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> | maintenance_reason     | None                                                                    |
> | provision_state        | available                                                               |
> | clean_step             | {}                                                                      |
> | uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed                                    |
> | console_enabled        | False                                                                   |
> | target_provision_state | None                                                                    |
> | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> | maintenance            | False                                                                   |
> | inspection_started_at  | None                                                                    |
> | inspection_finished_at | None                                                                    |
> | power_state            | power on                                                                |
> | driver                 | pxe_ipmitool                                                            |
> | reservation            | None                                                                    |
> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10',     |
> |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> | instance_uuid          | None                                                                    |
> | name                   | None                                                                    |
> | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18',         |
> |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
> |                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
> | created_at             | 2015-10-15T07:49:08+00:00                                               |
> | driver_internal_info   | {u'clean_steps': None}                                                  |
> | chassis_uuid           |                                                                         |
> | instance_info          | {}                                                                      |
> +------------------------+-------------------------------------------------------------------------+
>
> [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> +------------------------+-------------------------------------------------------------------------+
> | Property               | Value                                                                   |
> +------------------------+-------------------------------------------------------------------------+
> | target_power_state     | None                                                                    |
> | extra                  | {}                                                                      |
> | last_error             | None                                                                    |
> | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> | maintenance_reason     | None                                                                    |
> | provision_state        | available                                                               |
> | clean_step             | {}                                                                      |
> | uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218                                    |
> | console_enabled        | False                                                                   |
> | target_provision_state | None                                                                    |
> | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> | maintenance            | False                                                                   |
> | inspection_started_at  | None                                                                    |
> | inspection_finished_at | None                                                                    |
> | power_state            | power on                                                                |
> | driver                 | pxe_ipmitool                                                            |
> | reservation            | None                                                                    |
> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100',    |
> |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> | instance_uuid          | None                                                                    |
> | name                   | None                                                                    |
> | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19',         |
> |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
> |                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
> | created_at             | 2015-10-15T07:49:08+00:00                                               |
> | driver_internal_info   | {u'clean_steps': None}                                                  |
> | chassis_uuid           |                                                                         |
> | instance_info          | {}                                                                      |
> +------------------------+-------------------------------------------------------------------------+
> [stack at undercloud ~]$
>
> And below I added my history for the stack user. I don't think I am doing
> something other than the
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty
> doc
>
> 1  vi instackenv.json
> 2  sudo yum -y install epel-release
> 3  sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> 4  sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> 5  sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
> 6  sudo /bin/bash -c "cat <<EOF >>/etc/yum.repos.d/delorean-current.repo
>    includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
>    EOF"
> 7  sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> 8  sudo yum -y install yum-plugin-priorities
> 9  sudo yum install -y python-tripleoclient
> 10  cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> 11  vi undercloud.conf
> 12  export DIB_INSTALLTYPE_puppet_modules=source
> 13  openstack undercloud install
> 14  source stackrc
> 15  export NODE_DIST=centos7
> 16  export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
> 17  export DIB_INSTALLTYPE_puppet_modules=source
> 18  openstack overcloud image build --all
> 19  ls
> 20  openstack overcloud image upload
> 21  openstack baremetal import --json instackenv.json
> 22  openstack baremetal configure boot
> 23  ironic node-list
> 24  openstack baremetal introspection bulk start
> 25  ironic node-list
> 26  ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> 27  ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> 28  history
>
> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> ----- Orijinal Mesaj -----
>
> Kimden: "Marius Cornea"
> Kime: "Esra Celik"
> Kk: "Ignacio Bravo" , rdo-list at redhat.com
> Gönderilenler: 14 Ekim Çarşamba 2015 19:40:07
> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was
> found"
>
> Can you do ironic node-show for your ironic nodes and post the results?
Also > check the following suggestion if you're experiencing the same issue: > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > ----- Original Message ----- > > From: "Esra Celik" > > To: "Marius Cornea" > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > Sent: Wednesday, October 14, 2015 3:22:20 PM > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > was found" > > > > > > > > Well in the early stage of the introspection I can see Client IP of nodes > > (screenshot attached). But then I see continuous ironic-python-agent errors > > (screenshot-2 attached). Errors repeat after time out.. And the nodes are > > not powered off. > > > > Seems like I am stuck in introspection stage.. > > > > I can use ipmitool command to successfully power on/off the nodes > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR > > -U > > root -R 3 -N 5 -P power status > > Chassis Power is on > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power status > > Chassis Power is on > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power off > > Chassis Power Control: Down/Off > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power status > > Chassis Power is off > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power on > > Chassis Power Control: Up/On > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power status > > Chassis Power is on > > > > > > Esra ?EL?K > > T?B?TAK B?LGEM > > www.bilgem.tubitak.gov.tr > > celik.esra at tubitak.gov.tr > > > > > > ----- Orijinal Mesaj ----- > > > > Kimden: "Marius Cornea" > > Kime: "Esra Celik" > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > G?nderilenler: 14 Ekim ?ar?amba 2015 14:59:30 > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > found" > > > > > > ----- Original Message ----- > > > From: "Esra Celik" > > > To: "Marius Cornea" > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > was found" > > > > > > > > > Well today I started with re-installing the OS and nothing seems wrong > > > with > > > undercloud installation, then; > > > > > > > > > > > > > > > > > > > > > I see an error during image build > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > > ... > > > a lot of log > > > ... > > > ++ cat /etc/dib_dracut_drivers > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk > > > ifconfig > > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell > > > rd.debug > > > rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net > > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock > > > target_core_file target_core_pscsi configfs' -o 'dash plymouth' > > > /tmp/ramdisk > > > cat: write error: Broken pipe > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > > + chmod o+r /tmp/kernel > > > + trap EXIT > > > + target_tag=99-build-dracut-ramdisk > > > + date +%s.%N > > > + output '99-build-dracut-ramdisk completed' > > > ... > > > a lot of log > > > ... 
> > > > You can ignore that afaik, if you end up having all the required images it > > should be ok. > > > > > > > > Then, during introspection stage I see ironic-python-agent errors on > > > nodes > > > (screenshot attached) and the following warnings > > > > > > > That looks odd. Is it showing up in the early stage of the introspection? > > At > > some point it should receive an address by DHCP and the Network is > > unreachable error should disappear. Does the introspection complete and the > > nodes are turned off? > > > > > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service > > > | > > > grep -i "warning\|error" > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > 10:30:12.119 > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > > > Option "http_url" from group "pxe" is deprecated. Use option "http_url" > > > from > > > group "deploy". > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > 10:30:12.119 > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > > > Option "http_root" from group "pxe" is deprecated. Use option "http_root" > > > from group "deploy". > > > > > > > > > Before deployment ironic node-list: > > > > > > > This is odd too as I'm expecting the nodes to be powered off before running > > deployment. > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > > | Maintenance | > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | > > > | available > > > | | > > > | False | > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | > > > | available > > > | | > > > | False | > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > During deployment I get following errors > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service > > > | > > > grep -i "warning\|error" > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > 11:29:01.739 > > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting > > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 > > > -f > > > /tmp/tmpSCKHIv power status"for node > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553. > > > Error: Unexpected error while running command. > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > 11:29:01.739 > > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed > > > for > > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error > > > while > > > running command. > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > 11:29:01.740 > > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could > > > not > > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 > > > of > > > 3. Error: IPMI call failed: power status.. > > > > > > > This looks like an ipmi error, can you try to manually run commands using > > the > > ipmitool and see if you get any success? 
It's also worth filing a bug with > > details such as the ipmitool version, server model, drac firmware version. > > > > > > > > > > > > > > > > > > > > Thanks a lot > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > Kimden: "Marius Cornea" > > > Kime: "Esra Celik" > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > G?nderilenler: 13 Ekim Sal? 2015 21:16:14 > > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > host was found" > > > > > > > > > ----- Original Message ----- > > > > From: "Esra Celik" > > > > To: "Marius Cornea" > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > valid > > > > host was found" > > > > > > > > During deployment they are powering on and deploying the images. I see > > > > lot > > > > of > > > > connection error messages about ironic-python-agent but ignore them as > > > > mentioned here > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) > > > > > > That was referring to the introspection stage. From what I can tell you > > > are > > > experiencing issues during deployment as it fails to provision the nova > > > instances, can you check if during that stage the nodes get powered on? > > > > > > Make sure that before overcloud deploy the ironic nodes are available for > > > provisioning (ironic node-list and check the provisioning state column). > > > Also check that you didn't miss any step in the docs in regards to kernel > > > and ramdisk assignment, introspection, flavor creation(so it matches the > > > nodes resources) > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > > > > > > > > > In instackenv.json file I do not need to add the undercloud node, or do > > > > I? > > > > > > No, the nodes details should be enough. > > > > > > > And which log files should I watch during deployment? > > > > > > You can check the openstack-ironic-conductor logs(journalctl -fl -u > > > openstack-ironic-conductor.service) and the logs in /var/log/nova. > > > > > > > Thanks > > > > Esra > > > > > > > > > > > > ----- Orijinal Mesaj -----Kimden: Marius Cornea > > > > Kime: > > > > Esra Celik Kk: Ignacio Bravo > > > > , rdo-list at redhat.comGönderilenler: Tue, 13 > > > > Oct > > > > 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy fails > > > > with > > > > error "No valid host was found" > > > > > > > > ----- Original Message -----> From: "Esra Celik" > > > > > > > > > To: "Ignacio Bravo" > Cc: rdo-list at redhat.com> > > > > Sent: > > > > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] OverCloud > > > > deploy fails with error "No valid host was found"> > > > Actually I > > > > re-installed the OS for Undercloud before deploying. However I did> not > > > > re-install the OS in Compute and Controller nodes.. I will reinstall> > > > > basic > > > > OS for them too, and retry.. > > > > > > > > You don't need to reinstall the OS on the controller and compute, they > > > > will > > > > get the image served by the undercloud. I'd recommend that during > > > > deployment > > > > you watch the servers console and make sure they get powered on, pxe > > > > boot, > > > > and actually get the image deployed. 
> > > > > > > > Thanks > > > > > > > > > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: > > > > > "Ignacio > > > > > Bravo" > Kime: "Esra Celik" > > > > > > Kk: rdo-list at redhat.com> > > > > > Gönderilenler: > > > > > 13 Ekim Sal? 2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy > > > > > fails > > > > > with error "No valid host was> found"> > Esra,> > I encountered the > > > > > same > > > > > problem after deleting the stack and re-deploying.> > It turns out > > > > > that > > > > > 'heat stack-delete overcloud’ does remove the nodes from> > > > > > ‘nova list’ and one would assume that the baremetal > > > > > servers > > > > > are now ready to> be used for the next stack, but when redeploying, I > > > > > get > > > > > the same message of> not enough hosts available.> > You can look into > > > > > the > > > > > nova logs and it mentions something about ‘node xxx is> already > > > > > associated with UUID yyyy’ and ‘I tried 3 times and > > > > > I’m > > > > > giving up’.> The issue is that the UUID yyyy belonged to a > > > > > prior > > > > > unsuccessful deployment.> > I’m now redeploying the basic OS to > > > > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG Federal, > > > > > Inc> > > > > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, 2015, at > > > > > 9:25 > > > > > AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > > > > > > OverCloud deploy fails with error "No valid host was found"> > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates> > > > > > Deploying > > > > > templates in the directory> > > > > > /usr/share/openstack-tripleo-heat-templates> > > > > > Stack failed with status: Resource CREATE failed: resources.Compute:> > > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status > > > > > ERROR> > > > > > due to "Message: No valid host was found. 
There are not enough hosts> > > > > > available., Code: 500"> Heat Stack create failed.> > Here are some > > > > > logs:> > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE > > > > > > Tue > > > > > > Oct > > > > > 13> 16:18:17 2015> > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > | resource_name | physical_resource_id | resource_type | > > > > > | resource_status > > > > > |> | updated_time | stack_name |> > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | > > > > > | OS::Heat::ResourceGroup > > > > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | Controller > > > > > |> | | > > > > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | | > > > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > > > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> | > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > > > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > > > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | > > > > > CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r > > > > > |> > > > > > | > > > > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server > > > > > |> > > > > > | > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > > > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute | > > > > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | > > > > > CREATE_FAILED > > > > > | 2015-10-13T10:20:56 |> | > > > > > | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > > > > |> > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute> > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > | Property | Value |> > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > | attributes | { |> | | "attributes": null, |> | | "refs": null |> | > > > > > | | > > > > > | } > > > > > |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | > > > > > |> | links > > > > > |> | |> > > > > > | > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > > | (self) |> | | > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > > | | (stack) |> | | > > > > > 
http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > > | | (nested) |> | logical_resource_id | Compute |> | > > > > > | | physical_resource_id > > > > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | | > > > > > ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment |> | > > > > > | > > > > > AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | > > > > > Compute > > > > > |> > > > > > | resource_status | CREATE_FAILED |> | resource_status_reason | > > > > > resources.Compute: ResourceInError:> | > > > > > resources[0].resources.NovaCompute: > > > > > Went to status ERROR due to "Message:> | No valid host was found. > > > > > There > > > > > are not enough hosts available., Code: 500"> | |> | resource_type | > > > > > OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 |> > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > This is my instackenv.json for 1 compute and 1 control node to > > > > > > > > be > > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> "mac":[> > > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> "disk":"10",> > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> "mac":[> > > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> "disk":"100",> > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? Thanks in advance> > > > > > > > > > > > > > > > > > Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> > > > > > celik.esra at tubitak.gov.tr> > > > > > > _______________________________________________> Rdo-list mailing > > > > > list> > > > > > Rdo-list at redhat.com> > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > > > > > > _______________________________________________> Rdo-list mailing > > > > > list> > > > > > Rdo-list at redhat.com> > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > From mcornea at redhat.com Thu Oct 15 10:45:53 2015 From: mcornea at redhat.com (Marius Cornea) Date: Thu, 15 Oct 2015 06:45:53 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <561EE408.6030302@redhat.com> References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> Message-ID: <882059868.42539134.1444905953214.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Dan Sneddon" > To: rdo-list at redhat.com > Sent: Thursday, October 15, 2015 1:23:52 AM > Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment > > On 10/14/2015 03:03 PM, Erming Pei wrote: > > Hi, > > > > I am deploying the overcloud in baremetal way and after a couple of > > hours, it showed: > > > > $ openstack overcloud deploy --templates > > Deploying templates in the directory > > /usr/share/openstack-tripleo-heat-templates > > ^[[A^[[BERROR: openstack ERROR: Authentication failed. 
Please try again
> > with option --include-password or export HEAT_INCLUDE_PASSWORD=1
> > Authentication required
> >
> > But I checked the nodes are now running:
> >
> > [stack at gcloudcon-3 ~]$ nova list
> > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> > | ID                                   | Name                    | Status | Task State | Power State | Networks           |
> > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> > | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=10.0.6.60 |
> > | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=10.0.6.61 |
> > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> >
> > 1. Should I re-deploy the nodes, or is there a way to update/make up for
> > the authentication issue?
> >
> > 2. I don't know how to access the nodes.
> > There is no overcloudrc file produced.
> >
> > $ ls overcloud*
> > overcloud-env.json overcloud-full.initrd overcloud-full.qcow2
> > overcloud-full.vmlinuz
> >
> > overcloud-full.d:
> > dib-manifests
> >
> > Is it via ssh key or password? Should I set the authentication method
> > somewhere?
> >
> > Thanks,
> >
> > Erming
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> This error generally means that something in the deployment got stuck,
> and the deployment hung until the token expired after 4 hours. When
> that happens, there is no overcloudrc generated (because there is not a
> working overcloud). You won't be able to recover with a stack update;
> you'll need to perform a stack-delete and redeploy once you know what
> went wrong.
>
> Generally a deployment shouldn't take anywhere near that long: a bare
> metal deployment with 6 hosts takes me less than an hour, and less than
> 2 including a Ceph deployment. In fact, I usually set a timeout using
> the --timeout option, because if it hasn't finished after, say, 90
> minutes (depending on how complicated the deployment is), then I want
> it to bomb out so I can diagnose what went wrong and redeploy.
>
> Often when a deployment times out it is because there were connectivity
> issues between the nodes. Since you can log in to the hosts, you might
> want to do some basic network troubleshooting, such as:
>
> $ ip address   # check to see that all the interfaces are there, and
> that the IP addresses have been assigned
>
> $ sudo ovs-vsctl show   # make sure that the bridges have the proper
> interfaces, vlans, and that all the expected bridges show up
>
> $ ping <remote IP>   # you can try this on all VLANs to make
> sure that any VLAN trunks are working properly
>
> $ sudo ovs-appctl bond/show   # if running bonding, check to see the
> bond status
>
> $ sudo os-net-config --debug -c /etc/os-net-config/config.json   # run
> the network configuration script again to make sure that it is able to
> configure the interfaces without error. WARNING, MAY BE DISRUPTIVE as
> this will reset the network interfaces; run on console if possible.
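An aside for readers working through that checklist on their own nodes: the
checks can be strung together roughly as below. This is only a sketch. The
10.0.6.1 gateway address is an invented placeholder (chosen to match the
ctlplane subnet visible in the nova list output above; substitute one
reachable IP per VLAN in your own environment); everything else repeats the
commands quoted above.

$ ip address                                    # interfaces present, addresses assigned
$ sudo ovs-vsctl show                           # expected bridges, ports, and VLAN tags
$ ping -c 3 10.0.6.1                            # placeholder gateway; repeat with one IP per VLAN
$ sudo ovs-appctl bond/show                     # only meaningful when bonding is in use
$ sudo os-net-config --debug -c /etc/os-net-config/config.json   # disruptive; run from the console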
Also looking for errors in the os-collect-config logs might prove useful
(journalctl -u os-collect-config); more details are in the debugging docs:
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/troubleshooting/troubleshooting-overcloud.html

> However, I want to first double-check that you had a valid command
> line. You only show "openstack deploy overcloud --templates" in your
> original email. You did have a full command-line, right? Refer to the
> official installation guide for the right parameters.
>
> --
> Dan Sneddon | Principal OpenStack Engineer
> dsneddon at redhat.com | redhat.com/openstack
> 650.254.4025 | dsneddon:irc @dxs:twitter
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mcornea at redhat.com  Thu Oct 15 10:51:48 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Thu, 15 Oct 2015 06:51:48 -0400 (EDT)
Subject: [Rdo-list] Missing ironic-discoverd-ramdisk
In-Reply-To:
References:
Message-ID: <1524457575.42540624.1444906308106.JavaMail.zimbra@redhat.com>

Hi Robin,

There shouldn't be an ironic-discoverd-ramdisk image anymore. I recommend
that you follow the flow described in the Liberty docs:
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/

Thanks,
Marius

----- Original Message -----
> From: "Robin Yamaguchi"
> To: rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 12:22:52 AM
> Subject: [Rdo-list] Missing ironic-discoverd-ramdisk
>
> Greetings,
>
> I have provisioned an undercloud on CentOS 7 in a virtualized environment,
> using the instructions hosted here:
> http://docs.openstack.org/developer/tripleo-docs/
>
> I am attempting to build my images based on these instructions:
> http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#get-images
>
> However, I continue to get this error:
>
> ++ which ironic-discoverd-ramdisk
> which: no ironic-discoverd-ramdisk in
> (/usr/lib64/ccache:/usr/lib/ccache:$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
> ++ echo ''
> + _LOCATION=
> + '[' -z '' ']'
> + echo 'ironic-discoverd-ramdisk is not found in PATH. Please ensure your
> elements install it'
> ironic-discoverd-ramdisk is not found in PATH. Please ensure your elements
> install it
> + exit 1
> Command 'ramdisk-image-create -a amd64 -o discovery-ramdisk --ramdisk-element
> dracut-ramdisk centos7 ironic-discoverd-ramdisk-instack centos-cr
> selinux-permissive centos-cloud-repo element-manifest network-gateway epel
> rdo-release undercloud-package-install pip-and-virtualenv-override 2>&1 |
> tee dib-discovery.log' returned non-zero exit status 1
>
> "openstack overcloud image build --all" did, however, successfully build the
> overcloud-full and deploy-ramdisk images as expected. Running "openstack
> overcloud image build --type discovery-ramdisk" gives the same error as
> above.
>
> Searching my yum repos for "ironic-discoverd-ramdisk" doesn't yield anything,
> and it's not clear to me how I'd go about supplying this file. Any
> suggestions would be greatly appreciated.
>
> thank you,
> Robin Yamaguchi
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mcornea at redhat.com  Thu Oct 15 10:59:33 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Thu, 15 Oct 2015 06:59:33 -0400 (EDT)
Subject: [Rdo-list] [rdo-manager] liberty missing ironic user for undercloud
In-Reply-To:
References:
Message-ID: <1743332195.42542874.1444906773581.JavaMail.zimbra@redhat.com>

Hi,

I've seen a similar error when I didn't do the includepkgs in
delorean-current.repo. Can you double check that your repos are set
according to the docs? Here's how mine look:
http://paste.openstack.org/show/476349/

Thanks,
Marius

----- Original Message -----
> From: "Mohammed Arafa"
> To: rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 5:06:04 AM
> Subject: [Rdo-list] [rdo-manager] liberty missing ironic user for undercloud
>
> Hello
>
> I am attempting to deploy liberty via
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html
>
> The undercloud installation hasn't progressed much; I have this in
> /var/log/messages:
>
> Oct 15 05:00:36 rdo ironic-inspector: 2015-10-15 05:00:36.717 20874
> ERROR ironic_inspector.main Unauthorized: Could not find user: ironic
> (Disable debug mode to suppress these details.) (HTTP 401)
> (Request-ID: req-f833e8e3-7acd-409d-8abc-f744565af798)
> Oct 15 05:00:36 rdo ironic-inspector: 2015-10-15 05:00:36.717 20874
> ERROR ironic_inspector.main
>
> I wonder what's missing. Any hints?
> --
>
> 805010942448935
> GR750055912MA
>
> Link to me on LinkedIn
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ibravo at ltgfederal.com Thu Oct 15 12:38:59 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Thu, 15 Oct 2015 08:38:59 -0400
Subject: [Rdo-list] [rdo-manager] [Ceph] Ceph deployment / usage
Message-ID: <561F9E63.1020906@ltgfederal.com>

All,

I need to deploy oVirt to host some VMs today, and it requires either CephFS
or GlusterFS as the backend. Ideally, I would choose CephFS so that oVirt +
RDO-Manager can share the same Ceph storage resources rather than having two
distinct storage products.

After the last couple of days, a lot of bug fixes have been done to
RDO-Manager, but I have not yet been able to perform an HA deployment (3
controllers) plus Ceph as the backend.

So my question is: can I deploy Ceph as a standalone product and then
configure RDO-Manager to use this pool without deploying a new Ceph
instance, or should I deploy everything through RDO-Manager and then build
from there?

Thanks for your insight.
IB -- Ignacio Bravo LTG Federal Inc From rbowen at redhat.com Thu Oct 15 13:01:03 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 15 Oct 2015 09:01:03 -0400 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: <561E4133.8060804@redhat.com> References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> Message-ID: <561FA38F.6050508@redhat.com> Just to follow up: http://docs.openstack.org/ Installation Guide for Debian 8 (not yet available) Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 22 (not yet available) --Rich On 10/14/2015 07:49 AM, Rich Bowen wrote: > I wanted to be certain that everyone has seen this message to > OpenStack-docs, and the subsequent conversation at > http://lists.openstack.org/pipermail/openstack-docs/2015-October/007622.html > > > This is quite serious, as Lana is basically saying that RDO isn't a > viable way to deploy OpenStack in Liberty, and so it's being removed > from the docs. > > It would be helpful if someone closer to Liberty packages, and Delorean, > could participate there in a constructive way to bring this to a happy > conclusion before the release tomorrow. > > Thanks. > > --Rich > > > -------- Forwarded Message -------- > Subject: [OpenStack-docs] [install-guide] Status of RDO > Date: Wed, 14 Oct 2015 16:22:45 +1000 > From: Lana Brindley > To: openstack-docs at lists.openstack.org > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hi everyone, > > We've been unable to obtain good pre-release packages from Red Hat for > the Fedora and Red Hat/CentOS repos, despite our best efforts. This has > left the RDO Install Guide in a largely untested state, so I don't feel > confident publishing it at this stage. > > As far as we can tell, Fedora are no longer planning on having > pre-release packages available, so this might be a permanent change for > that OS. For Red Hat/CentOS, it seems to be a temporary problem, so > hopefully we can get the packages, complete testing, and publish the > book soon. 
> > The patch to remove RDO is here, for anyone who cares to comment: > https://review.openstack.org/#/c/234584/ > > Lana > > - -- Lana Brindley > Technical Writer > Rackspace Cloud Builders Australia > http://lanabrindley.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ > > iQEcBAEBCAAGBQJWHfS1AAoJELppzVb4+KUyM7cH/ii5Ekz5vjTe3dTykXBUbWGt > bR2XJTAbS/mFB+xayecNNPLvgejI6Nxvk8msSFNnN7/ZyDNwr+eceQw7ftMKuJnR > h7qKBb6o5iayLJxgNRK3Kjo13NjGdaiXwfLTbB5br/aiP2HHsrDRexAcLteUCKGt > eHbZUEYqg4VADUvodxNpbZ+7fHuXrIRZoH4aDQ4+o1p0dCdw+vkjzF/MzPSgZFar > Rq9L94rpofDat9ymuW48c+SgUeOnmTvxwEN8ExTENNMXo4nUOJwcUS65J6XURO9K > RUGvjPmSmm7ZaQGE+koKyGZSzF/Oqoa+vBUwxdeQqmtr2tWo//jlUVV/PDc8QV0= > =rQp4 > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-docs mailing list > OpenStack-docs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From jrichar1 at ball.com Thu Oct 15 13:54:10 2015 From: jrichar1 at ball.com (Richards, Jeff) Date: Thu, 15 Oct 2015 13:54:10 +0000 Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <561EE408.6030302@redhat.com> References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com> So where is this official installation guide? The docs on this page: https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html Just show "openstack overcloud deploy --templates". Where is the best place to find more detailed info on overcload deploy CLI (other than --help)? Jeff Richards > -----Original Message----- > However, I want to first double-check that you had a valid command > line. You only show "openstack deploy overcloud --templates" in your > original email. You did have a full command-line, right? Refer to the > official installation guide for the right parameters. This message and any enclosures are intended only for the addressee. Please notify the sender by email if you are not the intended recipient. If you are not the intended recipient, you may not use, copy, disclose, or distribute this message or its contents or enclosures to any other person and any such actions may be unlawful. Ball reserves the right to monitor and review all messages and enclosures sent to or from this email address. From sasha at redhat.com Thu Oct 15 13:58:41 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Thu, 15 Oct 2015 09:58:41 -0400 (EDT) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> Just my 2 cents. Did you make sure that all the registered nodes are configured to boot off the right NIC first? Can you watch the console and see what happens on the problematic nodes upon boot? Best regards, Sasha Chuzhoy. 
----- Original Message ----- > From: "Esra Celik" > To: "Marius Cornea" > Cc: rdo-list at redhat.com > Sent: Thursday, October 15, 2015 4:40:46 AM > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" > > > Sorry for the late reply > > ironic node-show results are below. I have my nodes power on after > introspection bulk start. And I get the following warning > Introspection didn't finish for nodes > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 > > Doesn't seem to be the same issue with > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > [stack at undercloud ~]$ ironic node-list > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power State | Provisioning State | > | Maintenance | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | available | > | False | > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | available | > | False | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > +------------------------+-------------------------------------------------------------------------+ > | Property | Value | > +------------------------+-------------------------------------------------------------------------+ > | target_power_state | None | > | extra | {} | > | last_error | None | > | updated_at | 2015-10-15T08:26:42+00:00 | > | maintenance_reason | None | > | provision_state | available | > | clean_step | {} | > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > | console_enabled | False | > | target_provision_state | None | > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > | maintenance | False | > | inspection_started_at | None | > | inspection_finished_at | None | > | power_state | power on | > | driver | pxe_ipmitool | > | reservation | None | > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': > | u'10', | > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > | instance_uuid | None | > | name | None | > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > | u'192.168.0.18', | > | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- | > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > | | 0d88-4632-af98-8defb05ca6e2'} | > | created_at | 2015-10-15T07:49:08+00:00 | > | driver_internal_info | {u'clean_steps': None} | > | chassis_uuid | | > | instance_info | {} | > +------------------------+-------------------------------------------------------------------------+ > > > [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > +------------------------+-------------------------------------------------------------------------+ > | Property | Value | > +------------------------+-------------------------------------------------------------------------+ > | target_power_state | None | > | extra | {} | > | last_error | None | > | updated_at | 2015-10-15T08:26:42+00:00 | > | maintenance_reason | None | > | provision_state | available | > | clean_step | {} | > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > | console_enabled | False | > | target_provision_state | None | > | provision_updated_at | 2015-10-15T08:26:42+00:00 
| > | maintenance | False | > | inspection_started_at | None | > | inspection_finished_at | None | > | power_state | power on | > | driver | pxe_ipmitool | > | reservation | None | > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': > | u'100', | > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > | instance_uuid | None | > | name | None | > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > | u'192.168.0.19', | > | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- | > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > | | 0d88-4632-af98-8defb05ca6e2'} | > | created_at | 2015-10-15T07:49:08+00:00 | > | driver_internal_info | {u'clean_steps': None} | > | chassis_uuid | | > | instance_info | {} | > +------------------------+-------------------------------------------------------------------------+ > [stack at undercloud ~]$ > > > > > > > > > > And below I added my history for the stack user. I don't think I am doing > something other than > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty > doc > > > > > > > > 1 vi instackenv.json > 2 sudo yum -y install epel-release > 3 sudo curl -o /etc/yum.repos.d/delorean.repo > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' > /etc/yum.repos.d/delorean-current.repo > 6 sudo /bin/bash -c "cat <>/etc/yum.repos.d/delorean-current.repo > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules > EOF" > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > 8 sudo yum -y install yum-plugin-priorities > 9 sudo yum install -y python-tripleoclient > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf > 11 vi undercloud.conf > 12 export DIB_INSTALLTYPE_puppet_modules=source > 13 openstack undercloud install > 14 source stackrc > 15 export NODE_DIST=centos7 > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > /etc/yum.repos.d/delorean-deps.repo" > 17 export DIB_INSTALLTYPE_puppet_modules=source > 18 openstack overcloud image build --all > 19 ls > 20 openstack overcloud image upload > 21 openstack baremetal import --json instackenv.json > 22 openstack baremetal configure boot > 23 ironic node-list > 24 openstack baremetal introspection bulk start > 25 ironic node-list > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > 28 history > > > > > > > > Thanks > > > > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > > Kimden: "Marius Cornea" > Kime: "Esra Celik" > Kk: "Ignacio Bravo" , rdo-list at redhat.com > G?nderilenler: 14 Ekim ?ar?amba 2015 19:40:07 > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > found" > > Can you do ironic node-show for your ironic nodes and post the results? 
Also > check the following suggestion if you're experiencing the same issue: > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > ----- Original Message ----- > > From: "Esra Celik" > > To: "Marius Cornea" > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > Sent: Wednesday, October 14, 2015 3:22:20 PM > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > was found" > > > > > > > > Well in the early stage of the introspection I can see Client IP of nodes > > (screenshot attached). But then I see continuous ironic-python-agent errors > > (screenshot-2 attached). Errors repeat after time out.. And the nodes are > > not powered off. > > > > Seems like I am stuck in introspection stage.. > > > > I can use ipmitool command to successfully power on/off the nodes > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR > > -U > > root -R 3 -N 5 -P power status > > Chassis Power is on > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power status > > Chassis Power is on > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power off > > Chassis Power Control: Down/Off > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power status > > Chassis Power is off > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power on > > Chassis Power Control: Up/On > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > chassis power status > > Chassis Power is on > > > > > > Esra ?EL?K > > T?B?TAK B?LGEM > > www.bilgem.tubitak.gov.tr > > celik.esra at tubitak.gov.tr > > > > > > ----- Orijinal Mesaj ----- > > > > Kimden: "Marius Cornea" > > Kime: "Esra Celik" > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > G?nderilenler: 14 Ekim ?ar?amba 2015 14:59:30 > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > found" > > > > > > ----- Original Message ----- > > > From: "Esra Celik" > > > To: "Marius Cornea" > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > was found" > > > > > > > > > Well today I started with re-installing the OS and nothing seems wrong > > > with > > > undercloud installation, then; > > > > > > > > > > > > > > > > > > > > > I see an error during image build > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > > ... > > > a lot of log > > > ... > > > ++ cat /etc/dib_dracut_drivers > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk > > > ifconfig > > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell > > > rd.debug > > > rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net > > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock > > > target_core_file target_core_pscsi configfs' -o 'dash plymouth' > > > /tmp/ramdisk > > > cat: write error: Broken pipe > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > > + chmod o+r /tmp/kernel > > > + trap EXIT > > > + target_tag=99-build-dracut-ramdisk > > > + date +%s.%N > > > + output '99-build-dracut-ramdisk completed' > > > ... > > > a lot of log > > > ... 
> >
> > You can ignore that AFAIK; if you end up having all the required images it
> > should be OK.
> >
> > > Then, during the introspection stage I see ironic-python-agent errors on
> > > the nodes (screenshot attached) and the following warnings
> >
> > That looks odd. Is it showing up in the early stage of the introspection? At
> > some point it should receive an address by DHCP and the "Network is
> > unreachable" error should disappear. Does the introspection complete, and
> > are the nodes turned off afterwards?
> >
> > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> > > grep -i "warning\|error"
> > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > > Option "http_url" from group "pxe" is deprecated. Use option "http_url" from
> > > group "deploy".
> > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > > Option "http_root" from group "pxe" is deprecated. Use option "http_root"
> > > from group "deploy".
> > >
> > > Before deployment, ironic node-list:
> >
> > This is odd too, as I'm expecting the nodes to be powered off before running
> > the deployment.
> >
> > > [stack at undercloud ~]$ ironic node-list
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > >
> > > During deployment I get the following errors
> > >
> > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> > > grep -i "warning\|error"
> > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting
> > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f
> > > /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553.
> > > Error: Unexpected error while running command.
> > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for
> > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while
> > > running command.
> > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740
> > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not
> > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of
> > > 3. Error: IPMI call failed: power status..
> >
> > This looks like an ipmi error, can you try to manually run commands using the
> > ipmitool and see if you get any success?
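(For example, reusing the exact driver settings ironic has for the node -- a
sketch; substitute your real BMC password for <password>, since the actual
value is masked everywhere in this thread:

    ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -P <password> chassis power status
    ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -P <password> chassis power on
)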
> > It's also worth filing a bug with details such as the ipmitool version,
> > server model, and DRAC firmware version.
> >
> > > Thanks a lot
> > >
> > > ----- Original Message -----
> > >
> > > From: "Marius Cornea"
> > > To: "Esra Celik"
> > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > Sent: Tuesday, October 13, 2015 21:16:14
> > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No
> > > valid host was found"
> > >
> > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Marius Cornea"
> > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > host was found"
> > > >
> > > > During deployment they are powering on and deploying the images. I see a
> > > > lot of connection error messages about ironic-python-agent, but I ignore
> > > > them as mentioned here
> > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > >
> > > That was referring to the introspection stage. From what I can tell you are
> > > experiencing issues during deployment, as it fails to provision the nova
> > > instances. Can you check whether the nodes get powered on during that
> > > stage?
> > >
> > > Make sure that before overcloud deploy the ironic nodes are available for
> > > provisioning (ironic node-list, and check the provisioning state column).
> > > Also check that you didn't miss any step in the docs in regards to kernel
> > > and ramdisk assignment, introspection, and flavor creation (so it matches
> > > the nodes' resources):
> > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> > >
> > > > In the instackenv.json file I do not need to add the undercloud node, or
> > > > do I?
> > >
> > > No, the node details should be enough.
> > >
> > > > And which log files should I watch during deployment?
> > >
> > > You can check the openstack-ironic-conductor logs (journalctl -fl -u
> > > openstack-ironic-conductor.service) and the logs in /var/log/nova.
> > >
> > > > Thanks
> > > > Esra
> > > >
> > > > ----- Original Message -----
> > > > From: Marius Cornea
> > > > To: Esra Celik
> > > > Cc: Ignacio Bravo , rdo-list at redhat.com
> > > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > > > was found"
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Ignacio Bravo"
> > > > > Cc: rdo-list at redhat.com
> > > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > host was found"
> > > > >
> > > > > Actually I re-installed the OS for the Undercloud before deploying.
> > > > > However, I did not re-install the OS on the Compute and Controller
> > > > > nodes. I will reinstall the basic OS for them too, and retry.
> > > >
> > > > You don't need to reinstall the OS on the controller and compute, they
> > > > will get the image served by the undercloud. I'd recommend that during
> > > > deployment you watch the servers' consoles and make sure they get powered
> > > > on, PXE boot, and actually get the image deployed.
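(A convenient way to do that from the undercloud while the deploy runs -- a
sketch; the first command is the same watch invocation whose output appears
further down this thread:

    watch -n 10 'heat resource-list -n 5 overcloud | grep -v COMPLETE'
    journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
)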
> > > > > Thanks
> > > >
> > > > > Thanks
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > From: "Ignacio Bravo"
> > > > > To: "Esra Celik"
> > > > > Cc: rdo-list at redhat.com
> > > > > Sent: Tuesday, October 13, 2015 16:36:06
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > host was found"
> > > > >
> > > > > Esra,
> > > > >
> > > > > I encountered the same problem after deleting the stack and
> > > > > re-deploying.
> > > > >
> > > > > It turns out that 'heat stack-delete overcloud' does remove the nodes
> > > > > from 'nova list' and one would assume that the baremetal servers are
> > > > > now ready to be used for the next stack, but when redeploying, I get
> > > > > the same message of not enough hosts available.
> > > > >
> > > > > You can look into the nova logs and it mentions something about 'node
> > > > > xxx is already associated with UUID yyyy' and 'I tried 3 times and I'm
> > > > > giving up'. The issue is that the UUID yyyy belonged to a prior
> > > > > unsuccessful deployment.
> > > > >
> > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > >
> > > > > IB
> > > > >
> > > > > __
> > > > > Ignacio Bravo
> > > > > LTG Federal, Inc
> > > > > www.ltgfederal.com
> > > > > Office: (703) 951-7760
> > > > >
> > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > Deploying templates in the directory
> > > > > /usr/share/openstack-tripleo-heat-templates
> > > > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status
> > > > > ERROR due to "Message: No valid host was found. There are not enough
> > > > > hosts available., Code: 500"
> > > > > Heat Stack create failed.
> > > > >
> > > > > Here are some logs:
> > > > >
> > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > >
> > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > | resource_name | physical_resource_id                 | resource_type            | resource_status    | updated_time        | stack_name                                       |
> > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller  | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> > > > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute     | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> > > > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server         | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server         | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
> > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > >
> > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > | Property               | Value                                                                   |
> > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > | attributes             | {                                                                       |
> > > > > |                        |   "attributes": null,                                                   |
> > > > > |                        |   "refs": null                                                          |
> > > > > |                        | }                                                                       |
> > > > > | creation_time          | 2015-10-13T10:20:36                                                     |
> > > > > | description            |                                                                         |
> > > > > | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > > | logical_resource_id    | Compute                                                                 |
> > > > > | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393                                    |
> > > > > | required_by            | ComputeAllNodesDeployment                                               |
> > > > > |                        | ComputeNodesPostDeployment                                              |
> > > > > |                        | ComputeCephDeployment                                                   |
> > > > > |                        | ComputeAllNodesValidationDeployment                                     |
> > > > > |                        | AllNodesExtraConfig                                                     |
> > > > > |                        | allNodesConfig                                                          |
> > > > > | resource_name          | Compute                                                                 |
> > > > > | resource_status        | CREATE_FAILED                                                           |
> > > > > | resource_status_reason | resources.Compute: ResourceInError:                                     |
> > > > > |                        | resources[0].resources.NovaCompute: Went to status ERROR due to         |
> > > > > |                        | "Message: No valid host was found. There are not enough hosts           |
> > > > > |                        | available., Code: 500"                                                  |
> > > > > | resource_type          | OS::Heat::ResourceGroup                                                 |
> > > > > | updated_time           | 2015-10-13T10:20:36                                                     |
> > > > > +------------------------+-------------------------------------------------------------------------+
> > > > >
> > > > > This is my instackenv.json for 1 compute and 1 control node to be
> > > > > deployed.
> > > > >
> > > > > {
> > > > >   "nodes": [
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [ "08:9E:01:58:CC:A1" ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "10",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "calvin",
> > > > >       "pm_addr": "192.168.0.18"
> > > > >     },
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [ "08:9E:01:58:D0:3D" ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "100",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "calvin",
> > > > >       "pm_addr": "192.168.0.19"
> > > > >     }
> > > > >   ]
> > > > > }
> > > > >
> > > > > Any ideas?
> > > > > Thanks in advance
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ibravo at ltgfederal.com Thu Oct 15 14:13:40 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Thu, 15 Oct 2015 10:13:40 -0400
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
In-Reply-To: <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com>
Message-ID: <59DA35DC-C3B2-41B4-95F4-5655EB03D2CA@ltgfederal.com>

Jeff,

I do agree with you that the docs should have some examples of successful
deployments with basic scenarios:

1 controller + 1 compute: openstack deploy blahblahblah
HA: 3 controllers + 2 compute + 3 ceph: openstack deploy blahblahblah

In the meantime, take a look at this bugzilla with some information that
might be useful:
https://bugzilla.redhat.com/show_bug.cgi?id=1251533

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com

> On Oct 15, 2015, at 9:54 AM, Richards, Jeff wrote:
>
> So where is this official installation guide? The docs on this page:
>
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
>
> Just show "openstack overcloud deploy --templates".
>
> Where is the best place to find more detailed info on overcloud deploy CLI (other than --help)?
>
> Jeff Richards
>
>> -----Original Message-----
>> However, I want to first double-check that you had a valid command
>> line. You only show "openstack deploy overcloud --templates" in your
>> original email. You did have a full command-line, right? Refer to the
>> official installation guide for the right parameters.
>
> This message and any enclosures are intended only for the addressee. Please
> notify the sender by email if you are not the intended recipient. If you are
> not the intended recipient, you may not use, copy, disclose, or distribute this
> message or its contents or enclosures to any other person and any such actions
> may be unlawful. Ball reserves the right to monitor and review all messages
> and enclosures sent to or from this email address.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sgordon at redhat.com Thu Oct 15 14:43:20 2015 From: sgordon at redhat.com (Steve Gordon) Date: Thu, 15 Oct 2015 10:43:20 -0400 (EDT) Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: <561FA38F.6050508@redhat.com> References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> Message-ID: <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Rich Bowen" > To: rdo-list at redhat.com > Sent: Thursday, October 15, 2015 9:01:03 AM > Subject: Re: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO > > Just to follow up: > > http://docs.openstack.org/ > > Installation Guide for Debian 8 (not yet available) > > Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora > 22 (not yet available) > > > > --Rich To get this updated what is required is for someone to walk through the draft install guide [1] vetting each procedure on each target distro and updating the test matrix here: https://wiki.openstack.org/wiki/Documentation/LibertyDocTesting We also need to determine which "known issues" need to be resolved, and document + file bugs for any new ones that pop up (and ideally resolve them). In future as part of the RDO test day I think we should: a) Broadcast where the correct packages are more widely (e.g. include openstack-docs at lists.openstack.org in the distribution list for the test day). There seems to be a contention that they weren't available or were available but were mixed up with Mitaka packages which was true at a point in time but was quickly resolved (there was an issue with the config files being shipped though). b) Integrate the install guide test matrix into the test day so that we (the RDO community) can help drive vetting it earlier. Thanks, Steve [1] http://docs.openstack.org/draft/install-guide-rdo/ > On 10/14/2015 07:49 AM, Rich Bowen wrote: > > I wanted to be certain that everyone has seen this message to > > OpenStack-docs, and the subsequent conversation at > > http://lists.openstack.org/pipermail/openstack-docs/2015-October/007622.html > > > > > > This is quite serious, as Lana is basically saying that RDO isn't a > > viable way to deploy OpenStack in Liberty, and so it's being removed > > from the docs. > > > > It would be helpful if someone closer to Liberty packages, and Delorean, > > could participate there in a constructive way to bring this to a happy > > conclusion before the release tomorrow. > > > > Thanks. > > > > --Rich > > > > > > -------- Forwarded Message -------- > > Subject: [OpenStack-docs] [install-guide] Status of RDO > > Date: Wed, 14 Oct 2015 16:22:45 +1000 > > From: Lana Brindley > > To: openstack-docs at lists.openstack.org > > > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA256 > > > > Hi everyone, > > > > We've been unable to obtain good pre-release packages from Red Hat for > > the Fedora and Red Hat/CentOS repos, despite our best efforts. This has > > left the RDO Install Guide in a largely untested state, so I don't feel > > confident publishing it at this stage. > > > > As far as we can tell, Fedora are no longer planning on having > > pre-release packages available, so this might be a permanent change for > > that OS. For Red Hat/CentOS, it seems to be a temporary problem, so > > hopefully we can get the packages, complete testing, and publish the > > book soon. 
> >
> > The patch to remove RDO is here, for anyone who cares to comment:
> > https://review.openstack.org/#/c/234584/
> >
> > Lana
> >
> > - -- Lana Brindley
> > Technical Writer
> > Rackspace Cloud Builders Australia
> > http://lanabrindley.com
> > -----BEGIN PGP SIGNATURE-----
> > Version: GnuPG v2
> > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> >
> > iQEcBAEBCAAGBQJWHfS1AAoJELppzVb4+KUyM7cH/ii5Ekz5vjTe3dTykXBUbWGt
> > bR2XJTAbS/mFB+xayecNNPLvgejI6Nxvk8msSFNnN7/ZyDNwr+eceQw7ftMKuJnR
> > h7qKBb6o5iayLJxgNRK3Kjo13NjGdaiXwfLTbB5br/aiP2HHsrDRexAcLteUCKGt
> > eHbZUEYqg4VADUvodxNpbZ+7fHuXrIRZoH4aDQ4+o1p0dCdw+vkjzF/MzPSgZFar
> > Rq9L94rpofDat9ymuW48c+SgUeOnmTvxwEN8ExTENNMXo4nUOJwcUS65J6XURO9K
> > RUGvjPmSmm7ZaQGE+koKyGZSzF/Oqoa+vBUwxdeQqmtr2tWo//jlUVV/PDc8QV0=
> > =rQp4
> > -----END PGP SIGNATURE-----
> >
> > _______________________________________________
> > OpenStack-docs mailing list
> > OpenStack-docs at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
> >
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://rdoproject.org/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

From dsneddon at redhat.com Thu Oct 15 14:48:57 2015
From: dsneddon at redhat.com (Dan Sneddon)
Date: Thu, 15 Oct 2015 07:48:57 -0700
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
In-Reply-To: <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com>
Message-ID: <561FBCD9.4010209@redhat.com>

On 10/15/2015 06:54 AM, Richards, Jeff wrote:
> So where is this official installation guide? The docs on this page:
>
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
>
> Just show "openstack overcloud deploy --templates".
>
> Where is the best place to find more detailed info on overcloud deploy CLI (other than --help)?
>
> Jeff Richards
>
>> -----Original Message-----
>> However, I want to first double-check that you had a valid command
>> line. You only show "openstack deploy overcloud --templates" in your
>> original email. You did have a full command-line, right? Refer to the
>> official installation guide for the right parameters.
>
> This message and any enclosures are intended only for the addressee. Please
> notify the sender by email if you are not the intended recipient. If you are
> not the intended recipient, you may not use, copy, disclose, or distribute this
> message or its contents or enclosures to any other person and any such actions
> may be unlawful. Ball reserves the right to monitor and review all messages
> and enclosures sent to or from this email address.
> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > My apologies, you're absolutely right. I have submitted a bug for us to improve that section of the document: https://bugzilla.redhat.com/show_bug.cgi?id=1272144 Hopefully my example command lines will help you get closer to a successful deployment in the mean time. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From jrichar1 at ball.com Thu Oct 15 18:22:25 2015 From: jrichar1 at ball.com (Richards, Jeff) Date: Thu, 15 Oct 2015 18:22:25 +0000 Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <561FBCD9.4010209@redhat.com> References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com> <561FBCD9.4010209@redhat.com> Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B468B06BC@EX2010-DTN-03.AERO.BALL.com> Dan/Ignacio, Thanks a ton for those Bugzilla references, with that information in hand I have finally achieved a successful basic deployment! Now I can start tinkering with advanced configurations and get really dangerous! Jeff Richards > -----Original Message----- > From: Dan Sneddon [mailto:dsneddon at redhat.com] > > My apologies, you're absolutely right. I have submitted a bug for us to > improve that section of the document: > > Hopefully my example command lines will help you get closer to a > successful deployment in the mean time. This message and any enclosures are intended only for the addressee. Please notify the sender by email if you are not the intended recipient. If you are not the intended recipient, you may not use, copy, disclose, or distribute this message or its contents or enclosures to any other person and any such actions may be unlawful. Ball reserves the right to monitor and review all messages and enclosures sent to or from this email address. From ibravo at ltgfederal.com Thu Oct 15 18:26:48 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Thu, 15 Oct 2015 14:26:48 -0400 Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <6D1DB475E9650E4EADE65C051EFBB98B468B06BC@EX2010-DTN-03.AERO.BALL.com> References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com> <561FBCD9.4010209@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B06BC@EX2010-DTN-03.AERO.BALL.com> Message-ID: Jeff, Just know that HA is currently broken in rdo-manager based on conversations happening right now in irc. __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Oct 15, 2015, at 2:22 PM, Richards, Jeff wrote: > > Dan/Ignacio, > > Thanks a ton for those Bugzilla references, with that information in hand I have finally achieved a successful basic deployment! > > Now I can start tinkering with advanced configurations and get really dangerous! > > Jeff Richards > >> -----Original Message----- >> From: Dan Sneddon [mailto:dsneddon at redhat.com] >> >> My apologies, you're absolutely right. I have submitted a bug for us to >> improve that section of the document: >> >> Hopefully my example command lines will help you get closer to a >> successful deployment in the mean time. 
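(To make the request for examples concrete: using flags that the
python-tripleoclient deploy command exposes, the two basic scenarios sketched
earlier would look roughly like the lines below. This is only a sketch, not a
command line anyone in this thread has posted verbatim -- the scale counts are
illustrative, --timeout 90 follows Dan's and Sasha's advice, and the pacemaker
environment file for HA is per Marius' note further down the thread.

    openstack overcloud deploy --templates \
        --control-scale 1 --compute-scale 1 \
        --timeout 90

    openstack overcloud deploy --templates \
        --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
        -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
        --timeout 90
)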
> > > > This message and any enclosures are intended only for the addressee. Please > notify the sender by email if you are not the intended recipient. If you are > not the intended recipient, you may not use, copy, disclose, or distribute this > message or its contents or enclosures to any other person and any such actions > may be unlawful. Ball reserves the right to monitor and review all messages > and enclosures sent to or from this email address. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Thu Oct 15 19:09:40 2015 From: trown at redhat.com (John Trowbridge) Date: Thu, 15 Oct 2015 15:09:40 -0400 Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com> <561FBCD9.4010209@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B06BC@EX2010-DTN-03.AERO.BALL.com> Message-ID: <561FF9F4.3070201@redhat.com> On 10/15/2015 02:26 PM, Ignacio Bravo wrote: > Jeff, > > Just know that HA is currently broken in rdo-manager based on conversations happening right now in irc. > > Yep there is still one outstanding BZ blocking HA: https://bugzilla.redhat.com/show_bug.cgi?id=1271002 There is a patch upstream, but it was -2 until after upstream GA. However ceilometer folks are aware that it is a critical issue and have agreed to fix it ASAP after upstream GA is cut. I am cautiously optimistic it will make it to delorean before we GA RDO which is scheduled for next week. We will at the very least fix it in a patch in the official RDO liberty repo on centos. I have been able to get a HA deploy to succeed with only this one extra patch. > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > >> On Oct 15, 2015, at 2:22 PM, Richards, Jeff wrote: >> >> Dan/Ignacio, >> >> Thanks a ton for those Bugzilla references, with that information in hand I have finally achieved a successful basic deployment! >> >> Now I can start tinkering with advanced configurations and get really dangerous! Thanks for sticking with it Jeff. Was this a baremetal or virtual deploy? >> >> Jeff Richards >> >>> -----Original Message----- >>> From: Dan Sneddon [mailto:dsneddon at redhat.com] >>> >>> My apologies, you're absolutely right. I have submitted a bug for us to >>> improve that section of the document: >>> >>> Hopefully my example command lines will help you get closer to a >>> successful deployment in the mean time. >> >> >> >> This message and any enclosures are intended only for the addressee. Please >> notify the sender by email if you are not the intended recipient. If you are >> not the intended recipient, you may not use, copy, disclose, or distribute this >> message or its contents or enclosures to any other person and any such actions >> may be unlawful. Ball reserves the right to monitor and review all messages >> and enclosures sent to or from this email address. 
_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From erming at ualberta.ca Thu Oct 15 20:03:26 2015
From: erming at ualberta.ca (Erming Pei)
Date: Thu, 15 Oct 2015 14:03:26 -0600
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
In-Reply-To: <561EE408.6030302@redhat.com>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com>
Message-ID: <5620068E.1020202@ualberta.ca>

Hi Dan, Sasha,

Thanks for your answers and hints. I looked up the heat and other log files
and the stack/node status. The only thing I have found so far is "timed out";
I don't know the reason. IPMI looks good.

I tried with HEAT_INCLUDE_PASSWORD=1 but got the same error message (Please
try again with option --include-password or export HEAT_INCLUDE_PASSWORD=1
Authentication required).

BTW, I only followed the exact instruction shown in the guide (openstack
overcloud deploy --templates), with no more options. I thought this would be
good for a demo deployment. If it is not sufficient, which one should I
follow? I saw some of your discussions, but it is not very clear. Should I
follow the example from jliberma at redhat.com?

Below is my investigation.

By running: $ heat resource-list overcloud
I found that just the controller and compute resources failed: CREATE_FAILED
Checking the reason, it says: resource_status_reason | CREATE aborted

I then logged into the running overcloud nodes (e.g. the controller):

[heat-admin at overcloud-controller-0 ~]$ ifconfig
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::21:5eff:fecd:9df3  prefixlen 64  scopeid 0x20<link>
        ether 02:21:5e:cd:9d:f3  txqueuelen 0  (Ethernet)
        RX packets 29926  bytes 2364154 (2.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 81  bytes 25614 (25.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp0s29f0u2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::21:5eff:fecd:9df3  prefixlen 64  scopeid 0x20<link>
        ether 02:21:5e:cd:9d:f3  txqueuelen 1000  (Ethernet)
        RX packets 29956  bytes 1947140 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 102  bytes 28620 (27.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp11s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.6.64  netmask 255.255.0.0  broadcast 10.0.255.255
        inet6 fe80::221:5eff:fec9:abd8  prefixlen 64  scopeid 0x20<link>
        ether 00:21:5e:c9:ab:d8  txqueuelen 1000  (Ethernet)
        RX packets 66256  bytes 21109918 (20.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35938  bytes 4641202 (4.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp11s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::221:5eff:fec9:abda  prefixlen 64  scopeid 0x20<link>
        ether 00:21:5e:c9:ab:da  txqueuelen 1000  (Ethernet)
        RX packets 25429  bytes 2004574 (1.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 532 (532.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ib0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2044
        inet6 fe80::202:c902:23:baf9  prefixlen 64  scopeid 0x20<link>
        Infiniband hardware address can be incorrect! Please read BUGS section in ifconfig(8).
        infiniband 80:00:04:04:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  txqueuelen 256  (InfiniBand)
        RX packets 183678  bytes 10292768 (9.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17  bytes 5380 (5.2 KiB)
        TX errors 0  dropped 7  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 138  bytes 11792 (11.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 138  bytes 11792 (11.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[heat-admin at overcloud-controller-0 ~]$ ovs-vsctl show
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)
[heat-admin at overcloud-controller-0 ~]$ sudo ovs-vsctl show
76e6f8a7-88cf-4920-b133-b4d15a4b9092
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp0s29f0u2"
            Interface "enp0s29f0u2"
    ovs_version: "2.3.1"

[heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.65
PING 10.0.6.65 (10.0.6.65) 56(84) bytes of data.
64 bytes from 10.0.6.65: icmp_seq=1 ttl=64 time=0.176 ms
64 bytes from 10.0.6.65: icmp_seq=2 ttl=64 time=0.195 ms
^C
--- 10.0.6.65 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.176/0.185/0.195/0.016 ms

[heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.64
PING 10.0.6.64 (10.0.6.64) 56(84) bytes of data.
64 bytes from 10.0.6.64: icmp_seq=1 ttl=64 time=0.015 ms
^C
--- 10.0.6.64 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms

[heat-admin at overcloud-controller-0 ~]$ cat /etc/os-net-config/config.json
{"network_config": [{"use_dhcp": true, "type": "ovs_bridge", "name": "br-ex", "members": [{"type": "interface", "name": "nic1", "primary": true}]}]}

[heat-admin at overcloud-controller-0 ~]$ sudo os-net-config --debug -c /etc/os-net-config/config.json
[2015/10/15 07:52:08 PM] [INFO] Using config file at: /etc/os-net-config/config.json
[2015/10/15 07:52:08 PM] [INFO] Using mapping file at: /etc/os-net-config/mapping.yaml
[2015/10/15 07:52:08 PM] [INFO] Ifcfg net config provider created.
[2015/10/15 07:52:08 PM] [DEBUG] network_config JSON: [{'use_dhcp': True, 'type': 'ovs_bridge', 'name': 'br-ex', 'members': [{'type': 'interface', 'name': 'nic1', 'primary': True}]}]
[2015/10/15 07:52:08 PM] [INFO] nic1 mapped to: enp0s29f0u2
[2015/10/15 07:52:08 PM] [INFO] nic2 mapped to: enp11s0f0
[2015/10/15 07:52:08 PM] [INFO] nic3 mapped to: enp11s0f1
[2015/10/15 07:52:08 PM] [INFO] nic4 mapped to: ib0
[2015/10/15 07:52:08 PM] [INFO] adding bridge: br-ex
[2015/10/15 07:52:08 PM] [DEBUG] bridge data: DEVICE=br-ex
ONBOOT=yes
HOTPLUG=no
DEVICETYPE=ovs
TYPE=OVSBridge
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES="enp0s29f0u2"
OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3"

[2015/10/15 07:52:08 PM] [INFO] adding interface: enp0s29f0u2
[2015/10/15 07:52:08 PM] [DEBUG] interface data: DEVICE=enp0s29f0u2
ONBOOT=yes
HOTPLUG=no
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
BOOTPROTO=none

[2015/10/15 07:52:08 PM] [INFO] applying network configs...
[2015/10/15 07:52:08 PM] [DEBUG] Diff file data: DEVICE=enp0s29f0u2 ONBOOT=yes HOTPLUG=no DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-ex BOOTPROTO=none [2015/10/15 07:52:08 PM] [DEBUG] Diff data: DEVICE=enp0s29f0u2 ONBOOT=yes HOTPLUG=no DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-ex BOOTPROTO=none [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: [2015/10/15 07:52:08 PM] [DEBUG] Diff data: [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: DEVICE=br-ex ONBOOT=yes HOTPLUG=no DEVICETYPE=ovs TYPE=OVSBridge OVSBOOTPROTO=dhcp OVSDHCPINTERFACES="enp0s29f0u2" OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" [2015/10/15 07:52:08 PM] [DEBUG] Diff data: DEVICE=br-ex ONBOOT=yes HOTPLUG=no DEVICETYPE=ovs TYPE=OVSBridge OVSBOOTPROTO=dhcp OVSDHCPINTERFACES="enp0s29f0u2" OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: [2015/10/15 07:52:08 PM] [DEBUG] Diff data: [heat-admin at overcloud-controller-0 ~]$ openstack-status == Nova services == openstack-nova-api: inactive (disabled on boot) openstack-nova-cert: inactive (disabled on boot) openstack-nova-compute: inactive (disabled on boot) openstack-nova-network: inactive (disabled on boot) openstack-nova-scheduler: inactive (disabled on boot) openstack-nova-conductor: inactive (disabled on boot) == Glance services == openstack-glance-api: inactive (disabled on boot) openstack-glance-registry: inactive (disabled on boot) == Keystone service == openstack-keystone: inactive (disabled on boot) == Horizon service == openstack-dashboard: uncontactable == neutron services == neutron-server: inactive (disabled on boot) neutron-dhcp-agent: inactive (disabled on boot) neutron-l3-agent: inactive (disabled on boot) neutron-metadata-agent: inactive (disabled on boot) neutron-lbaas-agent: inactive (disabled on boot) neutron-openvswitch-agent: inactive (disabled on boot) neutron-metering-agent: inactive (disabled on boot) == Swift services == openstack-swift-proxy: inactive (disabled on boot) openstack-swift-account: inactive (disabled on boot) openstack-swift-container: inactive (disabled on boot) openstack-swift-object: inactive (disabled on boot) == Cinder services == openstack-cinder-api: inactive (disabled on boot) openstack-cinder-scheduler: inactive (disabled on boot) openstack-cinder-volume: inactive (disabled on boot) openstack-cinder-backup: inactive (disabled on boot) == Ceilometer services == openstack-ceilometer-api: inactive (disabled on boot) openstack-ceilometer-central: inactive (disabled on boot) openstack-ceilometer-compute: inactive (disabled on boot) openstack-ceilometer-collector: inactive (disabled on boot) openstack-ceilometer-alarm-notifier: inactive (disabled on boot) openstack-ceilometer-alarm-evaluator: inactive (disabled on boot) openstack-ceilometer-notification: inactive (disabled on boot) == Heat services == openstack-heat-api: inactive (disabled on boot) openstack-heat-api-cfn: inactive (disabled on boot) openstack-heat-api-cloudwatch: inactive (disabled on boot) openstack-heat-engine: inactive (disabled on boot) == Support services == libvirtd: active openvswitch: active dbus: active rabbitmq-server: inactive (disabled on boot) memcached: inactive (disabled on boot) == Keystone users == Warning keystonerc not sourced Thanks, Erming On 10/14/15, 5:23 PM, Dan Sneddon wrote: > On 10/14/2015 03:03 PM, Erming Pei wrote: >> Hi, >> >> I am deploying the overcloud in baremetal way and after a couple of >> hours, it showed: >> >> $ openstack overcloud deploy --templates 
>> Deploying templates in the directory
>> /usr/share/openstack-tripleo-heat-templates
>> ERROR: openstack ERROR: Authentication failed. Please try again
>> with option --include-password or export HEAT_INCLUDE_PASSWORD=1
>> Authentication required
>>
>> But I checked the nodes are now running:
>>
>> [stack at gcloudcon-3 ~]$ nova list
>> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
>> | ID                                   | Name                    | Status | Task State | Power State | Networks           |
>> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
>> | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=10.0.6.60 |
>> | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=10.0.6.61 |
>> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
>>
>> 1. Should I re-deploy the nodes, or is there a way to update/make up
>> for the authentication issue?
>>
>> 2. I don't know how to access the nodes.
>> There is not an overcloudrc file produced.
>>
>> $ ls overcloud*
>> overcloud-env.json overcloud-full.initrd overcloud-full.qcow2
>> overcloud-full.vmlinuz
>>
>> overcloud-full.d:
>> dib-manifests
>>
>> Is it via ssh key or password? Should I set the authentication method
>> somewhere?
>>
>> Thanks,
>>
>> Erming
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com

> This error generally means that something in the deployment got stuck,
> and the deployment hung until the token expired after 4 hours. When
> that happens, there is no overcloudrc generated (because there is not a
> working overcloud). You won't be able to recover with a stack update,
> you'll need to perform a stack-delete and redeploy once you know what
> went wrong.
>
> Generally a deployment shouldn't take anywhere near that long, a bare
> metal deployment with 6 hosts takes me less than an hour, and less than
> 2 including a Ceph deployment. In fact, I usually set a timeout using
> the --timeout option, because if it hasn't finished after, say 90
> minutes (depending on how complicated the deployment is), then I want
> it to bomb out so I can diagnose what went wrong and redeploy.
>
> Often when a deployment times out it is because there were connectivity
> issues between the nodes. Since you can log in to the hosts, you might
> want to do some basic network troubleshooting, such as:
>
> $ ip address  # check to see that all the interfaces are there, and
> that the IP addresses have been assigned
>
> $ sudo ovs-vsctl show  # make sure that the bridges have the proper
> interfaces, vlans, and that all the expected bridges show up
>
> $ ping  # you can try this on all VLANs to make
> sure that any VLAN trunks are working properly
>
> $ sudo ovs-appctl bond/show  # if running bonding, check to see the
> bond status
>
> $ sudo os-net-config --debug -c /etc/os-net-config/config.json  # run
> the network configuration script again to make sure that it is able to
> configure the interfaces without error. WARNING, MAY BE DISRUPTIVE as
> this will reset the network interfaces, run on console if possible.
>
> However, I want to first double-check that you had a valid command line.
You only show "openstack deploy overcloud --templates" in your > original email. You did have a full command-line, right? Refer to the > official installation guide for the right parameters. > -- --------------------------------------------- Erming Pei, Ph.D Senior System Analyst; Grid/Cloud Specialist Research Computing Group Information Services & Technology University of Alberta, Canada Tel: +1 7804929914 Fax: +1 7804921729 --------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sasha at redhat.com Thu Oct 15 21:19:46 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Thu, 15 Oct 2015 17:19:46 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <5620068E.1020202@ualberta.ca> References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <5620068E.1020202@ualberta.ca> Message-ID: <1377476884.58530663.1444943986933.JavaMail.zimbra@redhat.com> Hi Erming, You can also check the log files on nodes for errors (start with /var/log/messages). if things are working, "openstack overcloud deploy --template" will create a nonHA deployment without network isolation consisting of 1 controller and 1 compute. I usually add "--timeout 90", as this period of time is sufficient on my setup for deploying the overcloud. Seeing the IP being different than 192.0.2.x, I wonder what other changes were made to the undercloud.conf? Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Erming Pei" > To: "Dan Sneddon" , rdo-list at redhat.com > Sent: Thursday, October 15, 2015 4:03:26 PM > Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment > > Hi Dan, Sasha, > > Thanks for your answers and hints. > I looked up the heat/etc log files and stack/node status. > Only thing I found by far is "timed out". I don't know what's the reason. > IPMI looks good. > > Tried with HEAT_INCLUDE_PASSWORD=1 but same error message (Please try again > with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required) > > BTW. I only followed the exact instruction as shown in the guide: (openstack > overcloud deploy --templates) No more options. I thought this is good for a > demo deployment. If not sufficient, which one I should follow? See some of > your discussions, but not very clear. Should I follow the example from > jliberma at redhat.com ? > > Below are my investigation: > By runnig: $ heat resource-list overcloud > Found that just controller and compute are failed: CREATE_FAILED > > Checked the reason it says: resource_status_reason | CREATE aborted > > I then logged into the running overcloud nodes (e.g. 
the controller): > > > [heat-admin at overcloud-controller-0 ~]$ ifconfig > br-ex: flags=4163 mtu 1500 > inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 > ether 02:21:5e:cd:9d:f3 txqueuelen 0 (Ethernet) > RX packets 29926 bytes 2364154 (2.2 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 81 bytes 25614 (25.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp0s29f0u2: flags=4163 mtu 1500 > inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 > ether 02:21:5e:cd:9d:f3 txqueuelen 1000 (Ethernet) > RX packets 29956 bytes 1947140 (1.8 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 102 bytes 28620 (27.9 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp11s0f0: flags=4163 mtu 1500 > inet 10.0.6.64 netmask 255.255.0.0 broadcast 10.0.255.255 > inet6 fe80::221:5eff:fec9:abd8 prefixlen 64 scopeid 0x20 > ether 00:21:5e:c9:ab:d8 txqueuelen 1000 (Ethernet) > RX packets 66256 bytes 21109918 (20.1 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 35938 bytes 4641202 (4.4 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp11s0f1: flags=4163 mtu 1500 > inet6 fe80::221:5eff:fec9:abda prefixlen 64 scopeid 0x20 > ether 00:21:5e:c9:ab:da txqueuelen 1000 (Ethernet) > RX packets 25429 bytes 2004574 (1.9 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 6 bytes 532 (532.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > ib0: flags=4163 mtu 2044 > inet6 fe80::202:c902:23:baf9 prefixlen 64 scopeid 0x20 > Infiniband hardware address can be incorrect! Please read BUGS section in > ifconfig(8). > infiniband 80:00:04:04:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00 > txqueuelen 256 (InfiniBand) > RX packets 183678 bytes 10292768 (9.8 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 17 bytes 5380 (5.2 KiB) > TX errors 0 dropped 7 overruns 0 carrier 0 collisions 0 > > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 138 bytes 11792 (11.5 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 138 bytes 11792 (11.5 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > [heat-admin at overcloud-controller-0 ~]$ ovs-vsctl show > ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed > (Permission denied) > [heat-admin at overcloud-controller-0 ~]$ sudo ovs-vsctl show > 76e6f8a7-88cf-4920-b133-b4d15a4b9092 > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Port "enp0s29f0u2" > Interface "enp0s29f0u2" > ovs_version: "2.3.1" > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.65 > PING 10.0.6.65 (10.0.6.65) 56(84) bytes of data. > 64 bytes from 10.0.6.65: icmp_seq=1 ttl=64 time=0.176 ms > 64 bytes from 10.0.6.65: icmp_seq=2 ttl=64 time=0.195 ms > ^C > --- 10.0.6.65 ping statistics --- > 2 packets transmitted, 2 received, 0% packet loss, time 999ms > rtt min/avg/max/mdev = 0.176/0.185/0.195/0.016 ms > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.64 > PING 10.0.6.64 (10.0.6.64) 56(84) bytes of data. 
> 64 bytes from 10.0.6.64: icmp_seq=1 ttl=64 time=0.015 ms > ^C > --- 10.0.6.64 ping statistics --- > 1 packets transmitted, 1 received, 0% packet loss, time 0ms > rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms > > [heat-admin at overcloud-controller-0 ~]$ cat /etc/os-net-config/config.json > {"network_config": [{"use_dhcp": true, "type": "ovs_bridge", "name": "br-ex", > "members": [{"type": "interface", "name": "nic1", "primary": true}]}]} > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ sudo os-net-config --debug -c > /etc/os-net-config/config.json > [2015/10/15 07:52:08 PM] [INFO] Using config file at: > /etc/os-net-config/config.json > [2015/10/15 07:52:08 PM] [INFO] Using mapping file at: > /etc/os-net-config/mapping.yaml > [2015/10/15 07:52:08 PM] [INFO] Ifcfg net config provider created. > [2015/10/15 07:52:08 PM] [DEBUG] network_config JSON: [{'use_dhcp': True, > 'type': 'ovs_bridge', 'name': 'br-ex', 'members': [{'type': 'interface', > 'name': 'nic1', 'primary': True}]}] > [2015/10/15 07:52:08 PM] [INFO] nic1 mapped to: enp0s29f0u2 > [2015/10/15 07:52:08 PM] [INFO] nic2 mapped to: enp11s0f0 > [2015/10/15 07:52:08 PM] [INFO] nic3 mapped to: enp11s0f1 > [2015/10/15 07:52:08 PM] [INFO] nic4 mapped to: ib0 > [2015/10/15 07:52:08 PM] [INFO] adding bridge: br-ex > [2015/10/15 07:52:08 PM] [DEBUG] bridge data: DEVICE=br-ex > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSBridge > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES="enp0s29f0u2" > OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > > [2015/10/15 07:52:08 PM] [INFO] adding interface: enp0s29f0u2 > [2015/10/15 07:52:08 PM] [DEBUG] interface data: DEVICE=enp0s29f0u2 > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > BOOTPROTO=none > > [2015/10/15 07:52:08 PM] [INFO] applying network configs... 
> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > DEVICE=enp0s29f0u2 > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > BOOTPROTO=none > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > DEVICE=enp0s29f0u2 > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > BOOTPROTO=none > > [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > > [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > DEVICE=br-ex > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSBridge > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES="enp0s29f0u2" > OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > DEVICE=br-ex > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSBridge > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES="enp0s29f0u2" > OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > > [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > > > > [heat-admin at overcloud-controller-0 ~]$ openstack-status > == Nova services == > openstack-nova-api: inactive (disabled on boot) > openstack-nova-cert: inactive (disabled on boot) > openstack-nova-compute: inactive (disabled on boot) > openstack-nova-network: inactive (disabled on boot) > openstack-nova-scheduler: inactive (disabled on boot) > openstack-nova-conductor: inactive (disabled on boot) > == Glance services == > openstack-glance-api: inactive (disabled on boot) > openstack-glance-registry: inactive (disabled on boot) > == Keystone service == > openstack-keystone: inactive (disabled on boot) > == Horizon service == > openstack-dashboard: uncontactable > == neutron services == > neutron-server: inactive (disabled on boot) > neutron-dhcp-agent: inactive (disabled on boot) > neutron-l3-agent: inactive (disabled on boot) > neutron-metadata-agent: inactive (disabled on boot) > neutron-lbaas-agent: inactive (disabled on boot) > neutron-openvswitch-agent: inactive (disabled on boot) > neutron-metering-agent: inactive (disabled on boot) > == Swift services == > openstack-swift-proxy: inactive (disabled on boot) > openstack-swift-account: inactive (disabled on boot) > openstack-swift-container: inactive (disabled on boot) > openstack-swift-object: inactive (disabled on boot) > == Cinder services == > openstack-cinder-api: inactive (disabled on boot) > openstack-cinder-scheduler: inactive (disabled on boot) > openstack-cinder-volume: inactive (disabled on boot) > openstack-cinder-backup: inactive (disabled on boot) > == Ceilometer services == > openstack-ceilometer-api: inactive (disabled on boot) > openstack-ceilometer-central: inactive (disabled on boot) > openstack-ceilometer-compute: inactive (disabled on boot) > openstack-ceilometer-collector: inactive (disabled on boot) > openstack-ceilometer-alarm-notifier: inactive (disabled on boot) > openstack-ceilometer-alarm-evaluator: inactive (disabled on boot) > openstack-ceilometer-notification: inactive (disabled on boot) > == Heat services == > openstack-heat-api: inactive (disabled on boot) > openstack-heat-api-cfn: inactive (disabled on boot) > openstack-heat-api-cloudwatch: inactive (disabled on boot) > openstack-heat-engine: inactive (disabled on boot) > == Support services == > libvirtd: active > openvswitch: active > dbus: active > rabbitmq-server: inactive (disabled on boot) > memcached: inactive (disabled on boot) > == Keystone users == > Warning keystonerc not sourced > > > > > Thanks, > > Erming > > > On 
10/14/15, 5:23 PM, Dan Sneddon wrote: > > > > On 10/14/2015 03:03 PM, Erming Pei wrote: > > > > Hi, > > I am deploying the overcloud in baremetal way and after a couple of > hours, it showed: > > $ openstack overcloud deploy --templates > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again > with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required > > > But I checked the nodes are now running: > > [stack at gcloudcon-3 ~]$ nova list > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > > | ID | Name | > Status | Task State | Power State | Networks | > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > > | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | > ACTIVE | - | Running | ctlplane=10.0.6.60 | > | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | > ACTIVE | - | Running | ctlplane=10.0.6.61 | > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > > > 1. Should I re-deploy the nodes or there is a way to do update/makeup > for the authentication issue? > > 2. > I don't know how to access to the nodes. > There is not an overcloudrc file produced. > > $ ls overcloud* > overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 > overcloud-full.vmlinuz > > overcloud-full.d: > dib-manifests > > Is it via ssh key or password? Should I set the authentication method > somewhere? > > > > Thanks, > > Erming > > > _______________________________________________ > Rdo-list mailing list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: > rdo-list-unsubscribe at redhat.com > This error generally means that something in the deployment got stuck, > and the deployment hung until the token expired after 4 hours. When > that happens, there is no overcloudrc generated (because there is not a > working overcloud). You won't be able to recover with a stack update, > you'll need to perform a stack-delete and redeploy once you know what > went wrong. > > Generally a deployment shouldn't take anywhere near that long, a bare > metal deployment with 6 hosts takes me less than an hour, and less than > 2 including a Ceph deployment. In fact, I usually set a timeout using > the --timeout option, because if it hasn't finished after, say 90 > minutes (depending on how complicated the deployment is), then I want > it to bomb out so I can diagnose what went wrong and redeploy. > > Often when a deployment times out it is because there were connectivity > issues between the nodes. Since you can log in to the hosts, you might > want to do some basic network troubleshooting, such as: > > $ ip address # check to see that all the interfaces are there, and > that the IP addresses have been assigned > > $ sudo ovs-vsctl show # make sure that the bridges have the proper > interfaces, vlans, and that all the expected bridges show up > > $ ping # you can try this on all VLANs to make > sure that any VLAN trunks are working properly > > $ sudo ovs-appctl bond/show # if running bonding, check to see the > bond status > > $ sudo os-net-config --debug -c /etc/os-net-config/config.json # run > the network configuration script again to make sure that it is able to > configure the interfaces without error. 
>
> However, I want to first double-check that you had a valid command
> line. You only show "openstack overcloud deploy --templates" in your
> original email. You did have a full command line, right? Refer to the
> official installation guide for the right parameters.
>
> --
> ---------------------------------------------
> Erming Pei, Ph.D
> Senior System Analyst; Grid/Cloud Specialist
>
> Research Computing Group
> Information Services & Technology
> University of Alberta, Canada
>
> Tel: +1 7804929914   Fax: +1 7804921729
> ---------------------------------------------
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mcornea at redhat.com Thu Oct 15 22:37:18 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Thu, 15 Oct 2015 18:37:18 -0400 (EDT)
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
In-Reply-To: <561FF9F4.3070201@redhat.com>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B0642@EX2010-DTN-03.AERO.BALL.com> <561FBCD9.4010209@redhat.com> <6D1DB475E9650E4EADE65C051EFBB98B468B06BC@EX2010-DTN-03.AERO.BALL.com> <561FF9F4.3070201@redhat.com>
Message-ID: <764918597.42932159.1444948638806.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "John Trowbridge"
> To: "Ignacio Bravo" , "Jeff Richards"
> Cc: rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 9:09:40 PM
> Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
>
> On 10/15/2015 02:26 PM, Ignacio Bravo wrote:
> > Jeff,
> >
> > Just know that HA is currently broken in rdo-manager, based on
> > conversations happening right now on IRC.
>
> Yep, there is still one outstanding BZ blocking HA:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1271002
>
> There is a patch upstream, but it was -2 until after upstream GA.
> However, the ceilometer folks are aware that it is a critical issue and
> have agreed to fix it ASAP after upstream GA is cut.
>
> I am cautiously optimistic it will make it to delorean before we GA RDO,
> which is scheduled for next week. We will at the very least fix it in a
> patch in the official RDO Liberty repo on CentOS.
>
> I have been able to get an HA deploy to succeed with only this one extra
> patch.

A quick note about the HA deployment: you need to pass
'-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml'
to the deploy command. I submitted a docs patch for this:
https://review.openstack.org/#/c/235597/
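For reference, a sketch of the resulting HA deploy invocation (the scale
counts, NTP server, and timeout below are illustrative, not from the thread):

  openstack overcloud deploy --templates \
    --control-scale 3 --compute-scale 1 \
    -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
    --ntp-server pool.ntp.org --timeout 90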
> > __
> > Ignacio Bravo
> > LTG Federal, Inc
> > www.ltgfederal.com
> > Office: (703) 951-7760
>
> >> On Oct 15, 2015, at 2:22 PM, Richards, Jeff wrote:
> >>
> >> Dan/Ignacio,
> >>
> >> Thanks a ton for those Bugzilla references; with that information in
> >> hand I have finally achieved a successful basic deployment!
> >>
> >> Now I can start tinkering with advanced configurations and get really
> >> dangerous!
>
> Thanks for sticking with it, Jeff. Was this a baremetal or virtual deploy?
>
> >> Jeff Richards
> >>
> >>> -----Original Message-----
> >>> From: Dan Sneddon [mailto:dsneddon at redhat.com]
> >>>
> >>> My apologies, you're absolutely right. I have submitted a bug for us
> >>> to improve that section of the document:
> >>>
> >>> Hopefully my example command lines will help you get closer to a
> >>> successful deployment in the meantime.
> >>
> >> This message and any enclosures are intended only for the addressee.
> >> Please notify the sender by email if you are not the intended
> >> recipient. If you are not the intended recipient, you may not use,
> >> copy, disclose, or distribute this message or its contents or
> >> enclosures to any other person, and any such actions may be unlawful.
> >> Ball reserves the right to monitor and review all messages and
> >> enclosures sent to or from this email address.
> >>
> >> _______________________________________________
> >> Rdo-list mailing list
> >> Rdo-list at redhat.com
> >> https://www.redhat.com/mailman/listinfo/rdo-list
> >>
> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From erming at ualberta.ca Thu Oct 15 22:56:17 2015
From: erming at ualberta.ca (Erming Pei)
Date: Thu, 15 Oct 2015 16:56:17 -0600
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
In-Reply-To: <1377476884.58530663.1444943986933.JavaMail.zimbra@redhat.com>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <5620068E.1020202@ualberta.ca> <1377476884.58530663.1444943986933.JavaMail.zimbra@redhat.com>
Message-ID: <56202F11.4020505@ualberta.ca>

Hi Sasha,

I checked the system logs and see many errors like these:

Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133 8516 WARNING os_collect_config.ec2 [-] ('Connection aborted.', error(113, 'No route to host'))
Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133 8516 WARNING os-collect-config [-] Source [ec2] Unavailable.
Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.heat [-] No auth_url configured.
Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.request [-] No metadata_url configured.
Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
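The "No route to host" on the ec2 source indicates the node cannot reach the
metadata service it is polling. A quick check from the failing node, as a
sketch (169.254.169.254 is the standard EC2 metadata address that the ec2
collector uses, and the config path is the usual TripleO location; both are
assumptions here, not details taken from the thread):

  # Is there a route to the EC2 metadata address, and does it answer?
  ip route get 169.254.169.254
  curl -sv http://169.254.169.254/2009-04-04/meta-data/ | head
  # Were the heat/request collector URLs ever written by the deployment?
  sudo cat /etc/os-collect-config.conf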
Below is my undercloud.conf (masked passwords).

[stack at gcloudcon-3 ~]$ cat undercloud.conf
[DEFAULT]

#
# From instack-undercloud
#

# Local file path to the necessary images. The path should be a
# directory readable by the current user that contains the full set of
# images. (string value)
#image_path = .
image_path = /gcloud/images

# IP information for the interface on the Undercloud that will be
# handling the PXE boots and DHCP for Overcloud instances. The IP
# portion of the value will be assigned to the network interface
# defined by local_interface, with the netmask defined by the prefix
# portion of the value. (string value)
#local_ip = 192.0.2.1/24
local_ip = 10.0.6.40/16

# Network interface on the Undercloud that will be handling the PXE
# boots and DHCP for Overcloud instances. (string value)
#local_interface = eth1
local_interface = eth0

# Network that will be masqueraded for external access, if required.
# This should be the subnet used for PXE booting. (string value)
#masquerade_network = 192.0.2.0/24
masquerade_network = 10.0.6.0/16

# Start of DHCP allocation range for PXE and DHCP of Overcloud
# instances. (string value)
#dhcp_start = 192.0.2.5
dhcp_start = 10.0.6.50

# End of DHCP allocation range for PXE and DHCP of Overcloud
# instances. (string value)
#dhcp_end = 192.0.2.24
dhcp_end = 10.0.6.250

# Network CIDR for the Neutron-managed network for Overcloud
# instances. This should be the subnet used for PXE booting. (string
# value)
#network_cidr = 192.0.2.0/24
network_cidr = 10.0.6.0/16

# Network gateway for the Neutron-managed network for Overcloud
# instances. This should match the local_ip above when using
# masquerading. (string value)
#network_gateway = 192.0.2.1
network_gateway = 10.0.6.40

# Network interface on which discovery dnsmasq will listen. If in
# doubt, use the default value. (string value)
#discovery_interface = br-ctlplane

# Temporary IP range that will be given to nodes during the discovery
# process. Should not overlap with the range defined by dhcp_start
# and dhcp_end, but should be in the same network. (string value)
#discovery_iprange = 192.0.2.100,192.0.2.120
discovery_iprange = 10.0.6.251,10.0.6.252

# Whether to run benchmarks when discovering nodes. (boolean value)
#discovery_runbench = false

# Whether to enable the debug log level for Undercloud OpenStack
# services. (boolean value)
undercloud_debug = true


[auth]

#
# From instack-undercloud
#

# Password used for MySQL databases. If left unset, one will be
# automatically generated. (string value)
undercloud_db_password = xxxxxxxxxxxxxx

# Keystone admin token. If left unset, one will be automatically
# generated. (string value)
#undercloud_admin_token =

# Keystone admin password. If left unset, one will be automatically
# generated. (string value)
undercloud_admin_password = xxxxxxxxxxxxxx

# Glance service password. If left unset, one will be automatically
# generated. (string value)
undercloud_glance_password = xxxxxxxxxxxxxx

# Heat db encryption key (must be 8, 16 or 32 characters). If left
# unset, one will be automatically generated. (string value)
#undercloud_heat_encryption_key =

# Heat service password. If left unset, one will be automatically
# generated. (string value)
undercloud_heat_password = xxxxxxxxxxxxxx

# Neutron service password. If left unset, one will be automatically
# generated. (string value)
undercloud_neutron_password = xxxxxxxxxxxxxx

# Nova service password. If left unset, one will be automatically
# generated. (string value)
undercloud_nova_password = xxxxxxxxxxxxxx

# Ironic service password. If left unset, one will be automatically
# generated. (string value)
undercloud_ironic_password = xxxxxxxxxxxxxx

# Tuskar service password. If left unset, one will be automatically
# generated. (string value)
undercloud_tuskar_password = xxxxxxxxxxxxxx

# Ceilometer service password. If left unset, one will be
# automatically generated. (string value)
undercloud_ceilometer_password = xxxxxxxxxxxxxx

# Ceilometer metering secret. If left unset, one will be automatically
# generated. (string value)
#undercloud_ceilometer_metering_secret =

# Ceilometer snmpd user. If left unset, one will be automatically
# generated. (string value)
undercloud_ceilometer_snmpd_user = ceilometer

# Ceilometer snmpd password. If left unset, one will be automatically
# generated. (string value)
undercloud_ceilometer_snmpd_password = xxxxxxxxxxxxxx

# Swift service password. If left unset, one will be automatically
# generated. (string value)
undercloud_swift_password = xxxxxxxxxxxxxx

# Rabbitmq cookie. If left unset, one will be automatically generated.
# (string value)
#undercloud_rabbit_cookie =

# Rabbitmq password. If left unset, one will be automatically
# generated. (string value)
undercloud_rabbit_password = xxxxxxxxxxxxxx

# Rabbitmq username. If left unset, one will be automatically
# generated. (string value)
undercloud_rabbit_username = rabbit

# Heat stack domain admin password. If left unset, one will be
# automatically generated. (string value)
undercloud_heat_stack_domain_admin_password = xxxxxxxxxxxxxx

# Swift hash suffix. If left unset, one will be automatically
# generated. (string value)
#undercloud_swift_hash_suffix =

Yes, I am just testing the basic 1 controller + 1 compute case.
I can try setting a timeout as you did.

Thanks,
Erming

On 10/15/15, 3:19 PM, Sasha Chuzhoy wrote:
> Hi Erming,
> You can also check the log files on the nodes for errors (start with /var/log/messages).
>
> If things are working, "openstack overcloud deploy --templates" will create a
> non-HA deployment without network isolation, consisting of 1 controller and 1 compute.
> I usually add "--timeout 90", as this period of time is sufficient on my
> setup for deploying the overcloud.
>
> Seeing the IP being different from 192.0.2.x, I wonder what other changes
> were made to the undercloud.conf?
>
> Best regards,
> Sasha Chuzhoy.
>
> ----- Original Message -----
>> From: "Erming Pei"
>> To: "Dan Sneddon" , rdo-list at redhat.com
>> Sent: Thursday, October 15, 2015 4:03:26 PM
>> Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
>>
>> Hi Dan, Sasha,
>>
>> Thanks for your answers and hints.
>> I looked up the heat and other log files and the stack/node status.
>> The only thing I found so far is "timed out"; I don't know the reason.
>> IPMI looks good.
>>
>> Tried with HEAT_INCLUDE_PASSWORD=1 but got the same error message (Please try again
>> with option --include-password or export HEAT_INCLUDE_PASSWORD=1
>> Authentication required).
>>
>> BTW, I only followed the exact instruction shown in the guide (openstack
>> overcloud deploy --templates), with no more options. I thought this was
>> good for a demo deployment. If it is not sufficient, which one should I
>> follow? I saw some of your discussions, but it is not very clear. Should
>> I follow the example from jliberma at redhat.com?
>>
>> Below is my investigation:
>> By running: $ heat resource-list overcloud
>> Found that just the controller and compute failed: CREATE_FAILED
>>
>> Checking the reason, it says: resource_status_reason | CREATE aborted
>>
>> I then logged into the running overcloud nodes (e.g.
the controller): >> >> >> [heat-admin at overcloud-controller-0 ~]$ ifconfig >> br-ex: flags=4163 mtu 1500 >> inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 >> ether 02:21:5e:cd:9d:f3 txqueuelen 0 (Ethernet) >> RX packets 29926 bytes 2364154 (2.2 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 81 bytes 25614 (25.0 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> enp0s29f0u2: flags=4163 mtu 1500 >> inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 >> ether 02:21:5e:cd:9d:f3 txqueuelen 1000 (Ethernet) >> RX packets 29956 bytes 1947140 (1.8 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 102 bytes 28620 (27.9 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> enp11s0f0: flags=4163 mtu 1500 >> inet 10.0.6.64 netmask 255.255.0.0 broadcast 10.0.255.255 >> inet6 fe80::221:5eff:fec9:abd8 prefixlen 64 scopeid 0x20 >> ether 00:21:5e:c9:ab:d8 txqueuelen 1000 (Ethernet) >> RX packets 66256 bytes 21109918 (20.1 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 35938 bytes 4641202 (4.4 MiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> enp11s0f1: flags=4163 mtu 1500 >> inet6 fe80::221:5eff:fec9:abda prefixlen 64 scopeid 0x20 >> ether 00:21:5e:c9:ab:da txqueuelen 1000 (Ethernet) >> RX packets 25429 bytes 2004574 (1.9 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 6 bytes 532 (532.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> ib0: flags=4163 mtu 2044 >> inet6 fe80::202:c902:23:baf9 prefixlen 64 scopeid 0x20 >> Infiniband hardware address can be incorrect! Please read BUGS section in >> ifconfig(8). >> infiniband 80:00:04:04:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00 >> txqueuelen 256 (InfiniBand) >> RX packets 183678 bytes 10292768 (9.8 MiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 17 bytes 5380 (5.2 KiB) >> TX errors 0 dropped 7 overruns 0 carrier 0 collisions 0 >> >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 138 bytes 11792 (11.5 KiB) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 138 bytes 11792 (11.5 KiB) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> [heat-admin at overcloud-controller-0 ~]$ ovs-vsctl show >> ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed >> (Permission denied) >> [heat-admin at overcloud-controller-0 ~]$ sudo ovs-vsctl show >> 76e6f8a7-88cf-4920-b133-b4d15a4b9092 >> Bridge br-ex >> Port br-ex >> Interface br-ex >> type: internal >> Port "enp0s29f0u2" >> Interface "enp0s29f0u2" >> ovs_version: "2.3.1" >> [heat-admin at overcloud-controller-0 ~]$ >> [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.65 >> PING 10.0.6.65 (10.0.6.65) 56(84) bytes of data. >> 64 bytes from 10.0.6.65: icmp_seq=1 ttl=64 time=0.176 ms >> 64 bytes from 10.0.6.65: icmp_seq=2 ttl=64 time=0.195 ms >> ^C >> --- 10.0.6.65 ping statistics --- >> 2 packets transmitted, 2 received, 0% packet loss, time 999ms >> rtt min/avg/max/mdev = 0.176/0.185/0.195/0.016 ms >> [heat-admin at overcloud-controller-0 ~]$ >> [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.64 >> PING 10.0.6.64 (10.0.6.64) 56(84) bytes of data. 
>> 64 bytes from 10.0.6.64: icmp_seq=1 ttl=64 time=0.015 ms >> ^C >> --- 10.0.6.64 ping statistics --- >> 1 packets transmitted, 1 received, 0% packet loss, time 0ms >> rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms >> >> [heat-admin at overcloud-controller-0 ~]$ cat /etc/os-net-config/config.json >> {"network_config": [{"use_dhcp": true, "type": "ovs_bridge", "name": "br-ex", >> "members": [{"type": "interface", "name": "nic1", "primary": true}]}]} >> [heat-admin at overcloud-controller-0 ~]$ >> [heat-admin at overcloud-controller-0 ~]$ >> [heat-admin at overcloud-controller-0 ~]$ sudo os-net-config --debug -c >> /etc/os-net-config/config.json >> [2015/10/15 07:52:08 PM] [INFO] Using config file at: >> /etc/os-net-config/config.json >> [2015/10/15 07:52:08 PM] [INFO] Using mapping file at: >> /etc/os-net-config/mapping.yaml >> [2015/10/15 07:52:08 PM] [INFO] Ifcfg net config provider created. >> [2015/10/15 07:52:08 PM] [DEBUG] network_config JSON: [{'use_dhcp': True, >> 'type': 'ovs_bridge', 'name': 'br-ex', 'members': [{'type': 'interface', >> 'name': 'nic1', 'primary': True}]}] >> [2015/10/15 07:52:08 PM] [INFO] nic1 mapped to: enp0s29f0u2 >> [2015/10/15 07:52:08 PM] [INFO] nic2 mapped to: enp11s0f0 >> [2015/10/15 07:52:08 PM] [INFO] nic3 mapped to: enp11s0f1 >> [2015/10/15 07:52:08 PM] [INFO] nic4 mapped to: ib0 >> [2015/10/15 07:52:08 PM] [INFO] adding bridge: br-ex >> [2015/10/15 07:52:08 PM] [DEBUG] bridge data: DEVICE=br-ex >> ONBOOT=yes >> HOTPLUG=no >> DEVICETYPE=ovs >> TYPE=OVSBridge >> OVSBOOTPROTO=dhcp >> OVSDHCPINTERFACES="enp0s29f0u2" >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" >> >> [2015/10/15 07:52:08 PM] [INFO] adding interface: enp0s29f0u2 >> [2015/10/15 07:52:08 PM] [DEBUG] interface data: DEVICE=enp0s29f0u2 >> ONBOOT=yes >> HOTPLUG=no >> DEVICETYPE=ovs >> TYPE=OVSPort >> OVS_BRIDGE=br-ex >> BOOTPROTO=none >> >> [2015/10/15 07:52:08 PM] [INFO] applying network configs... 
>> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: >> DEVICE=enp0s29f0u2 >> ONBOOT=yes >> HOTPLUG=no >> DEVICETYPE=ovs >> TYPE=OVSPort >> OVS_BRIDGE=br-ex >> BOOTPROTO=none >> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: >> DEVICE=enp0s29f0u2 >> ONBOOT=yes >> HOTPLUG=no >> DEVICETYPE=ovs >> TYPE=OVSPort >> OVS_BRIDGE=br-ex >> BOOTPROTO=none >> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: >> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: >> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: >> DEVICE=br-ex >> ONBOOT=yes >> HOTPLUG=no >> DEVICETYPE=ovs >> TYPE=OVSBridge >> OVSBOOTPROTO=dhcp >> OVSDHCPINTERFACES="enp0s29f0u2" >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" >> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: >> DEVICE=br-ex >> ONBOOT=yes >> HOTPLUG=no >> DEVICETYPE=ovs >> TYPE=OVSBridge >> OVSBOOTPROTO=dhcp >> OVSDHCPINTERFACES="enp0s29f0u2" >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" >> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: >> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: >> >> >> >> [heat-admin at overcloud-controller-0 ~]$ openstack-status >> == Nova services == >> openstack-nova-api: inactive (disabled on boot) >> openstack-nova-cert: inactive (disabled on boot) >> openstack-nova-compute: inactive (disabled on boot) >> openstack-nova-network: inactive (disabled on boot) >> openstack-nova-scheduler: inactive (disabled on boot) >> openstack-nova-conductor: inactive (disabled on boot) >> == Glance services == >> openstack-glance-api: inactive (disabled on boot) >> openstack-glance-registry: inactive (disabled on boot) >> == Keystone service == >> openstack-keystone: inactive (disabled on boot) >> == Horizon service == >> openstack-dashboard: uncontactable >> == neutron services == >> neutron-server: inactive (disabled on boot) >> neutron-dhcp-agent: inactive (disabled on boot) >> neutron-l3-agent: inactive (disabled on boot) >> neutron-metadata-agent: inactive (disabled on boot) >> neutron-lbaas-agent: inactive (disabled on boot) >> neutron-openvswitch-agent: inactive (disabled on boot) >> neutron-metering-agent: inactive (disabled on boot) >> == Swift services == >> openstack-swift-proxy: inactive (disabled on boot) >> openstack-swift-account: inactive (disabled on boot) >> openstack-swift-container: inactive (disabled on boot) >> openstack-swift-object: inactive (disabled on boot) >> == Cinder services == >> openstack-cinder-api: inactive (disabled on boot) >> openstack-cinder-scheduler: inactive (disabled on boot) >> openstack-cinder-volume: inactive (disabled on boot) >> openstack-cinder-backup: inactive (disabled on boot) >> == Ceilometer services == >> openstack-ceilometer-api: inactive (disabled on boot) >> openstack-ceilometer-central: inactive (disabled on boot) >> openstack-ceilometer-compute: inactive (disabled on boot) >> openstack-ceilometer-collector: inactive (disabled on boot) >> openstack-ceilometer-alarm-notifier: inactive (disabled on boot) >> openstack-ceilometer-alarm-evaluator: inactive (disabled on boot) >> openstack-ceilometer-notification: inactive (disabled on boot) >> == Heat services == >> openstack-heat-api: inactive (disabled on boot) >> openstack-heat-api-cfn: inactive (disabled on boot) >> openstack-heat-api-cloudwatch: inactive (disabled on boot) >> openstack-heat-engine: inactive (disabled on boot) >> == Support services == >> libvirtd: active >> openvswitch: active >> dbus: active >> rabbitmq-server: inactive (disabled on boot) >> memcached: inactive (disabled on boot) 
>> == Keystone users == >> Warning keystonerc not sourced >> >> >> >> >> Thanks, >> >> Erming >> >> >> On 10/14/15, 5:23 PM, Dan Sneddon wrote: >> >> >> >> On 10/14/2015 03:03 PM, Erming Pei wrote: >> >> >> >> Hi, >> >> I am deploying the overcloud in baremetal way and after a couple of >> hours, it showed: >> >> $ openstack overcloud deploy --templates >> Deploying templates in the directory >> /usr/share/openstack-tripleo-heat-templates >> ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again >> with option --include-password or export HEAT_INCLUDE_PASSWORD=1 >> Authentication required >> >> >> But I checked the nodes are now running: >> >> [stack at gcloudcon-3 ~]$ nova list >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ >> >> | ID | Name | >> Status | Task State | Power State | Networks | >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ >> >> | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | >> ACTIVE | - | Running | ctlplane=10.0.6.60 | >> | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | >> ACTIVE | - | Running | ctlplane=10.0.6.61 | >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ >> >> >> 1. Should I re-deploy the nodes or there is a way to do update/makeup >> for the authentication issue? >> >> 2. >> I don't know how to access to the nodes. >> There is not an overcloudrc file produced. >> >> $ ls overcloud* >> overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 >> overcloud-full.vmlinuz >> >> overcloud-full.d: >> dib-manifests >> >> Is it via ssh key or password? Should I set the authentication method >> somewhere? >> >> >> >> Thanks, >> >> Erming >> >> >> _______________________________________________ >> Rdo-list mailing list Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: >> rdo-list-unsubscribe at redhat.com >> This error generally means that something in the deployment got stuck, >> and the deployment hung until the token expired after 4 hours. When >> that happens, there is no overcloudrc generated (because there is not a >> working overcloud). You won't be able to recover with a stack update, >> you'll need to perform a stack-delete and redeploy once you know what >> went wrong. >> >> Generally a deployment shouldn't take anywhere near that long, a bare >> metal deployment with 6 hosts takes me less than an hour, and less than >> 2 including a Ceph deployment. In fact, I usually set a timeout using >> the --timeout option, because if it hasn't finished after, say 90 >> minutes (depending on how complicated the deployment is), then I want >> it to bomb out so I can diagnose what went wrong and redeploy. >> >> Often when a deployment times out it is because there were connectivity >> issues between the nodes. 
>> Since you can log in to the hosts, you might want to do some basic
>> network troubleshooting.
>> [the list of suggested checks is trimmed here -- the same message is
>> quoted in full earlier in the thread]

--
---------------------------------------------
Erming Pei, Ph.D
Senior System Analyst; Grid/Cloud Specialist

Research Computing Group
Information Services & Technology
University of Alberta, Canada

Tel: +1 7804929914   Fax: +1 7804921729
---------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dsneddon at redhat.com Thu Oct 15 23:00:45 2015
From: dsneddon at redhat.com (Dan Sneddon)
Date: Thu, 15 Oct 2015 16:00:45 -0700
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
In-Reply-To: <5620068E.1020202@ualberta.ca>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <5620068E.1020202@ualberta.ca>
Message-ID: <5620301D.7020209@redhat.com>

If you are doing the most basic deployment (no network isolation) on bare
metal, you will want to specify which interface is your external network
interface. This will be the interface that gets attached to br-ex (it
defaults to your first interface, which may not be correct in your case).
In the basic deployment, this network requires a DHCP server to give the
controller an address, and if you want to use floating IPs you will need a
range of IPs that are free (that won't be assigned to other hosts by the
DHCP server).
So, if your external interface is 'enp11s0f0' (just guessing, since that
one actually has a DHCP address), then your command line will be at least:

openstack overcloud deploy --templates \
  --neutron-public-interface enp11s0f0

But you should probably include a reference to an NTP server:

openstack overcloud deploy --templates \
  --neutron-public-interface enp11s0f0 \
  --ntp-server pool.ntp.org

Some other options to consider: --timeout, --debug.
Selecting the tunnel type: --neutron-network-type, --neutron-tunnel-types.

--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter
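Taken together with Sasha's --timeout suggestion elsewhere in the thread, a
fuller command line might look like the following sketch (the vxlan values
and the 90-minute timeout are illustrative choices, not from the thread):

  openstack overcloud deploy --templates \
    --neutron-public-interface enp11s0f0 \
    --ntp-server pool.ntp.org \
    --neutron-network-type vxlan \
    --neutron-tunnel-types vxlan \
    --timeout 90 --debug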
On 10/15/2015 01:03 PM, Erming Pei wrote:
> Hi Dan, Sasha,
>
> [Erming's message of 10/15 quoted in full -- trimmed here; the identical
> text and diagnostics appear in his reply to Sasha above.]
Since you can log in to the hosts, you might >> want to do some basic network troubleshooting, such as: >> >> $ ip address # check to see that all the interfaces are there, and >> that the IP addresses have been assigned >> >> $ sudo ovs-vsctl show # make sure that the bridges have the proper >> interfaces, vlans, and that all the expected bridges show up >> >> $ ping # you can try this on all VLANs to make >> sure that any VLAN trunks are working properly >> >> $ sudo ovs-appctl bond/show # if running bonding, check to see the >> bond status >> >> $ sudo os-net-config --debug -c /etc/os-net-config/config.json # run >> the network configuration script again to make sure that it is able to >> configure the interfaces without error. WARNING, MAY BE DISRUPTIVE as >> this will reset the network interfaces, run on console if possible. >> >> However, I want to first double-check that you had a valid command >> line. You only show "openstack deploy overcloud --templates" in your >> original email. You did have a full command-line, right? Refer to the >> official installation guide for the right parameters. >> > > > -- > --------------------------------------------- > Erming Pei, Ph.D > Senior System Analyst; Grid/Cloud Specialist > > Research Computing Group > Information Services & Technology > University of Alberta, Canada > > Tel: +1 7804929914 Fax: +1 7804921729 > --------------------------------------------- > From ohochman at redhat.com Thu Oct 15 23:37:20 2015 From: ohochman at redhat.com (Omri Hochman) Date: Thu, 15 Oct 2015 19:37:20 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment In-Reply-To: <56202F11.4020505@ualberta.ca> References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <5620068E.1020202@ualberta.ca> <1377476884.58530663.1444943986933.JavaMail.zimbra@redhat.com> <56202F11.4020505@ualberta.ca> Message-ID: <719954854.51768581.1444952240444.JavaMail.zimbra@redhat.com> Note that we also had a bug when changed the default values of undercloud.conf It should've been fixed already, but make sure you're using the newer RPMs that are including this fix : https://bugzilla.redhat.com/show_bug.cgi?id=1270033 Regards, Omri. ----- Original Message ----- > From: "Erming Pei" > To: "Sasha Chuzhoy" > Cc: rdo-list at redhat.com > Sent: Thursday, October 15, 2015 6:56:17 PM > Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment > > Hi Sasha, > > I checked the sys logs and see many such errors: > > Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133 8516 > WARNING os_collect_config.ec2 [-] ('Connection aborted.', error(113, 'No > route to host')) > Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133 8516 > WARNING os-collect-config [-] Source [ec2] Unavailable. > Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 > WARNING os_collect_config.heat [-] No auth_url configured. > Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 > WARNING os_collect_config.request [-] No metadata_url configured. > Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 > WARNING os-collect-config [-] Source [request] Unavailable. > Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 > WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data > not found. 
Skipping > Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 > WARNING os_collect_config.local [-] No local metadata found > (['/var/lib/os-collect-config/local-data']) > > Below are my undercloud.conf (masked passwords). > > > [stack at gcloudcon-3 ~]$ cat undercloud.conf > [DEFAULT] > > # > # From instack-undercloud > # > > # Local file path to the necessary images. The path should be a > # directory readable by the current user that contains the full set of > # images. (string value) > #image_path = . > image_path = /gcloud/images > > # IP information for the interface on the Undercloud that will be > # handling the PXE boots and DHCP for Overcloud instances. The IP > # portion of the value will be assigned to the network interface > # defined by local_interface, with the netmask defined by the prefix > # portion of the value. (string value) > #local_ip = 192.0.2.1/24 > local_ip = 10.0.6.40/16 > > # Network interface on the Undercloud that will be handling the PXE > # boots and DHCP for Overcloud instances. (string value) > #local_interface = eth1 > local_interface = eth0 > > # Network that will be masqueraded for external access, if required. > # This should be the subnet used for PXE booting. (string value) > #masquerade_network = 192.0.2.0/24 > masquerade_network = 10.0.6.0/16 > > # Start of DHCP allocation range for PXE and DHCP of Overcloud > # instances. (string value) > #dhcp_start = 192.0.2.5 > dhcp_start = 10.0.6.50 > > # End of DHCP allocation range for PXE and DHCP of Overcloud > # instances. (string value) > #dhcp_end = 192.0.2.24 > dhcp_end = 10.0.6.250 > > # Network CIDR for the Neutron-managed network for Overcloud > # instances. This should be the subnet used for PXE booting. (string > # value) > #network_cidr = 192.0.2.0/24 > network_cidr = 10.0.6.0/16 > > # Network gateway for the Neutron-managed network for Overcloud > # instances. This should match the local_ip above when using > # masquerading. (string value) > #network_gateway = 192.0.2.1 > network_gateway = 10.0.6.40 > > # Network interface on which discovery dnsmasq will listen. If in > # doubt, use the default value. (string value) > #discovery_interface = br-ctlplane > > # Temporary IP range that will be given to nodes during the discovery > # process. Should not overlap with the range defined by dhcp_start > # and dhcp_end, but should be in the same network. (string value) > #discovery_iprange = 192.0.2.100,192.0.2.120 > discovery_iprange = 10.0.6.251,10.0.6.252 > > # Whether to run benchmarks when discovering nodes. (boolean value) > #discovery_runbench = false > > # Whether to enable the debug log level for Undercloud OpenStack > # services. (boolean value) > undercloud_debug = true > > > [auth] > > # > # From instack-undercloud > # > > # Password used for MySQL databases. If left unset, one will be > # automatically generated. (string value) > undercloud_db_password = xxxxxxxxxxxxxx > > # Keystone admin token. If left unset, one will be automatically > # generated. (string value) > #undercloud_admin_token = > > # Keystone admin password. If left unset, one will be automatically > # generated. (string value) > undercloud_admin_password = xxxxxxxxxxxxxx > > # Glance service password. If left unset, one will be automatically > # generated. (string value) > undercloud_glance_password = xxxxxxxxxxxxxx > > # Heat db encryption key(must be 8,16 or 32 characters. If left unset, > # one will be automatically generated. 
(string value) > #undercloud_heat_encryption_key = > > # Heat service password. If left unset, one will be automatically > # generated. (string value) > undercloud_heat_password = xxxxxxxxxxxxxx > > # Neutron service password. If left unset, one will be automatically > # generated. (string value) > undercloud_neutron_password = xxxxxxxxxxxxxx > > # Nova service password. If left unset, one will be automatically > # generated. (string value) > undercloud_nova_password = xxxxxxxxxxxxxx > > # Ironic service password. If left unset, one will be automatically > # generated. (string value) > undercloud_ironic_password = xxxxxxxxxxxxxx > > # Tuskar service password. If left unset, one will be automatically > # generated. (string value) > undercloud_tuskar_password = xxxxxxxxxxxxxx > > # Ceilometer service password. If left unset, one will be > # automatically generated. (string value) > undercloud_ceilometer_password = xxxxxxxxxxxxxx > > # Ceilometer metering secret. If left unset, one will be automatically > # generated. (string value) > #undercloud_ceilometer_metering_secret = > > # Ceilometer snmpd user. If left unset, one will be automatically > # generated. (string value) > undercloud_ceilometer_snmpd_user = ceilometer > > # Ceilometer snmpd password. If left unset, one will be automatically > # generated. (string value) > undercloud_ceilometer_snmpd_password = xxxxxxxxxxxxxx > > # Swift service password. If left unset, one will be automatically > # generated. (string value) > undercloud_swift_password = xxxxxxxxxxxxxx > > # Rabbitmq cookie. If left unset, one will be automatically generated. > # (string value) > #undercloud_rabbit_cookie = > > # Rabbitmq password. If left unset, one will be automatically > # generated. (string value) > undercloud_rabbit_password = xxxxxxxxxxxxxx > > # Rabbitmq username. If left unset, one will be automatically > # generated. (string value) > undercloud_rabbit_username = rabbit > > # Heat stack domain admin password. If left unset, one will be > # automatically generated. (string value) > undercloud_heat_stack_domain_admin_password = xxxxxxxxxxxxxx > > # Swift hash suffix. If left unset, one will be automatically > # generated. (string value) > #undercloud_swift_hash_suffix = > > > > Yes, I am just testing with the basic 1 controller and 1 compute case. > I can try with setting a timeout as you did. > > Thanks, > > Erming > > > On 10/15/15, 3:19 PM, Sasha Chuzhoy wrote: > > > > Hi Erming, > You can also check the log files on nodes for errors (start with > /var/log/messages). > > if things are working, "openstack overcloud deploy --template" will create a > nonHA deployment without network isolation consisting of 1 controller and 1 > compute. > I usually add "--timeout 90", as this period of time is sufficient on my > setup for deploying the overcloud. > > Seeing the IP being different than 192.0.2.x, I wonder what other changes > were made to the undercloud.conf? > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > > > From: "Erming Pei" To: "Dan Sneddon" > , rdo-list at redhat.com Sent: Thursday, October 15, 2015 > 4:03:26 PM > Subject: Re: [Rdo-list] [rdo-manager] Authentication required during > overcloud deployment > > Hi Dan, Sasha, > > Thanks for your answers and hints. > I looked up the heat/etc log files and stack/node status. > Only thing I found by far is "timed out". I don't know what's the reason. > IPMI looks good. 
> > Tried with HEAT_INCLUDE_PASSWORD=1 but same error message (Please try again > with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required) > > BTW. I only followed the exact instruction as shown in the guide: (openstack > overcloud deploy --templates) No more options. I thought this is good for a > demo deployment. If not sufficient, which one I should follow? See some of > your discussions, but not very clear. Should I follow the example from > jliberma at redhat.com ? > > Below are my investigation: > By runnig: $ heat resource-list overcloud > Found that just controller and compute are failed: CREATE_FAILED > > Checked the reason it says: resource_status_reason | CREATE aborted > > I then logged into the running overcloud nodes (e.g. the controller): > > > [heat-admin at overcloud-controller-0 ~]$ ifconfig > br-ex: flags=4163 mtu 1500 > inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 > ether 02:21:5e:cd:9d:f3 txqueuelen 0 (Ethernet) > RX packets 29926 bytes 2364154 (2.2 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 81 bytes 25614 (25.0 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp0s29f0u2: flags=4163 mtu 1500 > inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 > ether 02:21:5e:cd:9d:f3 txqueuelen 1000 (Ethernet) > RX packets 29956 bytes 1947140 (1.8 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 102 bytes 28620 (27.9 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp11s0f0: flags=4163 mtu 1500 > inet 10.0.6.64 netmask 255.255.0.0 broadcast 10.0.255.255 > inet6 fe80::221:5eff:fec9:abd8 prefixlen 64 scopeid 0x20 > ether 00:21:5e:c9:ab:d8 txqueuelen 1000 (Ethernet) > RX packets 66256 bytes 21109918 (20.1 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 35938 bytes 4641202 (4.4 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > enp11s0f1: flags=4163 mtu 1500 > inet6 fe80::221:5eff:fec9:abda prefixlen 64 scopeid 0x20 > ether 00:21:5e:c9:ab:da txqueuelen 1000 (Ethernet) > RX packets 25429 bytes 2004574 (1.9 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 6 bytes 532 (532.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > ib0: flags=4163 mtu 2044 > inet6 fe80::202:c902:23:baf9 prefixlen 64 scopeid 0x20 > Infiniband hardware address can be incorrect! Please read BUGS section in > ifconfig(8). 
> infiniband 80:00:04:04:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00 > txqueuelen 256 (InfiniBand) > RX packets 183678 bytes 10292768 (9.8 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 17 bytes 5380 (5.2 KiB) > TX errors 0 dropped 7 overruns 0 carrier 0 collisions 0 > > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 138 bytes 11792 (11.5 KiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 138 bytes 11792 (11.5 KiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > [heat-admin at overcloud-controller-0 ~]$ ovs-vsctl show > ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed > (Permission denied) > [heat-admin at overcloud-controller-0 ~]$ sudo ovs-vsctl show > 76e6f8a7-88cf-4920-b133-b4d15a4b9092 > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Port "enp0s29f0u2" > Interface "enp0s29f0u2" > ovs_version: "2.3.1" > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.65 > PING 10.0.6.65 (10.0.6.65) 56(84) bytes of data. > 64 bytes from 10.0.6.65: icmp_seq=1 ttl=64 time=0.176 ms > 64 bytes from 10.0.6.65: icmp_seq=2 ttl=64 time=0.195 ms > ^C > --- 10.0.6.65 ping statistics --- > 2 packets transmitted, 2 received, 0% packet loss, time 999ms > rtt min/avg/max/mdev = 0.176/0.185/0.195/0.016 ms > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.64 > PING 10.0.6.64 (10.0.6.64) 56(84) bytes of data. > 64 bytes from 10.0.6.64: icmp_seq=1 ttl=64 time=0.015 ms > ^C > --- 10.0.6.64 ping statistics --- > 1 packets transmitted, 1 received, 0% packet loss, time 0ms > rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms > > [heat-admin at overcloud-controller-0 ~]$ cat /etc/os-net-config/config.json > {"network_config": [{"use_dhcp": true, "type": "ovs_bridge", "name": "br-ex", > "members": [{"type": "interface", "name": "nic1", "primary": true}]}]} > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ > [heat-admin at overcloud-controller-0 ~]$ sudo os-net-config --debug -c > /etc/os-net-config/config.json > [2015/10/15 07:52:08 PM] [INFO] Using config file at: > /etc/os-net-config/config.json > [2015/10/15 07:52:08 PM] [INFO] Using mapping file at: > /etc/os-net-config/mapping.yaml > [2015/10/15 07:52:08 PM] [INFO] Ifcfg net config provider created. > [2015/10/15 07:52:08 PM] [DEBUG] network_config JSON: [{'use_dhcp': True, > 'type': 'ovs_bridge', 'name': 'br-ex', 'members': [{'type': 'interface', > 'name': 'nic1', 'primary': True}]}] > [2015/10/15 07:52:08 PM] [INFO] nic1 mapped to: enp0s29f0u2 > [2015/10/15 07:52:08 PM] [INFO] nic2 mapped to: enp11s0f0 > [2015/10/15 07:52:08 PM] [INFO] nic3 mapped to: enp11s0f1 > [2015/10/15 07:52:08 PM] [INFO] nic4 mapped to: ib0 > [2015/10/15 07:52:08 PM] [INFO] adding bridge: br-ex > [2015/10/15 07:52:08 PM] [DEBUG] bridge data: DEVICE=br-ex > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSBridge > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES="enp0s29f0u2" > OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > > [2015/10/15 07:52:08 PM] [INFO] adding interface: enp0s29f0u2 > [2015/10/15 07:52:08 PM] [DEBUG] interface data: DEVICE=enp0s29f0u2 > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > BOOTPROTO=none > > [2015/10/15 07:52:08 PM] [INFO] applying network configs... 
> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > DEVICE=enp0s29f0u2 > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > BOOTPROTO=none > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > DEVICE=enp0s29f0u2 > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > BOOTPROTO=none > > [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > > [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > DEVICE=br-ex > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSBridge > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES="enp0s29f0u2" > OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > DEVICE=br-ex > ONBOOT=yes > HOTPLUG=no > DEVICETYPE=ovs > TYPE=OVSBridge > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES="enp0s29f0u2" > OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > > [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > > [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > > > > [heat-admin at overcloud-controller-0 ~]$ openstack-status > == Nova services == > openstack-nova-api: inactive (disabled on boot) > openstack-nova-cert: inactive (disabled on boot) > openstack-nova-compute: inactive (disabled on boot) > openstack-nova-network: inactive (disabled on boot) > openstack-nova-scheduler: inactive (disabled on boot) > openstack-nova-conductor: inactive (disabled on boot) > == Glance services == > openstack-glance-api: inactive (disabled on boot) > openstack-glance-registry: inactive (disabled on boot) > == Keystone service == > openstack-keystone: inactive (disabled on boot) > == Horizon service == > openstack-dashboard: uncontactable > == neutron services == > neutron-server: inactive (disabled on boot) > neutron-dhcp-agent: inactive (disabled on boot) > neutron-l3-agent: inactive (disabled on boot) > neutron-metadata-agent: inactive (disabled on boot) > neutron-lbaas-agent: inactive (disabled on boot) > neutron-openvswitch-agent: inactive (disabled on boot) > neutron-metering-agent: inactive (disabled on boot) > == Swift services == > openstack-swift-proxy: inactive (disabled on boot) > openstack-swift-account: inactive (disabled on boot) > openstack-swift-container: inactive (disabled on boot) > openstack-swift-object: inactive (disabled on boot) > == Cinder services == > openstack-cinder-api: inactive (disabled on boot) > openstack-cinder-scheduler: inactive (disabled on boot) > openstack-cinder-volume: inactive (disabled on boot) > openstack-cinder-backup: inactive (disabled on boot) > == Ceilometer services == > openstack-ceilometer-api: inactive (disabled on boot) > openstack-ceilometer-central: inactive (disabled on boot) > openstack-ceilometer-compute: inactive (disabled on boot) > openstack-ceilometer-collector: inactive (disabled on boot) > openstack-ceilometer-alarm-notifier: inactive (disabled on boot) > openstack-ceilometer-alarm-evaluator: inactive (disabled on boot) > openstack-ceilometer-notification: inactive (disabled on boot) > == Heat services == > openstack-heat-api: inactive (disabled on boot) > openstack-heat-api-cfn: inactive (disabled on boot) > openstack-heat-api-cloudwatch: inactive (disabled on boot) > openstack-heat-engine: inactive (disabled on boot) > == Support services == > libvirtd: active > openvswitch: active > dbus: active > rabbitmq-server: inactive (disabled on boot) > memcached: inactive (disabled on boot) > == Keystone users == > Warning keystonerc not sourced > > > > > Thanks, > > Erming > > > On 
10/14/15, 5:23 PM, Dan Sneddon wrote:
>
> On 10/14/2015 03:03 PM, Erming Pei wrote:
>
> > Hi,
> >
> > I am deploying the overcloud in baremetal way and after a couple of
> > hours, it showed:
> >
> > $ openstack overcloud deploy --templates
> > Deploying templates in the directory
> > /usr/share/openstack-tripleo-heat-templates
> > ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again
> > with option --include-password or export HEAT_INCLUDE_PASSWORD=1
> > Authentication required
> >
> > But I checked the nodes are now running:
> >
> > [stack at gcloudcon-3 ~]$ nova list
> > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> > | ID                                   | Name                    | Status | Task State | Power State | Networks           |
> > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> > | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=10.0.6.60 |
> > | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=10.0.6.61 |
> > +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> >
> > 1. Should I re-deploy the nodes, or is there a way to do an update/makeup
> > for the authentication issue?
> >
> > 2. I don't know how to access the nodes.
> > There is no overcloudrc file produced.
> >
> > $ ls overcloud*
> > overcloud-env.json overcloud-full.initrd overcloud-full.qcow2
> > overcloud-full.vmlinuz
> >
> > overcloud-full.d:
> > dib-manifests
> >
> > Is it via ssh key or password? Should I set the authentication method
> > somewhere?
> >
> > Thanks,
> >
> > Erming
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> This error generally means that something in the deployment got stuck,
> and the deployment hung until the token expired after 4 hours. When
> that happens, there is no overcloudrc generated (because there is not a
> working overcloud). You won't be able to recover with a stack update,
> you'll need to perform a stack-delete and redeploy once you know what
> went wrong.
>
> Generally a deployment shouldn't take anywhere near that long, a bare
> metal deployment with 6 hosts takes me less than an hour, and less than
> 2 including a Ceph deployment. In fact, I usually set a timeout using
> the --timeout option, because if it hasn't finished after, say 90
> minutes (depending on how complicated the deployment is), then I want
> it to bomb out so I can diagnose what went wrong and redeploy.
>
> Often when a deployment times out it is because there were connectivity
> issues between the nodes. Since you can log in to the hosts, you might
> want to do some basic network troubleshooting, such as:
>
> $ ip address  # check to see that all the interfaces are there, and
> that the IP addresses have been assigned
>
> $ sudo ovs-vsctl show  # make sure that the bridges have the proper
> interfaces, vlans, and that all the expected bridges show up
>
> $ ping <an address on each VLAN>  # you can try this on all VLANs to
> make sure that any VLAN trunks are working properly
>
> $ sudo ovs-appctl bond/show  # if running bonding, check to see the
> bond status
>
> $ sudo os-net-config --debug -c /etc/os-net-config/config.json  # run
> the network configuration script again to make sure that it is able to
> configure the interfaces without error. WARNING, MAY BE DISRUPTIVE as
> this will reset the network interfaces, run on console if possible.
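Put together, the pattern described above might look like this from the
undercloud. This is a sketch rather than a verbatim recipe: the 90-minute
value is only an example, and the login assumes the usual TripleO image
setup where the undercloud stack user's SSH key is injected for the
heat-admin user (which matches the heat-admin prompts seen elsewhere in
this thread):

    $ source stackrc
    $ openstack overcloud deploy --templates --timeout 90   # bomb out early instead of waiting out the 4-hour token
    $ nova list                                             # on failure, note the node's ctlplane IP
    $ ssh heat-admin@10.0.6.60                              # then run the checks above on the node itself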
> However, I want to first double-check that you had a valid command
> line. You only show "openstack overcloud deploy --templates" in your
> original email. You did have a full command line, right? Refer to the
> official installation guide for the right parameters.
>
> --
> ---------------------------------------------
> Erming Pei, Ph.D
> Senior System Analyst; Grid/Cloud Specialist
>
> Research Computing Group
> Information Services & Technology
> University of Alberta, Canada
>
> Tel: +1 7804929914 Fax: +1 7804921729
> ---------------------------------------------
>
> _______________________________________________
> Rdo-list mailing list Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe:
> rdo-list-unsubscribe at redhat.com
>
> --
> ---------------------------------------------
> Erming Pei, Ph.D
> Senior System Analyst; Grid/Cloud Specialist
>
> Research Computing Group
> Information Services & Technology
> University of Alberta, Canada
>
> Tel: +1 7804929914 Fax: +1 7804921729
> ---------------------------------------------
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mohammed.arafa at gmail.com Fri Oct 16 00:03:18 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Fri, 16 Oct 2015 02:03:18 +0200
Subject: [Rdo-list] [rdo-manager] liberty
Message-ID:

Hi, I followed the Liberty documentation today. I made sure I updated all
the packages AND then I decided NOT to use git source. The output is below:

missing files in /opt/stack/selinux-policy/*.te
++ mktemp -d
+ TMPDIR=/tmp/tmp.nVF4AOtz37
+ '[' -x /usr/sbin/semanage ']'
+ cd /tmp/tmp.nVF4AOtz37
++ ls '/opt/stack/selinux-policy/*.te'
ls: cannot access /opt/stack/selinux-policy/*.te: No such file or directory
+ semodule -i '/tmp/tmp.nVF4AOtz37/*.pp'
semodule: Failed on /tmp/tmp.nVF4AOtz37/*.pp!
[2015-10-16 01:56:37,419] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
[2015-10-16 01:56:37,420] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install
    _run_orc(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc
    _run_live_command(args, instack_env, 'os-refresh-config')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1

Then the contents of that directory are:

[stack at rdo ~]$ ll /opt/stack/selinux-policy/
total 4
-rw-r--r--. 1 root root 960 Oct 16 01:56 ipxe.pp
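Incidentally, the literal '*.pp' reaching semodule in that trace is normal
shell behaviour when a glob matches nothing: the unmatched pattern is passed
through verbatim (unless nullglob or failglob is set). A tiny illustration
in a throwaway directory (the path here is made up):

    $ mkdir /tmp/globdemo && cd /tmp/globdemo
    $ echo *.pp            # no matches, so the literal pattern is handed to the command
    *.pp
    $ shopt -s nullglob
    $ echo *.pp            # with nullglob an unmatched pattern expands to nothing

So the failure above points at the build leaving no .te policies in
/opt/stack/selinux-policy (only the prebuilt ipxe.pp), which matches the
"missing files" line at the top of the trace.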
--
805010942448935
GR750055912MA
Link to me on LinkedIn

From apevec at gmail.com Thu Oct 15 22:53:48 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 16 Oct 2015 00:53:48 +0200
Subject: [Rdo-list] openstack-app-catalog-ui package/RDO process
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7D96E4@EX10MBOX06.pnnl.gov>
References: <1A3C52DFCD06494D8528644858247BF01B7D96E4@EX10MBOX06.pnnl.gov>
Message-ID:

2015-10-14 18:35 GMT+02:00 Fox, Kevin M :
> We recently got the same package into Debian, and it took less than a week
> from start to finish. Not trying to be mean or anything here, just trying to
> identify obstacles that we can take down to make the process smoother, to
> bring contributing to RDO in line with other distros. With the big tent
> being a thing, I believe lowering that bar will become increasingly
> important to RDO's success.

The Fedora package review process is not as automated as it could be and
depends very much on human intervention. It also has some annoying
bureaucratic limitations, like you can't easily change the reporter or
reviewer... Haikel has started improvements in a "forked" RDO review process
https://trello.com/c/7WaR9j3X/81-bugzilla-component-for-packages-reviews
where we will then add more automation, e.g. review in Gerrit where the
initial fedora-review report is posted by CI jobs, etc.

> https://bugzilla.redhat.com/show_bug.cgi?id=1268372

In your particular review I see Matthias has provided initial feedback;
the trouble is that you were not clearly notified, so I've now set
needinfo on you to have a look. Since your package is the first of this
kind, we'd like to use the opportunity to do it right so it can be used
as an example for future UI plugin packages.

Cheers,
Alan

From sasha at redhat.com Fri Oct 16 03:50:22 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Thu, 15 Oct 2015 23:50:22 -0400 (EDT)
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud
 deployment
In-Reply-To: <56202F11.4020505@ualberta.ca>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com>
 <5620068E.1020202@ualberta.ca>
 <1377476884.58530663.1444943986933.JavaMail.zimbra@redhat.com>
 <56202F11.4020505@ualberta.ca>
Message-ID: <900691934.58686512.1444967422246.JavaMail.zimbra@redhat.com>

Hi Erming,
So I tried to reproduce your issue by setting the passwords in the auth
section of the undercloud file. My deployment completed successfully,
although I ran into https://bugzilla.redhat.com/show_bug.cgi?id=1271289.

The warnings (not errors) appear on the successfully deployed node too:

Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.845 1634 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.845 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.846 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.847 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.167 1634 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.150 1634 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.153 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.153 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.154 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
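If it helps to separate this recurring noise from a real failure, the
journal can be filtered the same way the ironic-conductor checks elsewhere
in this thread are. A sketch, assuming the unit is named
os-collect-config.service on these images:

    $ sudo journalctl -u os-collect-config.service | grep -i error   # real failures only, warnings excluded
    $ sudo journalctl -fl -u os-collect-config.service               # or follow it live during a deploy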
Did you go through the history of the commands you executed and compare it
with the guide?
Thanks.

Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Erming Pei"
> To: "Sasha Chuzhoy"
> Cc: "Dan Sneddon" , rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 6:56:17 PM
> Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
>
> Hi Sasha,
>
> I checked the sys logs and see many such errors:
>
> Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133 8516 WARNING os_collect_config.ec2 [-] ('Connection aborted.', error(113, 'No route to host'))
> Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133 8516 WARNING os-collect-config [-] Source [ec2] Unavailable.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.heat [-] No auth_url configured.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.request [-] No metadata_url configured.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os-collect-config [-] Source [request] Unavailable.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007 8516 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
>
> Below is my undercloud.conf (passwords masked).
>
> [stack at gcloudcon-3 ~]$ cat undercloud.conf
> [DEFAULT]
>
> #
> # From instack-undercloud
> #
>
> # Local file path to the necessary images. The path should be a
> # directory readable by the current user that contains the full set of
> # images. (string value)
> #image_path = .
> image_path = /gcloud/images
>
> # IP information for the interface on the Undercloud that will be
> # handling the PXE boots and DHCP for Overcloud instances. The IP
> # portion of the value will be assigned to the network interface
> # defined by local_interface, with the netmask defined by the prefix
> # portion of the value. (string value)
> #local_ip = 192.0.2.1/24
> local_ip = 10.0.6.40/16
>
> # Network interface on the Undercloud that will be handling the PXE
> # boots and DHCP for Overcloud instances. (string value)
> #local_interface = eth1
> local_interface = eth0
>
> # Network that will be masqueraded for external access, if required.
> # This should be the subnet used for PXE booting.
(string value) > #masquerade_network = 192.0.2.0/24 > masquerade_network = 10.0.6.0/16 > > # Start of DHCP allocation range for PXE and DHCP of Overcloud > # instances. (string value) > #dhcp_start = 192.0.2.5 > dhcp_start = 10.0.6.50 > > # End of DHCP allocation range for PXE and DHCP of Overcloud > # instances. (string value) > #dhcp_end = 192.0.2.24 > dhcp_end = 10.0.6.250 > > # Network CIDR for the Neutron-managed network for Overcloud > # instances. This should be the subnet used for PXE booting. (string > # value) > #network_cidr = 192.0.2.0/24 > network_cidr = 10.0.6.0/16 > > # Network gateway for the Neutron-managed network for Overcloud > # instances. This should match the local_ip above when using > # masquerading. (string value) > #network_gateway = 192.0.2.1 > network_gateway = 10.0.6.40 > > # Network interface on which discovery dnsmasq will listen. If in > # doubt, use the default value. (string value) > #discovery_interface = br-ctlplane > > # Temporary IP range that will be given to nodes during the discovery > # process. Should not overlap with the range defined by dhcp_start > # and dhcp_end, but should be in the same network. (string value) > #discovery_iprange = 192.0.2.100,192.0.2.120 > discovery_iprange = 10.0.6.251,10.0.6.252 > > # Whether to run benchmarks when discovering nodes. (boolean value) > #discovery_runbench = false > > # Whether to enable the debug log level for Undercloud OpenStack > # services. (boolean value) > undercloud_debug = true > > > [auth] > > # > # From instack-undercloud > # > > # Password used for MySQL databases. If left unset, one will be > # automatically generated. (string value) > undercloud_db_password = xxxxxxxxxxxxxx > > # Keystone admin token. If left unset, one will be automatically > # generated. (string value) > #undercloud_admin_token = > > # Keystone admin password. If left unset, one will be automatically > # generated. (string value) > undercloud_admin_password = xxxxxxxxxxxxxx > > # Glance service password. If left unset, one will be automatically > # generated. (string value) > undercloud_glance_password = xxxxxxxxxxxxxx > > # Heat db encryption key(must be 8,16 or 32 characters. If left unset, > # one will be automatically generated. (string value) > #undercloud_heat_encryption_key = > > # Heat service password. If left unset, one will be automatically > # generated. (string value) > undercloud_heat_password = xxxxxxxxxxxxxx > > # Neutron service password. If left unset, one will be automatically > # generated. (string value) > undercloud_neutron_password = xxxxxxxxxxxxxx > > # Nova service password. If left unset, one will be automatically > # generated. (string value) > undercloud_nova_password = xxxxxxxxxxxxxx > > # Ironic service password. If left unset, one will be automatically > # generated. (string value) > undercloud_ironic_password = xxxxxxxxxxxxxx > > # Tuskar service password. If left unset, one will be automatically > # generated. (string value) > undercloud_tuskar_password = xxxxxxxxxxxxxx > > # Ceilometer service password. If left unset, one will be > # automatically generated. (string value) > undercloud_ceilometer_password = xxxxxxxxxxxxxx > > # Ceilometer metering secret. If left unset, one will be automatically > # generated. (string value) > #undercloud_ceilometer_metering_secret = > > # Ceilometer snmpd user. If left unset, one will be automatically > # generated. (string value) > undercloud_ceilometer_snmpd_user = ceilometer > > # Ceilometer snmpd password. 
If left unset, one will be automatically > # generated. (string value) > undercloud_ceilometer_snmpd_password = xxxxxxxxxxxxxx > > # Swift service password. If left unset, one will be automatically > # generated. (string value) > undercloud_swift_password = xxxxxxxxxxxxxx > > # Rabbitmq cookie. If left unset, one will be automatically generated. > # (string value) > #undercloud_rabbit_cookie = > > # Rabbitmq password. If left unset, one will be automatically > # generated. (string value) > undercloud_rabbit_password = xxxxxxxxxxxxxx > > # Rabbitmq username. If left unset, one will be automatically > # generated. (string value) > undercloud_rabbit_username = rabbit > > # Heat stack domain admin password. If left unset, one will be > # automatically generated. (string value) > undercloud_heat_stack_domain_admin_password = xxxxxxxxxxxxxx > > # Swift hash suffix. If left unset, one will be automatically > # generated. (string value) > #undercloud_swift_hash_suffix = > > > > Yes, I am just testing with the basic 1 controller and 1 compute case. > I can try with setting a timeout as you did. > > Thanks, > > Erming > > > On 10/15/15, 3:19 PM, Sasha Chuzhoy wrote: > > Hi Erming, > > You can also check the log files on nodes for errors (start with > > /var/log/messages). > > > > if things are working, "openstack overcloud deploy --template" will create > > a nonHA deployment without network isolation consisting of 1 controller > > and 1 compute. > > I usually add "--timeout 90", as this period of time is sufficient on my > > setup for deploying the overcloud. > > > > Seeing the IP being different than 192.0.2.x, I wonder what other changes > > were made to the undercloud.conf? > > > > Best regards, > > Sasha Chuzhoy. > > > > ----- Original Message ----- > >> From: "Erming Pei" > >> To: "Dan Sneddon" , rdo-list at redhat.com > >> Sent: Thursday, October 15, 2015 4:03:26 PM > >> Subject: Re: [Rdo-list] [rdo-manager] Authentication required during > >> overcloud deployment > >> > >> Hi Dan, Sasha, > >> > >> Thanks for your answers and hints. > >> I looked up the heat/etc log files and stack/node status. > >> Only thing I found by far is "timed out". I don't know what's the reason. > >> IPMI looks good. > >> > >> Tried with HEAT_INCLUDE_PASSWORD=1 but same error message (Please try > >> again > >> with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > >> Authentication required) > >> > >> BTW. I only followed the exact instruction as shown in the guide: > >> (openstack > >> overcloud deploy --templates) No more options. I thought this is good for > >> a > >> demo deployment. If not sufficient, which one I should follow? See some of > >> your discussions, but not very clear. Should I follow the example from > >> jliberma at redhat.com ? > >> > >> Below are my investigation: > >> By runnig: $ heat resource-list overcloud > >> Found that just controller and compute are failed: CREATE_FAILED > >> > >> Checked the reason it says: resource_status_reason | CREATE aborted > >> > >> I then logged into the running overcloud nodes (e.g. 
the controller): > >> > >> > >> [heat-admin at overcloud-controller-0 ~]$ ifconfig > >> br-ex: flags=4163 mtu 1500 > >> inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 > >> ether 02:21:5e:cd:9d:f3 txqueuelen 0 (Ethernet) > >> RX packets 29926 bytes 2364154 (2.2 MiB) > >> RX errors 0 dropped 0 overruns 0 frame 0 > >> TX packets 81 bytes 25614 (25.0 KiB) > >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > >> > >> enp0s29f0u2: flags=4163 mtu 1500 > >> inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20 > >> ether 02:21:5e:cd:9d:f3 txqueuelen 1000 (Ethernet) > >> RX packets 29956 bytes 1947140 (1.8 MiB) > >> RX errors 0 dropped 0 overruns 0 frame 0 > >> TX packets 102 bytes 28620 (27.9 KiB) > >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > >> > >> enp11s0f0: flags=4163 mtu 1500 > >> inet 10.0.6.64 netmask 255.255.0.0 broadcast 10.0.255.255 > >> inet6 fe80::221:5eff:fec9:abd8 prefixlen 64 scopeid 0x20 > >> ether 00:21:5e:c9:ab:d8 txqueuelen 1000 (Ethernet) > >> RX packets 66256 bytes 21109918 (20.1 MiB) > >> RX errors 0 dropped 0 overruns 0 frame 0 > >> TX packets 35938 bytes 4641202 (4.4 MiB) > >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > >> > >> enp11s0f1: flags=4163 mtu 1500 > >> inet6 fe80::221:5eff:fec9:abda prefixlen 64 scopeid 0x20 > >> ether 00:21:5e:c9:ab:da txqueuelen 1000 (Ethernet) > >> RX packets 25429 bytes 2004574 (1.9 MiB) > >> RX errors 0 dropped 0 overruns 0 frame 0 > >> TX packets 6 bytes 532 (532.0 B) > >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > >> > >> ib0: flags=4163 mtu 2044 > >> inet6 fe80::202:c902:23:baf9 prefixlen 64 scopeid 0x20 > >> Infiniband hardware address can be incorrect! Please read BUGS section in > >> ifconfig(8). > >> infiniband 80:00:04:04:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00 > >> txqueuelen 256 (InfiniBand) > >> RX packets 183678 bytes 10292768 (9.8 MiB) > >> RX errors 0 dropped 0 overruns 0 frame 0 > >> TX packets 17 bytes 5380 (5.2 KiB) > >> TX errors 0 dropped 7 overruns 0 carrier 0 collisions 0 > >> > >> lo: flags=73 mtu 65536 > >> inet 127.0.0.1 netmask 255.0.0.0 > >> inet6 ::1 prefixlen 128 scopeid 0x10 > >> loop txqueuelen 0 (Local Loopback) > >> RX packets 138 bytes 11792 (11.5 KiB) > >> RX errors 0 dropped 0 overruns 0 frame 0 > >> TX packets 138 bytes 11792 (11.5 KiB) > >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > >> > >> [heat-admin at overcloud-controller-0 ~]$ ovs-vsctl show > >> ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed > >> (Permission denied) > >> [heat-admin at overcloud-controller-0 ~]$ sudo ovs-vsctl show > >> 76e6f8a7-88cf-4920-b133-b4d15a4b9092 > >> Bridge br-ex > >> Port br-ex > >> Interface br-ex > >> type: internal > >> Port "enp0s29f0u2" > >> Interface "enp0s29f0u2" > >> ovs_version: "2.3.1" > >> [heat-admin at overcloud-controller-0 ~]$ > >> [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.65 > >> PING 10.0.6.65 (10.0.6.65) 56(84) bytes of data. > >> 64 bytes from 10.0.6.65: icmp_seq=1 ttl=64 time=0.176 ms > >> 64 bytes from 10.0.6.65: icmp_seq=2 ttl=64 time=0.195 ms > >> ^C > >> --- 10.0.6.65 ping statistics --- > >> 2 packets transmitted, 2 received, 0% packet loss, time 999ms > >> rtt min/avg/max/mdev = 0.176/0.185/0.195/0.016 ms > >> [heat-admin at overcloud-controller-0 ~]$ > >> [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.64 > >> PING 10.0.6.64 (10.0.6.64) 56(84) bytes of data. 
> >> 64 bytes from 10.0.6.64: icmp_seq=1 ttl=64 time=0.015 ms > >> ^C > >> --- 10.0.6.64 ping statistics --- > >> 1 packets transmitted, 1 received, 0% packet loss, time 0ms > >> rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms > >> > >> [heat-admin at overcloud-controller-0 ~]$ cat /etc/os-net-config/config.json > >> {"network_config": [{"use_dhcp": true, "type": "ovs_bridge", "name": > >> "br-ex", > >> "members": [{"type": "interface", "name": "nic1", "primary": true}]}]} > >> [heat-admin at overcloud-controller-0 ~]$ > >> [heat-admin at overcloud-controller-0 ~]$ > >> [heat-admin at overcloud-controller-0 ~]$ sudo os-net-config --debug -c > >> /etc/os-net-config/config.json > >> [2015/10/15 07:52:08 PM] [INFO] Using config file at: > >> /etc/os-net-config/config.json > >> [2015/10/15 07:52:08 PM] [INFO] Using mapping file at: > >> /etc/os-net-config/mapping.yaml > >> [2015/10/15 07:52:08 PM] [INFO] Ifcfg net config provider created. > >> [2015/10/15 07:52:08 PM] [DEBUG] network_config JSON: [{'use_dhcp': True, > >> 'type': 'ovs_bridge', 'name': 'br-ex', 'members': [{'type': 'interface', > >> 'name': 'nic1', 'primary': True}]}] > >> [2015/10/15 07:52:08 PM] [INFO] nic1 mapped to: enp0s29f0u2 > >> [2015/10/15 07:52:08 PM] [INFO] nic2 mapped to: enp11s0f0 > >> [2015/10/15 07:52:08 PM] [INFO] nic3 mapped to: enp11s0f1 > >> [2015/10/15 07:52:08 PM] [INFO] nic4 mapped to: ib0 > >> [2015/10/15 07:52:08 PM] [INFO] adding bridge: br-ex > >> [2015/10/15 07:52:08 PM] [DEBUG] bridge data: DEVICE=br-ex > >> ONBOOT=yes > >> HOTPLUG=no > >> DEVICETYPE=ovs > >> TYPE=OVSBridge > >> OVSBOOTPROTO=dhcp > >> OVSDHCPINTERFACES="enp0s29f0u2" > >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > >> > >> [2015/10/15 07:52:08 PM] [INFO] adding interface: enp0s29f0u2 > >> [2015/10/15 07:52:08 PM] [DEBUG] interface data: DEVICE=enp0s29f0u2 > >> ONBOOT=yes > >> HOTPLUG=no > >> DEVICETYPE=ovs > >> TYPE=OVSPort > >> OVS_BRIDGE=br-ex > >> BOOTPROTO=none > >> > >> [2015/10/15 07:52:08 PM] [INFO] applying network configs... 
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > >> DEVICE=enp0s29f0u2 > >> ONBOOT=yes > >> HOTPLUG=no > >> DEVICETYPE=ovs > >> TYPE=OVSPort > >> OVS_BRIDGE=br-ex > >> BOOTPROTO=none > >> > >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > >> DEVICE=enp0s29f0u2 > >> ONBOOT=yes > >> HOTPLUG=no > >> DEVICETYPE=ovs > >> TYPE=OVSPort > >> OVS_BRIDGE=br-ex > >> BOOTPROTO=none > >> > >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > >> > >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > >> > >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > >> DEVICE=br-ex > >> ONBOOT=yes > >> HOTPLUG=no > >> DEVICETYPE=ovs > >> TYPE=OVSBridge > >> OVSBOOTPROTO=dhcp > >> OVSDHCPINTERFACES="enp0s29f0u2" > >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > >> > >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > >> DEVICE=br-ex > >> ONBOOT=yes > >> HOTPLUG=no > >> DEVICETYPE=ovs > >> TYPE=OVSBridge > >> OVSBOOTPROTO=dhcp > >> OVSDHCPINTERFACES="enp0s29f0u2" > >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3" > >> > >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data: > >> > >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data: > >> > >> > >> > >> [heat-admin at overcloud-controller-0 ~]$ openstack-status > >> == Nova services == > >> openstack-nova-api: inactive (disabled on boot) > >> openstack-nova-cert: inactive (disabled on boot) > >> openstack-nova-compute: inactive (disabled on boot) > >> openstack-nova-network: inactive (disabled on boot) > >> openstack-nova-scheduler: inactive (disabled on boot) > >> openstack-nova-conductor: inactive (disabled on boot) > >> == Glance services == > >> openstack-glance-api: inactive (disabled on boot) > >> openstack-glance-registry: inactive (disabled on boot) > >> == Keystone service == > >> openstack-keystone: inactive (disabled on boot) > >> == Horizon service == > >> openstack-dashboard: uncontactable > >> == neutron services == > >> neutron-server: inactive (disabled on boot) > >> neutron-dhcp-agent: inactive (disabled on boot) > >> neutron-l3-agent: inactive (disabled on boot) > >> neutron-metadata-agent: inactive (disabled on boot) > >> neutron-lbaas-agent: inactive (disabled on boot) > >> neutron-openvswitch-agent: inactive (disabled on boot) > >> neutron-metering-agent: inactive (disabled on boot) > >> == Swift services == > >> openstack-swift-proxy: inactive (disabled on boot) > >> openstack-swift-account: inactive (disabled on boot) > >> openstack-swift-container: inactive (disabled on boot) > >> openstack-swift-object: inactive (disabled on boot) > >> == Cinder services == > >> openstack-cinder-api: inactive (disabled on boot) > >> openstack-cinder-scheduler: inactive (disabled on boot) > >> openstack-cinder-volume: inactive (disabled on boot) > >> openstack-cinder-backup: inactive (disabled on boot) > >> == Ceilometer services == > >> openstack-ceilometer-api: inactive (disabled on boot) > >> openstack-ceilometer-central: inactive (disabled on boot) > >> openstack-ceilometer-compute: inactive (disabled on boot) > >> openstack-ceilometer-collector: inactive (disabled on boot) > >> openstack-ceilometer-alarm-notifier: inactive (disabled on boot) > >> openstack-ceilometer-alarm-evaluator: inactive (disabled on boot) > >> openstack-ceilometer-notification: inactive (disabled on boot) > >> == Heat services == > >> openstack-heat-api: inactive (disabled on boot) > >> openstack-heat-api-cfn: inactive (disabled on boot) > >> openstack-heat-api-cloudwatch: inactive (disabled on boot) > >> openstack-heat-engine: inactive 
(disabled on boot) > >> == Support services == > >> libvirtd: active > >> openvswitch: active > >> dbus: active > >> rabbitmq-server: inactive (disabled on boot) > >> memcached: inactive (disabled on boot) > >> == Keystone users == > >> Warning keystonerc not sourced > >> > >> > >> > >> > >> Thanks, > >> > >> Erming > >> > >> > >> On 10/14/15, 5:23 PM, Dan Sneddon wrote: > >> > >> > >> > >> On 10/14/2015 03:03 PM, Erming Pei wrote: > >> > >> > >> > >> Hi, > >> > >> I am deploying the overcloud in baremetal way and after a couple of > >> hours, it showed: > >> > >> $ openstack overcloud deploy --templates > >> Deploying templates in the directory > >> /usr/share/openstack-tripleo-heat-templates > >> ^[[A^[[BERROR: openstack ERROR: Authentication failed. Please try again > >> with option --include-password or export HEAT_INCLUDE_PASSWORD=1 > >> Authentication required > >> > >> > >> But I checked the nodes are now running: > >> > >> [stack at gcloudcon-3 ~]$ nova list > >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > >> > >> | ID | Name | > >> Status | Task State | Power State | Networks | > >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > >> > >> | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0 | > >> ACTIVE | - | Running | ctlplane=10.0.6.60 | > >> | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | > >> ACTIVE | - | Running | ctlplane=10.0.6.61 | > >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+ > >> > >> > >> 1. Should I re-deploy the nodes or there is a way to do update/makeup > >> for the authentication issue? > >> > >> 2. > >> I don't know how to access to the nodes. > >> There is not an overcloudrc file produced. > >> > >> $ ls overcloud* > >> overcloud-env.json overcloud-full.initrd overcloud-full.qcow2 > >> overcloud-full.vmlinuz > >> > >> overcloud-full.d: > >> dib-manifests > >> > >> Is it via ssh key or password? Should I set the authentication method > >> somewhere? > >> > >> > >> > >> Thanks, > >> > >> Erming > >> > >> > >> _______________________________________________ > >> Rdo-list mailing list Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: > >> rdo-list-unsubscribe at redhat.com > >> This error generally means that something in the deployment got stuck, > >> and the deployment hung until the token expired after 4 hours. When > >> that happens, there is no overcloudrc generated (because there is not a > >> working overcloud). You won't be able to recover with a stack update, > >> you'll need to perform a stack-delete and redeploy once you know what > >> went wrong. > >> > >> Generally a deployment shouldn't take anywhere near that long, a bare > >> metal deployment with 6 hosts takes me less than an hour, and less than > >> 2 including a Ceph deployment. In fact, I usually set a timeout using > >> the --timeout option, because if it hasn't finished after, say 90 > >> minutes (depending on how complicated the deployment is), then I want > >> it to bomb out so I can diagnose what went wrong and redeploy. > >> > >> Often when a deployment times out it is because there were connectivity > >> issues between the nodes. 
Since you can log in to the hosts, you might > >> want to do some basic network troubleshooting, such as: > >> > >> $ ip address # check to see that all the interfaces are there, and > >> that the IP addresses have been assigned > >> > >> $ sudo ovs-vsctl show # make sure that the bridges have the proper > >> interfaces, vlans, and that all the expected bridges show up > >> > >> $ ping # you can try this on all VLANs to make > >> sure that any VLAN trunks are working properly > >> > >> $ sudo ovs-appctl bond/show # if running bonding, check to see the > >> bond status > >> > >> $ sudo os-net-config --debug -c /etc/os-net-config/config.json # run > >> the network configuration script again to make sure that it is able to > >> configure the interfaces without error. WARNING, MAY BE DISRUPTIVE as > >> this will reset the network interfaces, run on console if possible. > >> > >> However, I want to first double-check that you had a valid command > >> line. You only show "openstack deploy overcloud --templates" in your > >> original email. You did have a full command-line, right? Refer to the > >> official installation guide for the right parameters. > >> > >> > >> -- > >> --------------------------------------------- > >> Erming Pei, Ph.D > >> Senior System Analyst; Grid/Cloud Specialist > >> > >> Research Computing Group > >> Information Services & Technology > >> University of Alberta, Canada > >> > >> Tel: +1 7804929914 Fax: +1 7804921729 > >> --------------------------------------------- > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- > --------------------------------------------- > Erming Pei, Ph.D > Senior System Analyst; Grid/Cloud Specialist > > Research Computing Group > Information Services & Technology > University of Alberta, Canada > > Tel: +1 7804929914 Fax: +1 7804921729 > --------------------------------------------- > > From celik.esra at tubitak.gov.tr Fri Oct 16 05:40:16 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Fri, 16 Oct 2015 08:40:16 +0300 (EEST) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> Message-ID: <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> Hi Sasha, I have 3 nodes, 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute This is my undercloud.conf file: image_path = . 
local_ip = 192.0.2.1/24
local_interface = em2
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
network_cidr = 192.0.2.0/24
network_gateway = 192.0.2.1
inspection_interface = br-ctlplane
inspection_iprange = 192.0.2.100,192.0.2.120
inspection_runbench = false
undercloud_debug = true
enable_tuskar = false
enable_tempest = false

The IP configuration for the Undercloud is as follows:

[stack at undercloud ~]$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP qlen 1000
    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
       valid_lft forever preferred_lft forever
3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
4: ovs-system: mtu 1500 qdisc noop state DOWN
    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
       valid_lft forever preferred_lft forever
    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
       valid_lft forever preferred_lft forever
6: br-int: mtu 1500 qdisc noop state DOWN
    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff

And I attached two screenshots showing the boot stage for the overcloud
nodes. Is there a way to log in to the overcloud nodes to see their IP
configuration?

Thanks

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Original Message -----
> From: "Sasha Chuzhoy"
> To: "Esra Celik"
> Cc: "Marius Cornea" , rdo-list at redhat.com
> Sent: Thursday, 15 October 2015 16:58:41
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was
> found"
>
> Just my 2 cents.
> Did you make sure that all the registered nodes are configured to boot off
> the right NIC first?
> Can you watch the console and see what happens on the problematic nodes upon
> boot?
> Best regards,
> Sasha Chuzhoy.
>
> ----- Original Message -----
> > From: "Esra Celik"
> > To: "Marius Cornea"
> > Cc: rdo-list at redhat.com
> > Sent: Thursday, October 15, 2015 4:40:46 AM
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > was found"
> >
> > Sorry for the late reply
> >
> > ironic node-show results are below. I have my nodes power on after
> > introspection bulk start.
And I get the following warning > > Introspection didn't finish for nodes > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > Doesn't seem to be the same issue with > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > | Maintenance | > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | available > > | | > > | False | > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | available > > | | > > | False | > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > +------------------------+-------------------------------------------------------------------------+ > > | Property | Value | > > +------------------------+-------------------------------------------------------------------------+ > > | target_power_state | None | > > | extra | {} | > > | last_error | None | > > | updated_at | 2015-10-15T08:26:42+00:00 | > > | maintenance_reason | None | > > | provision_state | available | > > | clean_step | {} | > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > > | console_enabled | False | > > | target_provision_state | None | > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > | maintenance | False | > > | inspection_started_at | None | > > | inspection_finished_at | None | > > | power_state | power on | > > | driver | pxe_ipmitool | > > | reservation | None | > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': > > | u'10', | > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > | instance_uuid | None | > > | name | None | > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > | u'192.168.0.18', | > > | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- > > | | | > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > | | 0d88-4632-af98-8defb05ca6e2'} | > > | created_at | 2015-10-15T07:49:08+00:00 | > > | driver_internal_info | {u'clean_steps': None} | > > | chassis_uuid | | > > | instance_info | {} | > > +------------------------+-------------------------------------------------------------------------+ > > > > > > [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > +------------------------+-------------------------------------------------------------------------+ > > | Property | Value | > > +------------------------+-------------------------------------------------------------------------+ > > | target_power_state | None | > > | extra | {} | > > | last_error | None | > > | updated_at | 2015-10-15T08:26:42+00:00 | > > | maintenance_reason | None | > > | provision_state | available | > > | clean_step | {} | > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > > | console_enabled | False | > > | target_provision_state | None | > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > | maintenance | False | > > | inspection_started_at | None | > > | inspection_finished_at | None | > > | power_state | power on | > > | driver | pxe_ipmitool | > > | reservation | None | > 
> | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': > > | u'100', | > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > | instance_uuid | None | > > | name | None | > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > | u'192.168.0.19', | > > | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- > > | | | > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > | | 0d88-4632-af98-8defb05ca6e2'} | > > | created_at | 2015-10-15T07:49:08+00:00 | > > | driver_internal_info | {u'clean_steps': None} | > > | chassis_uuid | | > > | instance_info | {} | > > +------------------------+-------------------------------------------------------------------------+ > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > And below I added my history for the stack user. I don't think I am doing > > something other than > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty > > doc > > > > > > > > > > > > > > > > 1 vi instackenv.json > > 2 sudo yum -y install epel-release > > 3 sudo curl -o /etc/yum.repos.d/delorean.repo > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' > > /etc/yum.repos.d/delorean-current.repo > > 6 sudo /bin/bash -c "cat <>/etc/yum.repos.d/delorean-current.repo > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules > > EOF" > > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > 8 sudo yum -y install yum-plugin-priorities > > 9 sudo yum install -y python-tripleoclient > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample > > ~/undercloud.conf > > 11 vi undercloud.conf > > 12 export DIB_INSTALLTYPE_puppet_modules=source > > 13 openstack undercloud install > > 14 source stackrc > > 15 export NODE_DIST=centos7 > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > > /etc/yum.repos.d/delorean-deps.repo" > > 17 export DIB_INSTALLTYPE_puppet_modules=source > > 18 openstack overcloud image build --all > > 19 ls > > 20 openstack overcloud image upload > > 21 openstack baremetal import --json instackenv.json > > 22 openstack baremetal configure boot > > 23 ironic node-list > > 24 openstack baremetal introspection bulk start > > 25 ironic node-list > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > 28 history > > > > > > > > > > > > > > > > Thanks > > > > > > > > Esra ?EL?K > > T?B?TAK B?LGEM > > www.bilgem.tubitak.gov.tr > > celik.esra at tubitak.gov.tr > > > > > > Kimden: "Marius Cornea" > > Kime: "Esra Celik" > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > G?nderilenler: 14 Ekim ?ar?amba 2015 19:40:07 > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > found" > > > > Can you do ironic node-show for your ironic nodes and post the results? 
> > Also > > check the following suggestion if you're experiencing the same issue: > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > ----- Original Message ----- > > > From: "Esra Celik" > > > To: "Marius Cornea" > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > Sent: Wednesday, October 14, 2015 3:22:20 PM > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > was found" > > > > > > > > > > > > Well in the early stage of the introspection I can see Client IP of nodes > > > (screenshot attached). But then I see continuous ironic-python-agent > > > errors > > > (screenshot-2 attached). Errors repeat after time out.. And the nodes are > > > not powered off. > > > > > > Seems like I am stuck in introspection stage.. > > > > > > I can use ipmitool command to successfully power on/off the nodes > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L > > > ADMINISTRATOR > > > -U > > > root -R 3 -N 5 -P power status > > > Chassis Power is on > > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > chassis power status > > > Chassis Power is on > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > chassis power off > > > Chassis Power Control: Down/Off > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > chassis power status > > > Chassis Power is off > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > chassis power on > > > Chassis Power Control: Up/On > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > chassis power status > > > Chassis Power is on > > > > > > > > > Esra ?EL?K > > > T?B?TAK B?LGEM > > > www.bilgem.tubitak.gov.tr > > > celik.esra at tubitak.gov.tr > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > Kimden: "Marius Cornea" > > > Kime: "Esra Celik" > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > G?nderilenler: 14 Ekim ?ar?amba 2015 14:59:30 > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > > found" > > > > > > > > > ----- Original Message ----- > > > > From: "Esra Celik" > > > > To: "Marius Cornea" > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > > host > > > > was found" > > > > > > > > > > > > Well today I started with re-installing the OS and nothing seems wrong > > > > with > > > > undercloud installation, then; > > > > > > > > > > > > > > > > > > > > > > > > > > > > I see an error during image build > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > > > ... > > > > a lot of log > > > > ... 
> > > > ++ cat /etc/dib_dracut_drivers > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk > > > > ifconfig > > > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell > > > > rd.debug > > > > rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ > > > > / > > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net > > > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock > > > > target_core_file target_core_pscsi configfs' -o 'dash plymouth' > > > > /tmp/ramdisk > > > > cat: write error: Broken pipe > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > > > + chmod o+r /tmp/kernel > > > > + trap EXIT > > > > + target_tag=99-build-dracut-ramdisk > > > > + date +%s.%N > > > > + output '99-build-dracut-ramdisk completed' > > > > ... > > > > a lot of log > > > > ... > > > > > > You can ignore that afaik, if you end up having all the required images > > > it > > > should be ok. > > > > > > > > > > > Then, during introspection stage I see ironic-python-agent errors on > > > > nodes > > > > (screenshot attached) and the following warnings > > > > > > > > > > That looks odd. Is it showing up in the early stage of the introspection? > > > At > > > some point it should receive an address by DHCP and the Network is > > > unreachable error should disappear. Does the introspection complete and > > > the > > > nodes are turned off? > > > > > > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > openstack-ironic-conductor.service > > > > | > > > > grep -i "warning\|error" > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > 10:30:12.119 > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > > > > Option "http_url" from group "pxe" is deprecated. Use option "http_url" > > > > from > > > > group "deploy". > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > 10:30:12.119 > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] > > > > Option "http_root" from group "pxe" is deprecated. Use option > > > > "http_root" > > > > from group "deploy". > > > > > > > > > > > > Before deployment ironic node-list: > > > > > > > > > > This is odd too as I'm expecting the nodes to be powered off before > > > running > > > deployment. 
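If the nodes are indeed supposed to be off at this point, they can be forced
off from the undercloud before retrying. A sketch using the plain ironic CLI,
with a node UUID taken from the listing just below:

    $ ironic node-set-power-state acfc1bb4-469d-479a-af70-c0bdd669b32d off
    $ ironic node-list    # Power State should now read "power off"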
> > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > > > | Maintenance | > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | > > > > | available > > > > | | > > > > | False | > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | > > > > | available > > > > | | > > > > | False | > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > During deployment I get following errors > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > openstack-ironic-conductor.service > > > > | > > > > grep -i "warning\|error" > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > 11:29:01.739 > > > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while > > > > attempting > > > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 > > > > -f > > > > /tmp/tmpSCKHIv power status"for node > > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553. > > > > Error: Unexpected error while running command. > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > 11:29:01.739 > > > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status > > > > failed > > > > for > > > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error > > > > while > > > > running command. > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > 11:29:01.740 > > > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could > > > > not > > > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt > > > > 1 > > > > of > > > > 3. Error: IPMI call failed: power status.. > > > > > > > > > > This looks like an ipmi error, can you try to manually run commands using > > > the > > > ipmitool and see if you get any success? It's also worth filing a bug > > > with > > > details such as the ipmitool version, server model, drac firmware > > > version. > > > > > > > > > > > > > > > > > > > > > > > > > > > Thanks a lot > > > > > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > Kimden: "Marius Cornea" > > > > Kime: "Esra Celik" > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > > G?nderilenler: 13 Ekim Sal? 2015 21:16:14 > > > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > valid > > > > host was found" > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Esra Celik" > > > > > To: "Marius Cornea" > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > valid > > > > > host was found" > > > > > > > > > > During deployment they are powering on and deploying the images. I > > > > > see > > > > > lot > > > > > of > > > > > connection error messages about ironic-python-agent but ignore them > > > > > as > > > > > mentioned here > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) > > > > > > > > That was referring to the introspection stage. 
From what I can tell you > > > > are > > > > experiencing issues during deployment as it fails to provision the nova > > > > instances, can you check if during that stage the nodes get powered on? > > > > > > > > Make sure that before overcloud deploy the ironic nodes are available > > > > for > > > > provisioning (ironic node-list and check the provisioning state > > > > column). > > > > Also check that you didn't miss any step in the docs in regards to > > > > kernel > > > > and ramdisk assignment, introspection, flavor creation(so it matches > > > > the > > > > nodes resources) > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > > > > > > > > > > > > In instackenv.json file I do not need to add the undercloud node, or > > > > > do > > > > > I? > > > > > > > > No, the nodes details should be enough. > > > > > > > > > And which log files should I watch during deployment? > > > > > > > > You can check the openstack-ironic-conductor logs(journalctl -fl -u > > > > openstack-ironic-conductor.service) and the logs in /var/log/nova. > > > > > > > > > Thanks > > > > > Esra > > > > > > > > > > > > > > > ----- Orijinal Mesaj -----Kimden: Marius Cornea > > > > > Kime: > > > > > Esra Celik Kk: Ignacio Bravo > > > > > , rdo-list at redhat.comGönderilenler: Tue, > > > > > 13 > > > > > Oct > > > > > 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy fails > > > > > with > > > > > error "No valid host was found" > > > > > > > > > > ----- Original Message -----> From: "Esra Celik" > > > > > > > > > > > To: "Ignacio Bravo" > Cc: rdo-list at redhat.com> > > > > > Sent: > > > > > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] > > > > > OverCloud > > > > > deploy fails with error "No valid host was found"> > > > Actually I > > > > > re-installed the OS for Undercloud before deploying. However I did> > > > > > not > > > > > re-install the OS in Compute and Controller nodes.. I will reinstall> > > > > > basic > > > > > OS for them too, and retry.. > > > > > > > > > > You don't need to reinstall the OS on the controller and compute, > > > > > they > > > > > will > > > > > get the image served by the undercloud. I'd recommend that during > > > > > deployment > > > > > you watch the servers console and make sure they get powered on, pxe > > > > > boot, > > > > > and actually get the image deployed. > > > > > > > > > > Thanks > > > > > > > > > > > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: > > > > > > "Ignacio > > > > > > Bravo" > Kime: "Esra Celik" > > > > > > > Kk: rdo-list at redhat.com> > > > > > > Gönderilenler: > > > > > > 13 Ekim Sal? 
2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy > > > > > > fails > > > > > > with error "No valid host was> found"> > Esra,> > I encountered the > > > > > > same > > > > > > problem after deleting the stack and re-deploying.> > It turns out > > > > > > that > > > > > > 'heat stack-delete overcloud’ does remove the nodes from> > > > > > > ‘nova list’ and one would assume that the baremetal > > > > > > servers > > > > > > are now ready to> be used for the next stack, but when redeploying, > > > > > > I > > > > > > get > > > > > > the same message of> not enough hosts available.> > You can look > > > > > > into > > > > > > the > > > > > > nova logs and it mentions something about ‘node xxx is> > > > > > > already > > > > > > associated with UUID yyyy’ and ‘I tried 3 times and > > > > > > I’m > > > > > > giving up’.> The issue is that the UUID yyyy belonged to a > > > > > > prior > > > > > > unsuccessful deployment.> > I’m now redeploying the basic OS > > > > > > to > > > > > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG Federal, > > > > > > Inc> > > > > > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, 2015, > > > > > > at > > > > > > 9:25 > > > > > > AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > > > > > > > OverCloud deploy fails with error "No valid host was found"> > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates> > > > > > > Deploying > > > > > > templates in the directory> > > > > > > /usr/share/openstack-tripleo-heat-templates> > > > > > > Stack failed with status: Resource CREATE failed: > > > > > > resources.Compute:> > > > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status > > > > > > ERROR> > > > > > > due to "Message: No valid host was found. 
There are not enough > > > > > > hosts> > > > > > > available., Code: 500"> Heat Stack create failed.> > Here are some > > > > > > logs:> > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE > > > > > > > Tue > > > > > > > Oct > > > > > > 13> 16:18:17 2015> > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > | resource_name | physical_resource_id | resource_type | > > > > > > | resource_status > > > > > > |> | updated_time | stack_name |> > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | > > > > > > | OS::Heat::ResourceGroup > > > > > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | > > > > > > |> | Controller > > > > > > |> | | > > > > > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | | > > > > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > > > > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> | > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > > > > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > > > > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | > > > > > > CREATE_FAILED | 2015-10-13T10:20:54 | > > > > > > overcloud-Compute-vqk632ysg64r > > > > > > |> > > > > > > | > > > > > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | > > > > > > OS::Nova::Server > > > > > > |> > > > > > > | > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > > > > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute | > > > > > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | > > > > > > CREATE_FAILED > > > > > > | 2015-10-13T10:20:56 |> | > > > > > > | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > > > > > |> > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute> > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > | Property | Value |> > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > | attributes | { |> | | "attributes": null, |> | | "refs": null |> > > > > > > | | > > > > > > | | > > > > > > | } > > > > > > |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | > > > > > > |> | links > > > > > > |> | |> > > > > > > | > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > > > | (self) |> | | > > > > > > 
http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > > > | | (stack) |> | | > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > > > | | (nested) |> | logical_resource_id | Compute |> | > > > > > > | | physical_resource_id > > > > > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | | > > > > > > ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment |> > > > > > > | > > > > > > | > > > > > > AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | > > > > > > Compute > > > > > > |> > > > > > > | resource_status | CREATE_FAILED |> | resource_status_reason | > > > > > > resources.Compute: ResourceInError:> | > > > > > > resources[0].resources.NovaCompute: > > > > > > Went to status ERROR due to "Message:> | No valid host was found. > > > > > > There > > > > > > are not enough hosts available., Code: 500"> | |> | resource_type | > > > > > > OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 |> > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > This is my instackenv.json for 1 compute and 1 control node > > > > > > > > > to > > > > > > > > > be > > > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> "mac":[> > > > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> > > > > > > "disk":"10",> > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> > > > > > > "mac":[> > > > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> > > > > > > "disk":"100",> > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? Thanks in > > > > > > advance> > > > > > > > > > > > > > > > > > > > > Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> > > > > > > celik.esra at tubitak.gov.tr> > > > > > > > _______________________________________________> Rdo-list mailing > > > > > > list> > > > > > > Rdo-list at redhat.com> > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > > > > > > > _______________________________________________> Rdo-list mailing > > > > > > list> > > > > > > Rdo-list at redhat.com> > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rdo-introspection-screenshot-2.png Type: image/png Size: 146063 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: rdo-introspection-screenshot.png
Type: image/png
Size: 96977 bytes
Desc: not available
URL: 

From shayne.alone at gmail.com  Fri Oct 16 07:19:04 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Fri, 16 Oct 2015 10:49:04 +0330
Subject: [Rdo-list] Overcloud nodes host-name persistence
Message-ID: 

It seems that if you restart the bare metal servers, there is a problem
after boot: the node registers itself with a different name than the one
used during deployment!

As the attachment shows, the failed compute node is registered with the
new name:
=> overcloud-novacompute-2.localdomain
as it was
=> overcloud-novacompute-2
during deployment!

I check:
```
[heat-admin at overcloud-novacompute-2 ~]$ sudo hostnamectl set-hostname --transient overcloud-novacompute-2
[heat-admin at overcloud-novacompute-2 ~]$ sudo hostnamectl set-hostname overcloud-novacompute-2
```
but after a reboot the server again appends localdomain to its hostname

Sincerely,
Ali R. Taleghani
@linkedIn
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot from 2015-10-16 09:44:47.png
Type: image/png
Size: 140773 bytes
Desc: not available
URL: 

From shayne.alone at gmail.com  Fri Oct 16 07:22:53 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Fri, 16 Oct 2015 10:52:53 +0330
Subject: [Rdo-list] Overcloud nodes host-name persistence
In-Reply-To: 
References: 
Message-ID: 

If you update the hostname via hostnamectl and restart the nova-compute
service without a reboot, it works! But rebooting puts the wrong hostname
back in place :-/

Sincerely,
Ali R. Taleghani
@linkedIn

On Fri, Oct 16, 2015 at 10:49 AM, AliReza Taleghani wrote:

> It seems that if you restart the bare metal servers, there is a problem
> after boot: the node registers itself with a different name than the one
> used during deployment!
>
> As the attachment shows, the failed compute node is registered with the
> new name:
> => overcloud-novacompute-2.localdomain
> as it was
> => overcloud-novacompute-2
> during deployment!
>
> I check:
> ```
> [heat-admin at overcloud-novacompute-2 ~]$ sudo hostnamectl set-hostname
> --transient overcloud-novacompute-2
> [heat-admin at overcloud-novacompute-2 ~]$ sudo hostnamectl set-hostname
> overcloud-novacompute-2
> ```
> but after a reboot the server again appends localdomain to its hostname
>
> Sincerely,
> Ali R. Taleghani
> @linkedIn
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shardy at redhat.com  Fri Oct 16 07:52:06 2015
From: shardy at redhat.com (Steven Hardy)
Date: Fri, 16 Oct 2015 08:52:06 +0100
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud
 deployment
In-Reply-To: <900691934.58686512.1444967422246.JavaMail.zimbra@redhat.com>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com>
 <5620068E.1020202@ualberta.ca>
 <1377476884.58530663.1444943986933.JavaMail.zimbra@redhat.com>
 <56202F11.4020505@ualberta.ca>
 <900691934.58686512.1444967422246.JavaMail.zimbra@redhat.com>
Message-ID: <20151016075205.GA20570@t430slt.redhat.com>

On Thu, Oct 15, 2015 at 11:50:22PM -0400, Sasha Chuzhoy wrote:
> Hi Erming,
> So I tried to reproduce your issue by setting the passwords in the auth
> section of the undercloud file. My deployment completed successfully,
> although I ran into https://bugzilla.redhat.com/show_bug.cgi?id=1271289. 
> 
> The warnings (not errors) are on the successfully deployed node too:
> Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.845 1634 WARNING os-collect-config [-] Source [request] Unavailable.
> Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.845 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
> Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.846 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
> Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.847 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
> Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.167 1634 WARNING os-collect-config [-] Source [request] Unavailable.
> Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
> Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
> Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
> Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.150 1634 WARNING os-collect-config [-] Source [request] Unavailable.
> Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.153 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
> Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.153 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
> Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.154 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.

Yeah these warnings are OK - if you look in /etc/os-collect-config.conf on
the nodes, you will see that we only configure the "cfn" collector, and
these warnings are coming from the other (unconfigured) collectors.

Perhaps we should look at making those a bit quieter, as you only need one
configured datasource for os-collect-config to work.

If you want to prove basic network connectivity for os-collect-config, in
addition to checking the logs you can do something like:

http://fpaste.org/279853/98174414/

Here we see we can ping the configured IP, and wget shows we get a
response from the heat API on port 8000 (the IncompleteSignature is OK,
because we didn't include a signature in the request, but anything like
no route to host is obviously not OK).

Re the original "Authentication failed. Please try again" error - Dan is
correct, that nearly always happens due to a timeout, e.g. due to the
keystone token expiring. Again a clearer error would help here. After we
fix [1] this will be easier, because you'll always hit the heat stack
timeout instead of the keystone token expiring.
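For reference, the check described above might look something like this when run on one of the overcloud nodes - a minimal sketch, assuming the default undercloud metadata IP of 192.0.2.1 (take the real URL from the cfn section of os-collect-config.conf on the node):

```
# Confirm which metadata URL the cfn collector is polling
sudo grep -A 5 '\[cfn\]' /etc/os-collect-config.conf

# Basic reachability of the undercloud (192.0.2.1 is an assumption here)
ping -c 3 192.0.2.1

# Any HTTP response from heat-api-cfn on port 8000 proves connectivity;
# an IncompleteSignature error in the body is expected and harmless
# (curl, unlike plain wget, prints the error body).
curl -s http://192.0.2.1:8000/v1/ ; echo
```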
[1] https://bugs.launchpad.net/heat/+bug/1306294 Steve From christian at berendt.io Fri Oct 16 08:50:44 2015 From: christian at berendt.io (Christian Berendt) Date: Fri, 16 Oct 2015 10:50:44 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> Message-ID: <5620BA64.6090501@berendt.io> On 10/15/2015 04:43 PM, Steve Gordon wrote: > To get this updated what is required is for someone to walk through the draft install guide [1] vetting each procedure on each target distro and updating the test matrix here: > > https://wiki.openstack.org/wiki/Documentation/LibertyDocTesting > > We also need to determine which "known issues" need to be resolved, and document + file bugs for any new ones that pop up (and ideally resolve them). In future as part of the RDO test day I think we should: Please add me to bug reports you file for openstack-manuals. I will try to take care of them ASAP. > a) Broadcast where the correct packages are more widely (e.g. include openstack-docs at lists.openstack.org in the distribution list for the test day). There seems to be a contention that they weren't available or were available but were mixed up with Mitaka packages which was true at a point in time but was quickly resolved (there was an issue with the config files being shipped though). Which packages should be used for testing Liberty? In the guide we wrote that http://rdo.fedorapeople.org/openstack-liberty/rdo-release-liberty.rpm should be used. This repository is not yet available, so we cannot test with this repository. https://repos.fedorapeople.org/repos/openstack/openstack-liberty/testing/ does not contain any RPM packages. Not sure why this directory is there. > b) Integrate the install guide test matrix into the test day so that we (the RDO community) can help drive vetting it earlier. This is a good idea. Christian. -- Christian Berendt Cloud Solution Architect Mail: berendt at b1-systems.de B1 Systems GmbH Osterfeldstra?e 7 / 85088 Vohburg / http://www.b1-systems.de GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537 From mcornea at redhat.com Fri Oct 16 09:55:51 2015 From: mcornea at redhat.com (Marius Cornea) Date: Fri, 16 Oct 2015 05:55:51 -0400 (EDT) Subject: [Rdo-list] Overcloud nodes host-name persistence In-Reply-To: References: Message-ID: <894650899.43170564.1444989351698.JavaMail.zimbra@redhat.com> Nice catch, I was able to reproduce it on my system and reported it here: https://bugzilla.redhat.com/show_bug.cgi?id=1272376 ----- Original Message ----- > From: "AliReza Taleghani" > To: rdo-list at redhat.com > Sent: Friday, October 16, 2015 9:22:53 AM > Subject: Re: [Rdo-list] Overcloud nodes host-name persistence > > if you update hostname via hostnamectl and restart nova-compute service > without reboot it works! but rebooting will cause new hostname is place :-/ > > Sincerely, > Ali R. Taleghani > @linkedIn > > On Fri, Oct 16, 2015 at 10:49 AM, AliReza Taleghani < shayne.alone at gmail.com > > wrote: > > > > It seem that If you restart bare metal servers! there will be a problem after > boot that the node will be register it self with a different name rather > than the one used on deployment! 
> > as the attachment shown the failed compute node is registered with new
> > name:
> > => overcloud-novacompute-2.localdomain
> > as it was
> > => overcloud-novacompute-2
> > during deployment!
> >
> > I check:
> > ```
> > [heat-admin at overcloud-novacompute-2 ~]$ sudo hostnamectl set-hostname
> > --transient overcloud-novacompute-2
> > [heat-admin at overcloud-novacompute-2 ~]$ sudo hostnamectl set-hostname
> > overcloud-novacompute-2
> > ```
> > but after a reboot the server again appends localdomain to its hostname
> >
> > Sincerely,
> > Ali R. Taleghani
> > @linkedIn
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com

From shayne.alone at gmail.com  Fri Oct 16 09:59:42 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Fri, 16 Oct 2015 09:59:42 +0000
Subject: [Rdo-list] Overcloud nodes host-name persistence
In-Reply-To: <894650899.43170564.1444989351698.JavaMail.zimbra@redhat.com>
References: <894650899.43170564.1444989351698.JavaMail.zimbra@redhat.com>
Message-ID: 

I think this is related to the neutron DHCP agent config inside the
undercloud... the management interfaces are in dhcp-client mode on the
bare metals, so on each boot I think they take their hostname via DHCP...
or something like that :-/

On Fri, Oct 16, 2015, 13:25 Marius Cornea wrote:

> Nice catch, I was able to reproduce it on my system and reported it here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1272376

-- 
Sincerely,
Ali R. Taleghani
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
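If the DHCP theory above holds, one place worth checking on the undercloud is the domain its DHCP agent hands out - a speculative sketch for testing it, not a confirmed fix; the dhcp_domain option and file location are assumptions about the Liberty-era neutron DHCP agent:

```
# See what domain dnsmasq is configured to append to assigned hostnames
sudo grep -r 'dhcp_domain' /etc/neutron/

# If it shows e.g. dhcp_domain = localdomain, blanking it and restarting
# the agent would be one way to test the theory:
sudo sed -i 's/^dhcp_domain.*/dhcp_domain =/' /etc/neutron/dhcp_agent.ini
sudo systemctl restart neutron-dhcp-agent
```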
From christian at berendt.io  Fri Oct 16 11:26:11 2015
From: christian at berendt.io (Christian Berendt)
Date: Fri, 16 Oct 2015 13:26:11 +0200
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: 
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
 <561FA38F.6050508@redhat.com>
 <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
 <5620BA64.6090501@berendt.io>
Message-ID: <5620DED3.3090102@berendt.io>

On 10/16/2015 12:59 PM, Mohammed Arafa wrote:
> hint: the
> http://rdo.fedorapeople.org/openstack-liberty/rdo-release-liberty.rpm
> should have a note at the top where to log bugs against the docs

This file is not yet available.

> ps. just for my sanity. these are the liberty docs?
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/

We are talking about two different sets of documentation. The focus of
this thread is the OpenStack installation guide provided on
docs.openstack.org. It is currently available at
http://docs.openstack.org/draft/install-guide-rdo/.

Bugs for this guide can be filed here:
https://bugs.launchpad.net/openstack-manuals/.

The sources of this guide are available in the openstack-manuals
repository (http://git.openstack.org/cgit/openstack/openstack-manuals/).

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

From apevec at gmail.com  Fri Oct 16 11:54:59 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 16 Oct 2015 13:54:59 +0200
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: <5620BA64.6090501@berendt.io>
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
 <561FA38F.6050508@redhat.com>
 <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
 <5620BA64.6090501@berendt.io>
Message-ID: 

2015-10-16 10:50 GMT+02:00 Christian Berendt :
> Please add me to bug reports you file for openstack-manuals. I will try
> to take care of them ASAP.

Will do, thanks!

> Which packages should be used for testing Liberty? In the guide we wrote
> that
> http://rdo.fedorapeople.org/openstack-liberty/rdo-release-liberty.rpm
> should be used. This repository is not yet available, so we cannot test
> with this repository.

RDO Liberty will be in the CentOS Cloud SIG repo. The testing repo (in
the process of being updated from RC to GA builds) is at
http://buildlogs.centos.org/centos/7/cloud/openstack-liberty/
and it will be available signed at the production location
http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/ by
Wednesday next week.

> https://repos.fedorapeople.org/repos/openstack/openstack-liberty/testing/ does
> not contain any RPM packages. Not sure why this directory is there.

That was going to be the old setup; I'll remove it now and probably put a
redirect to the centos sig repo in place.

Cheers,
Alan
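Once the signed repo is live at that location, enabling it on CentOS 7 should amount to roughly the following - a sketch built from the URLs above; the release-package name in the first option is an assumption and may not exist yet:

```
# Option 1: a release package from CentOS Extras, if/when it is published
sudo yum install -y centos-release-openstack-liberty

# Option 2: point yum at the Cloud SIG location directly
sudo tee /etc/yum.repos.d/rdo-liberty.repo <<'EOF'
[rdo-liberty]
name=RDO Liberty (CentOS Cloud SIG)
baseurl=http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/
# gpgcheck disabled only because the signing key location is not given
# above; enable it once the signed repo and key are published.
gpgcheck=0
enabled=1
EOF
```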
From christian at berendt.io  Fri Oct 16 11:58:57 2015
From: christian at berendt.io (Christian Berendt)
Date: Fri, 16 Oct 2015 13:58:57 +0200
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: 
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
 <561FA38F.6050508@redhat.com>
 <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
 <5620BA64.6090501@berendt.io>
Message-ID: <5620E681.10907@berendt.io>

On 10/16/2015 01:54 PM, Alan Pevec wrote:
> will be available signed at the production location
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/ by
> Wednesday next week.

Will packages for Fedora also be available on Wednesday?

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

From apevec at gmail.com  Fri Oct 16 12:05:03 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 16 Oct 2015 14:05:03 +0200
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: <5620E681.10907@berendt.io>
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
 <561FA38F.6050508@redhat.com>
 <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
 <5620BA64.6090501@berendt.io> <5620E681.10907@berendt.io>
Message-ID: 

> Will packages for Fedora also be available on Wednesday?

Nope, for Fedora starting with Liberty we're providing only Delorean
Trunk packages for openstack-* services:
https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora

Cheers,
Alan

From christian at berendt.io  Fri Oct 16 12:13:04 2015
From: christian at berendt.io (Christian Berendt)
Date: Fri, 16 Oct 2015 14:13:04 +0200
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: 
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
 <561FA38F.6050508@redhat.com>
 <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
 <5620BA64.6090501@berendt.io> <5620E681.10907@berendt.io>
Message-ID: <5620E9D0.90903@berendt.io>

On 10/16/2015 02:05 PM, Alan Pevec wrote:
> Nope, for Fedora starting with Liberty we're providing only Delorean
> Trunk packages for openstack-* services:
> https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora

This means that we have to remove Fedora from the official installation
guide, because we only document the usage of released packages (Liberty)
there. I will propose a review request to remove Fedora and to add a note
that only Delorean packages can be used on Fedora.

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

From mohammed.arafa at gmail.com  Fri Oct 16 12:20:08 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Fri, 16 Oct 2015 08:20:08 -0400
Subject: [Rdo-list] [rdo-manager] rdo-manager documentation
Message-ID: 

Hi

There is a thread about openstack documentation on installing openstack
from rpm packages. I am asking about RDO-Manager documentation
specifically:

a) Are these the official docs?
Liberty https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/
Mitaka http://docs.openstack.org/developer/tripleo-docs/

b) Where are bugs against the docs to be reported?
-- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Oct 16 12:37:08 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 16 Oct 2015 14:37:08 +0200 Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <370767486.42538011.1444905632463.JavaMail.zimbra@redhat.com> References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <370767486.42538011.1444905632463.JavaMail.zimbra@redhat.com> Message-ID: <5620EF74.8020708@redhat.com> On 10/15/2015 12:40 PM, Marius Cornea wrote: > Dmitry, any recommendation for this kind of scenario? It looks like introspection is stuck and the nodes are kept powered on. It would be great for debugging purposes to get shell access via the console and check what went wrong. Is this possible at this time? I can have a lot of reasons, the most popular is nodes having several NIC's. Other possible ideas can be found at https://github.com/openstack/ironic-inspector#introspection-times-out > > Thanks, > Marius > > ----- Original Message ----- >> From: "Esra Celik" >> To: "Marius Cornea" >> Cc: "Ignacio Bravo" , rdo-list at redhat.com >> Sent: Thursday, October 15, 2015 10:40:46 AM >> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" >> >> >> Sorry for the late reply >> >> ironic node-show results are below. I have my nodes power on after >> introspection bulk start. And I get the following warning >> Introspection didn't finish for nodes >> 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 >> >> Doesn't seem to be the same issue with >> https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html >> >> >> >> >> [stack at undercloud ~]$ ironic node-list >> +--------------------------------------+------+---------------+-------------+--------------------+-------------+ >> | UUID | Name | Instance UUID | Power State | Provisioning State | >> | Maintenance | >> +--------------------------------------+------+---------------+-------------+--------------------+-------------+ >> | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | available | >> | False | >> | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | available | >> | False | >> +--------------------------------------+------+---------------+-------------+--------------------+-------------+ >> >> >> [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed >> +------------------------+-------------------------------------------------------------------------+ >> | Property | Value | >> +------------------------+-------------------------------------------------------------------------+ >> | target_power_state | None | >> | extra | {} | >> | last_error | None | >> | updated_at | 2015-10-15T08:26:42+00:00 | >> | maintenance_reason | None | >> | provision_state | available | >> | clean_step | {} | >> | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | >> | console_enabled | False | >> | target_provision_state | None | >> | provision_updated_at | 2015-10-15T08:26:42+00:00 | >> | maintenance | False | >> | inspection_started_at | None | >> | inspection_finished_at | None | >> | power_state | power on | >> | driver | pxe_ipmitool | >> | reservation | None | >> | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': >> | u'10', | >> | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | >> | instance_uuid | None | >> 
| name | None | >> | driver_info | {u'ipmi_password': u'******', u'ipmi_address': >> | u'192.168.0.18', | >> | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- | >> | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | >> | | 0d88-4632-af98-8defb05ca6e2'} | >> | created_at | 2015-10-15T07:49:08+00:00 | >> | driver_internal_info | {u'clean_steps': None} | >> | chassis_uuid | | >> | instance_info | {} | >> +------------------------+-------------------------------------------------------------------------+ >> >> >> [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 >> +------------------------+-------------------------------------------------------------------------+ >> | Property | Value | >> +------------------------+-------------------------------------------------------------------------+ >> | target_power_state | None | >> | extra | {} | >> | last_error | None | >> | updated_at | 2015-10-15T08:26:42+00:00 | >> | maintenance_reason | None | >> | provision_state | available | >> | clean_step | {} | >> | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | >> | console_enabled | False | >> | target_provision_state | None | >> | provision_updated_at | 2015-10-15T08:26:42+00:00 | >> | maintenance | False | >> | inspection_started_at | None | >> | inspection_finished_at | None | >> | power_state | power on | >> | driver | pxe_ipmitool | >> | reservation | None | >> | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': >> | u'100', | >> | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | >> | instance_uuid | None | >> | name | None | >> | driver_info | {u'ipmi_password': u'******', u'ipmi_address': >> | u'192.168.0.19', | >> | | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- | >> | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | >> | | 0d88-4632-af98-8defb05ca6e2'} | >> | created_at | 2015-10-15T07:49:08+00:00 | >> | driver_internal_info | {u'clean_steps': None} | >> | chassis_uuid | | >> | instance_info | {} | >> +------------------------+-------------------------------------------------------------------------+ >> [stack at undercloud ~]$ >> >> >> >> >> >> >> >> >> >> And below I added my history for the stack user. 
I don't think I am doing >> something other than >> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty >> doc >> >> >> >> >> >> >> >> 1 vi instackenv.json >> 2 sudo yum -y install epel-release >> 3 sudo curl -o /etc/yum.repos.d/delorean.repo >> http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo >> 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo >> http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo >> 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' >> /etc/yum.repos.d/delorean-current.repo >> 6 sudo /bin/bash -c "cat <>/etc/yum.repos.d/delorean-current.repo >> >> includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules >> EOF" >> 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo >> http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo >> 8 sudo yum -y install yum-plugin-priorities >> 9 sudo yum install -y python-tripleoclient >> 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf >> 11 vi undercloud.conf >> 12 export DIB_INSTALLTYPE_puppet_modules=source >> 13 openstack undercloud install >> 14 source stackrc >> 15 export NODE_DIST=centos7 >> 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo >> /etc/yum.repos.d/delorean-deps.repo" >> 17 export DIB_INSTALLTYPE_puppet_modules=source >> 18 openstack overcloud image build --all >> 19 ls >> 20 openstack overcloud image upload >> 21 openstack baremetal import --json instackenv.json >> 22 openstack baremetal configure boot >> 23 ironic node-list >> 24 openstack baremetal introspection bulk start >> 25 ironic node-list >> 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed >> 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 >> 28 history >> >> >> >> >> >> >> >> Thanks >> >> >> >> Esra ?EL?K >> T?B?TAK B?LGEM >> www.bilgem.tubitak.gov.tr >> celik.esra at tubitak.gov.tr >> >> ----- Orijinal Mesaj ----- >> >> Kimden: "Marius Cornea" >> Kime: "Esra Celik" >> Kk: "Ignacio Bravo" , rdo-list at redhat.com >> G?nderilenler: 14 Ekim ?ar?amba 2015 19:40:07 >> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was >> found" >> >> Can you do ironic node-show for your ironic nodes and post the results? Also >> check the following suggestion if you're experiencing the same issue: >> https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html >> >> ----- Original Message ----- >>> From: "Esra Celik" >>> To: "Marius Cornea" >>> Cc: "Ignacio Bravo" , rdo-list at redhat.com >>> Sent: Wednesday, October 14, 2015 3:22:20 PM >>> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host >>> was found" >>> >>> >>> >>> Well in the early stage of the introspection I can see Client IP of nodes >>> (screenshot attached). But then I see continuous ironic-python-agent errors >>> (screenshot-2 attached). Errors repeat after time out.. And the nodes are >>> not powered off. >>> >>> Seems like I am stuck in introspection stage.. 
>>> >>> I can use ipmitool command to successfully power on/off the nodes >>> >>> >>> >>> [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR >>> -U >>> root -R 3 -N 5 -P power status >>> Chassis Power is on >>> >>> >>> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P >>> chassis power status >>> Chassis Power is on >>> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P >>> chassis power off >>> Chassis Power Control: Down/Off >>> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P >>> chassis power status >>> Chassis Power is off >>> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P >>> chassis power on >>> Chassis Power Control: Up/On >>> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P >>> chassis power status >>> Chassis Power is on >>> >>> >>> Esra ?EL?K >>> T?B?TAK B?LGEM >>> www.bilgem.tubitak.gov.tr >>> celik.esra at tubitak.gov.tr >>> >>> >>> ----- Orijinal Mesaj ----- >>> >>> Kimden: "Marius Cornea" >>> Kime: "Esra Celik" >>> Kk: "Ignacio Bravo" , rdo-list at redhat.com >>> G?nderilenler: 14 Ekim ?ar?amba 2015 14:59:30 >>> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was >>> found" >>> >>> >>> ----- Original Message ----- >>>> From: "Esra Celik" >>>> To: "Marius Cornea" >>>> Cc: "Ignacio Bravo" , rdo-list at redhat.com >>>> Sent: Wednesday, October 14, 2015 10:49:01 AM >>>> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host >>>> was found" >>>> >>>> >>>> Well today I started with re-installing the OS and nothing seems wrong >>>> with >>>> undercloud installation, then; >>>> >>>> >>>> >>>> >>>> >>>> >>>> I see an error during image build >>>> >>>> >>>> [stack at undercloud ~]$ openstack overcloud image build --all >>>> ... >>>> a lot of log >>>> ... >>>> ++ cat /etc/dib_dracut_drivers >>>> + dracut -N --install ' curl partprobe lsblk targetcli tail head awk >>>> ifconfig >>>> cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell >>>> rd.debug >>>> rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / >>>> --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net >>>> virtio_blk target_core_mod iscsi_target_mod target_core_iblock >>>> target_core_file target_core_pscsi configfs' -o 'dash plymouth' >>>> /tmp/ramdisk >>>> cat: write error: Broken pipe >>>> + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel >>>> + chmod o+r /tmp/kernel >>>> + trap EXIT >>>> + target_tag=99-build-dracut-ramdisk >>>> + date +%s.%N >>>> + output '99-build-dracut-ramdisk completed' >>>> ... >>>> a lot of log >>>> ... >>> >>> You can ignore that afaik, if you end up having all the required images it >>> should be ok. >>> >>>> >>>> Then, during introspection stage I see ironic-python-agent errors on >>>> nodes >>>> (screenshot attached) and the following warnings >>>> >>> >>> That looks odd. Is it showing up in the early stage of the introspection? >>> At >>> some point it should receive an address by DHCP and the Network is >>> unreachable error should disappear. Does the introspection complete and the >>> nodes are turned off? >>> >>>> >>>> >>>> [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service >>>> | >>>> grep -i "warning\|error" >>>> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 >>>> 10:30:12.119 >>>> 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] >>>> Option "http_url" from group "pxe" is deprecated. 
Use option "http_url" >>>> from >>>> group "deploy". >>>> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 >>>> 10:30:12.119 >>>> 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] >>>> Option "http_root" from group "pxe" is deprecated. Use option "http_root" >>>> from group "deploy". >>>> >>>> >>>> Before deployment ironic node-list: >>>> >>> >>> This is odd too as I'm expecting the nodes to be powered off before running >>> deployment. >>> >>>> >>>> >>>> [stack at undercloud ~]$ ironic node-list >>>> +--------------------------------------+------+---------------+-------------+--------------------+-------------+ >>>> | UUID | Name | Instance UUID | Power State | Provisioning State | >>>> | Maintenance | >>>> +--------------------------------------+------+---------------+-------------+--------------------+-------------+ >>>> | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | >>>> | available >>>> | | >>>> | False | >>>> | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | >>>> | available >>>> | | >>>> | False | >>>> +--------------------------------------+------+---------------+-------------+--------------------+-------------+ >>>> >>>> During deployment I get following errors >>>> >>>> [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service >>>> | >>>> grep -i "warning\|error" >>>> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 >>>> 11:29:01.739 >>>> 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting >>>> "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 >>>> -f >>>> /tmp/tmpSCKHIv power status"for node >>>> b5811c06-d5d1-41f1-87b3-2fd55ae63553. >>>> Error: Unexpected error while running command. >>>> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 >>>> 11:29:01.739 >>>> 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed >>>> for >>>> node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error >>>> while >>>> running command. >>>> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 >>>> 11:29:01.740 >>>> 619 WARNING ironic.conductor.manager [-] During sync_power_state, could >>>> not >>>> get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 >>>> of >>>> 3. Error: IPMI call failed: power status.. >>>> >>> >>> This looks like an ipmi error, can you try to manually run commands using >>> the >>> ipmitool and see if you get any success? It's also worth filing a bug with >>> details such as the ipmitool version, server model, drac firmware version. >>> >>>> >>>> >>>> >>>> >>>> >>>> Thanks a lot >>>> >>>> >>>> >>>> ----- Orijinal Mesaj ----- >>>> >>>> Kimden: "Marius Cornea" >>>> Kime: "Esra Celik" >>>> Kk: "Ignacio Bravo" , rdo-list at redhat.com >>>> G?nderilenler: 13 Ekim Sal? 2015 21:16:14 >>>> Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid >>>> host was found" >>>> >>>> >>>> ----- Original Message ----- >>>>> From: "Esra Celik" >>>>> To: "Marius Cornea" >>>>> Cc: "Ignacio Bravo" , rdo-list at redhat.com >>>>> Sent: Tuesday, October 13, 2015 5:02:09 PM >>>>> Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No >>>>> valid >>>>> host was found" >>>>> >>>>> During deployment they are powering on and deploying the images. 
I see >>>>> lot >>>>> of >>>>> connection error messages about ironic-python-agent but ignore them as >>>>> mentioned here >>>>> (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) >>>> >>>> That was referring to the introspection stage. From what I can tell you >>>> are >>>> experiencing issues during deployment as it fails to provision the nova >>>> instances, can you check if during that stage the nodes get powered on? >>>> >>>> Make sure that before overcloud deploy the ironic nodes are available for >>>> provisioning (ironic node-list and check the provisioning state column). >>>> Also check that you didn't miss any step in the docs in regards to kernel >>>> and ramdisk assignment, introspection, flavor creation(so it matches the >>>> nodes resources) >>>> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html >>>> >>>> >>>>> In instackenv.json file I do not need to add the undercloud node, or do >>>>> I? >>>> >>>> No, the nodes details should be enough. >>>> >>>>> And which log files should I watch during deployment? >>>> >>>> You can check the openstack-ironic-conductor logs(journalctl -fl -u >>>> openstack-ironic-conductor.service) and the logs in /var/log/nova. >>>> >>>>> Thanks >>>>> Esra >>>>> >>>>> >>>>> ----- Orijinal Mesaj -----Kimden: Marius Cornea >>>>> Kime: >>>>> Esra Celik Kk: Ignacio Bravo >>>>> , rdo-list at redhat.comGönderilenler: Tue, 13 >>>>> Oct >>>>> 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy fails >>>>> with >>>>> error "No valid host was found" >>>>> >>>>> ----- Original Message -----> From: "Esra Celik" >>>>> > >>>>> To: "Ignacio Bravo" > Cc: rdo-list at redhat.com> >>>>> Sent: >>>>> Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] OverCloud >>>>> deploy fails with error "No valid host was found"> > > > Actually I >>>>> re-installed the OS for Undercloud before deploying. However I did> not >>>>> re-install the OS in Compute and Controller nodes.. I will reinstall> >>>>> basic >>>>> OS for them too, and retry.. >>>>> >>>>> You don't need to reinstall the OS on the controller and compute, they >>>>> will >>>>> get the image served by the undercloud. I'd recommend that during >>>>> deployment >>>>> you watch the servers console and make sure they get powered on, pxe >>>>> boot, >>>>> and actually get the image deployed. >>>>> >>>>> Thanks >>>>> >>>>>> Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> >>>>>> www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: >>>>>> "Ignacio >>>>>> Bravo" > Kime: "Esra Celik" >>>>>> > Kk: rdo-list at redhat.com> >>>>>> Gönderilenler: >>>>>> 13 Ekim Sal? 
2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy >>>>>> fails >>>>>> with error "No valid host was> found"> > Esra,> > I encountered the >>>>>> same >>>>>> problem after deleting the stack and re-deploying.> > It turns out >>>>>> that >>>>>> 'heat stack-delete overcloud’ does remove the nodes from> >>>>>> ‘nova list’ and one would assume that the baremetal >>>>>> servers >>>>>> are now ready to> be used for the next stack, but when redeploying, I >>>>>> get >>>>>> the same message of> not enough hosts available.> > You can look into >>>>>> the >>>>>> nova logs and it mentions something about ‘node xxx is> already >>>>>> associated with UUID yyyy’ and ‘I tried 3 times and >>>>>> I’m >>>>>> giving up’.> The issue is that the UUID yyyy belonged to a >>>>>> prior >>>>>> unsuccessful deployment.> > I’m now redeploying the basic OS to >>>>>> start from scratch again.> > IB> > __> Ignacio Bravo> LTG Federal, >>>>>> Inc> >>>>>> www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, 2015, at >>>>>> 9:25 >>>>>> AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > >>>>>> OverCloud deploy fails with error "No valid host was found"> > >>>>>> [stack at undercloud ~]$ openstack overcloud deploy --templates> >>>>>> Deploying >>>>>> templates in the directory> >>>>>> /usr/share/openstack-tripleo-heat-templates> >>>>>> Stack failed with status: Resource CREATE failed: resources.Compute:> >>>>>> ResourceInError: resources[0].resources.NovaCompute: Went to status >>>>>> ERROR> >>>>>> due to "Message: No valid host was found. There are not enough hosts> >>>>>> available., Code: 500"> Heat Stack create failed.> > Here are some >>>>>> logs:> >>>>>>> Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE >>>>>>> Tue >>>>>>> Oct >>>>>> 13> 16:18:17 2015> > >>>>>> +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> >>>>>> | resource_name | physical_resource_id | resource_type | >>>>>> | resource_status >>>>>> |> | updated_time | stack_name |> >>>>>> +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> >>>>>> | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | >>>>>> | OS::Heat::ResourceGroup >>>>>> |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | Controller >>>>>> |> | | >>>>>> 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | | >>>>>> CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | >>>>>> 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> | >>>>>> CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | >>>>>> overcloud-Controller-45bbw24xxhxs |> | 0 | >>>>>> e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | >>>>>> CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r >>>>>> |> >>>>>> | >>>>>> Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server >>>>>> |> >>>>>> | >>>>>> CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | >>>>>> overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute | >>>>>> 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | >>>>>> CREATE_FAILED >>>>>> | 2015-10-13T10:20:56 |> | >>>>>> | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef 
>>>>>> |> >>>>>> +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> >>>>>>>> [stack at undercloud ~]$ heat resource-show overcloud Compute> >>>>>> +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> >>>>>> | Property | Value |> >>>>>> +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> >>>>>> | attributes | { |> | | "attributes": null, |> | | "refs": null |> | >>>>>> | | >>>>>> | } >>>>>> |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | >>>>>> |> | links >>>>>> |> | |> >>>>>> | >>>>>> http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> >>>>>> | (self) |> | | >>>>>> http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> >>>>>> | | (stack) |> | | >>>>>> http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> >>>>>> | | (nested) |> | logical_resource_id | Compute |> | >>>>>> | | physical_resource_id >>>>>> | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | >>>>>> ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | | >>>>>> ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment |> | >>>>>> | >>>>>> AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | >>>>>> Compute >>>>>> |> >>>>>> | resource_status | CREATE_FAILED |> | resource_status_reason | >>>>>> resources.Compute: ResourceInError:> | >>>>>> resources[0].resources.NovaCompute: >>>>>> Went to status ERROR due to "Message:> | No valid host was found. >>>>>> There >>>>>> are not enough hosts available., Code: 500"> | |> | resource_type | >>>>>> OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 |> >>>>>> +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> >>>>>>>>> This is my instackenv.json for 1 compute and 1 control node to >>>>>>>>> be >>>>>> deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> "mac":[> >>>>>> "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> "disk":"10",> >>>>>> "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> >>>>>> "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> "mac":[> >>>>>> "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> "disk":"100",> >>>>>> "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> >>>>>> "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? 
Thanks in advance> >>>>>>> >>>>>>> >>>>>> Esra ÇEL?K> TÜB?TAK B?LGEM> www.bilgem.tubitak.gov.tr> >>>>>> celik.esra at tubitak.gov.tr> > >>>>>> _______________________________________________> Rdo-list mailing >>>>>> list> >>>>>> Rdo-list at redhat.com> >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list> >>>>>>> >>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > >>>>>> _______________________________________________> Rdo-list mailing >>>>>> list> >>>>>> Rdo-list at redhat.com> >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list> >>>>>>> >>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >>>> >>>> >>> >>> >> >> From shayne.alone at gmail.com Fri Oct 16 12:54:32 2015 From: shayne.alone at gmail.com (AliReza Taleghani) Date: Fri, 16 Oct 2015 12:54:32 +0000 Subject: [Rdo-list] overcloud cinder volume limits Message-ID: I have some error via horizon nofication which points to cinder auth api calls... [image: Screenshot from 2015-10-16 16:13:15.png] I try some configuration change as show in: http://paste.ubuntu.com/12798452/ to apply auth info keystonecontext :-/ but it seems nothing to get solve... -- Sincerely, Ali R. Taleghani -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2015-10-16 16:13:15.png Type: image/png Size: 76343 bytes Desc: not available URL: From shayne.alone at gmail.com Fri Oct 16 12:17:01 2015 From: shayne.alone at gmail.com (AliReza Taleghani) Date: Fri, 16 Oct 2015 12:17:01 +0000 Subject: [Rdo-list] Overcloud nodes host-name persistence In-Reply-To: References: <894650899.43170564.1444989351698.JavaMail.zimbra@redhat.com> Message-ID: if you check Hypervisor info => nova hypervisor-show [ID] just before first reboot ( and of course I mean after overcloud deploy ) of node and after rebooting the only param which in change is: [ service_host ] | hypervisor_hostname | overcloud-novacompute-0.localdomain | | service_host | overcloud-novacompute-0 | ==> | hypervisor_hostname | overcloud-novacompute-0.localdomain | | service_host | overcloud-novacompute-0.localdomain | On Fri, Oct 16, 2015 at 1:29 PM AliReza Taleghani wrote: > I think this is related to stack - neutron - dhcp agent config inside the > undercloud configuration... > Cos management interface are in dhcp-client mode on baremetals and on each > boot i think the take hostname via dhcp... Or some think alike :-/ > > On Fri, Oct 16, 2015, 13:25 Marius Cornea wrote: > >> Nice catch, I was able to reproduce it on my system and reported it here: >> https://bugzilla.redhat.com/show_bug.cgi?id=1272376 >> >> ----- Original Message ----- >> > From: "AliReza Taleghani" >> > To: rdo-list at redhat.com >> > Sent: Friday, October 16, 2015 9:22:53 AM >> > Subject: Re: [Rdo-list] Overcloud nodes host-name persistence >> > >> > if you update hostname via hostnamectl and restart nova-compute service >> > without reboot it works! but rebooting will cause new hostname is place >> :-/ >> > >> > Sincerely, >> > Ali R. Taleghani >> > @linkedIn >> > >> > On Fri, Oct 16, 2015 at 10:49 AM, AliReza Taleghani < >> shayne.alone at gmail.com >> > > wrote: >> > >> > >> > >> > It seem that If you restart bare metal servers! there will be a problem >> after >> > boot that the node will be register it self with a different name rather >> > than the one used on deployment! 
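Following up on Dmitry's pointer above about introspection timing out (most commonly nodes with several NICs), a possible starting point for digging in on the undercloud might look like this - a rough sketch; the client command and systemd unit names are assumptions for a Liberty RDO Manager undercloud:

```
# Ask ironic-inspector whether introspection finished or errored per node
openstack baremetal introspection status <node-uuid>

# Follow the inspector and its dnsmasq logs, e.g. to confirm the ramdisk's
# DHCP request is arriving on the PXE NIC at all
sudo journalctl -fl -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq
```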
From rbowen at redhat.com  Fri Oct 16 13:23:02 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Fri, 16 Oct 2015 09:23:02 -0400
Subject: [Rdo-list] blogs.rdoproject.org
Message-ID: <5620FA36.10904@redhat.com>

If you write about your work on RDO, or OpenStack in general, but don't have a convenient place to put it, or if you have your own blog and want to separate your OpenStack-related writing from your personal writing, http://blogs.rdoproject.org/ is the place for you. If you would like an account, please just let me know, and I'll make it happen.

The site was previously the eNovance blog, so it already has 3 years of content and a lot of followers. Because of this, you won't have to work very hard to have an immediate audience for your posts.

To get started, just send me email (rbowen at redhat.com) with your preferred username.

We'd love to see a lot of posts around OpenStack Summit, so now is the perfect time to start writing.

--Rich

-- 
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From mohammed.arafa at gmail.com  Fri Oct 16 10:59:09 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Fri, 16 Oct 2015 06:59:09 -0400
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: <5620BA64.6090501@berendt.io>
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <5620BA64.6090501@berendt.io>
Message-ID:

Hi,

I am currently attempting a Liberty POC with baremetal. I have opened a couple of bugs on Bugzilla against the docs. I am not 100% sure that's the correct place; I will continue to do so until I find out otherwise.

Hint: http://rdo.fedorapeople.org/openstack-liberty/rdo-release-liberty.rpm should have a note at the top saying where to log bugs against the docs.

PS, just for my sanity: these are the Liberty docs?
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/

On Fri, Oct 16, 2015 at 4:50 AM, Christian Berendt wrote:

> On 10/15/2015 04:43 PM, Steve Gordon wrote:
> > To get this updated what is required is for someone to walk through the
> > draft install guide [1] vetting each procedure on each target distro and
> > updating the test matrix here:
> >
> > https://wiki.openstack.org/wiki/Documentation/LibertyDocTesting
> >
> > We also need to determine which "known issues" need to be resolved, and
> > document + file bugs for any new ones that pop up (and ideally resolve
> > them).
> > In future as part of the RDO test day I think we should:
>
> Please add me to bug reports you file for openstack-manuals. I will try
> to take care of them ASAP.
>
> > a) Broadcast where the correct packages are more widely (e.g. include
> > openstack-docs at lists.openstack.org in the distribution list for the test
> > day). There seems to be a contention that they weren't available, or were
> > available but were mixed up with Mitaka packages, which was true at a point
> > in time but was quickly resolved (there was an issue with the config files
> > being shipped, though).
>
> Which packages should be used for testing Liberty? In the guide we wrote
> that http://rdo.fedorapeople.org/openstack-liberty/rdo-release-liberty.rpm
> should be used. This repository is not yet available, so we cannot test
> with this repository.
>
> https://repos.fedorapeople.org/repos/openstack/openstack-liberty/testing/ does
> not contain any RPM packages. Not sure why this directory is there.
>
> > b) Integrate the install guide test matrix into the test day so that we
> > (the RDO community) can help drive vetting it earlier.
>
> This is a good idea.
>
> Christian.
>
> --
> Christian Berendt
> Cloud Solution Architect
> Mail: berendt at b1-systems.de
>
> B1 Systems GmbH
> Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
> GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-- 
*805010942448935*
*GR750055912MA*
*Link to me on LinkedIn *
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jrichar1 at ball.com  Fri Oct 16 13:31:48 2015
From: jrichar1 at ball.com (Richards, Jeff)
Date: Fri, 16 Oct 2015 13:31:48 +0000
Subject: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
In-Reply-To: <5620068E.1020202@ualberta.ca>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <5620068E.1020202@ualberta.ca>
Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B468B0824@EX2010-DTN-03.AERO.BALL.com>

I was deploying without Ceph for a "proof that it can work" extremely basic deploy. What finally worked for me was to copy the storage-environment.yaml from the templates (to /home/stack) and edit it to remove all Ceph references. Then this command line deploy worked:

openstack overcloud deploy --templates -t 90 --libvirt-type qemu --ntp-service -e /home/stack/storage-environment.yaml

Hope that helps.

Jeff Richards

From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Erming Pei
Sent: Thursday, October 15, 2015 4:03 PM
To: Dan Sneddon; rdo-list at redhat.com
Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment

BTW, I only followed the exact instructions as shown in the guide (openstack overcloud deploy --templates), no more options. I thought this was good for a demo deployment. If not sufficient, which one should I follow? I have seen some of your discussions, but it is not very clear. Should I follow the example from jliberma at redhat.com?

This message and any enclosures are intended only for the addressee. Please notify the sender by email if you are not the intended recipient. If you are not the intended recipient, you may not use, copy, disclose, or distribute this message or its contents or enclosures to any other person and any such actions may be unlawful. Ball reserves the right to monitor and review all messages and enclosures sent to or from this email address.
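For reference, a runnable shape of Jeff's command: his mail client appears to have eaten the double dashes, and python-tripleoclient's NTP option is `--ntp-server` (it takes an address), so `--ntp-service` is likely the same mangling. A sketch, assuming storage-environment.yaml was copied to /home/stack and stripped of its Ceph sections first; pool.ntp.org is only a placeholder:

```
source ~/stackrc
openstack overcloud deploy --templates -t 90 \
    --libvirt-type qemu \
    --ntp-server pool.ntp.org \
    -e /home/stack/storage-environment.yaml
```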
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mcornea at redhat.com  Fri Oct 16 13:46:15 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Fri, 16 Oct 2015 09:46:15 -0400 (EDT)
Subject: [Rdo-list] Overcloud nodes host-name persistence
In-Reply-To:
References: <894650899.43170564.1444989351698.JavaMail.zimbra@redhat.com>
Message-ID: <559027620.43379015.1445003175654.JavaMail.zimbra@redhat.com>

Yes, let's use the BZ for further investigation; it will help the developers quickly get the context of the issue. I added my comments there.

Thanks

----- Original Message -----
> From: "AliReza Taleghani"
> To: "Marius Cornea"
> Cc: rdo-list at redhat.com
> Sent: Friday, October 16, 2015 2:17:01 PM
> Subject: Re: [Rdo-list] Overcloud nodes host-name persistence
>
> [snip]
>
> >> > but after reboot the server again appends localdomain to its hostname
> >> >
> >> > Sincerely,
> >> > Ali R.
Taleghani > >> > @linkedIn > >> > > >> > > >> > _______________________________________________ > >> > Rdo-list mailing list > >> > Rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > -- > > Sincerely, > > Ali R. Taleghani > > > -- > Sincerely, > Ali R. Taleghani > From sasha at redhat.com Fri Oct 16 15:44:49 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Fri, 16 Oct 2015 11:44:49 -0400 (EDT) Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com> Hi Esra, if the undercloud nodes are UP - you can login with: ssh heat-admin@ You can see the IP of the nodes with: "nova list". BTW, What do you see if you run "sudo systemctl|grep ironic" on the undercloud? Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Esra Celik" > To: "Sasha Chuzhoy" > Cc: "Marius Cornea" , rdo-list at redhat.com > Sent: Friday, October 16, 2015 1:40:16 AM > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" > > Hi Sasha, > > I have 3 nodes, 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute > > This is my undercloud.conf file: > > image_path = . > local_ip = 192.0.2.1/24 > local_interface = em2 > masquerade_network = 192.0.2.0/24 > dhcp_start = 192.0.2.5 > dhcp_end = 192.0.2.24 > network_cidr = 192.0.2.0/24 > network_gateway = 192.0.2.1 > inspection_interface = br-ctlplane > inspection_iprange = 192.0.2.100,192.0.2.120 > inspection_runbench = false > undercloud_debug = true > enable_tuskar = false > enable_tempest = false > > IP configuration for the Undercloud is as follows: > > stack at undercloud ~]$ ip addr > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: em1: mtu 1500 qdisc mq state UP qlen > 1000 > link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff > inet 10.1.34.81/24 brd 10.1.34.255 scope global em1 > valid_lft forever preferred_lft forever > inet6 fe80::a9e:1ff:fe50:8a21/64 scope link > valid_lft forever preferred_lft forever > 3: em2: mtu 1500 qdisc mq master ovs-system > state UP qlen 1000 > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > 4: ovs-system: mtu 1500 qdisc noop state DOWN > link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff > 5: br-ctlplane: mtu 1500 qdisc noqueue > state UNKNOWN > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane > valid_lft forever preferred_lft forever > inet6 fe80::a9e:1ff:fe50:8a22/64 scope link > valid_lft forever preferred_lft forever > 6: br-int: mtu 1500 qdisc noop state DOWN > link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff > > And I attached two screenshots showing the boot stage for overcloud nodes > > Is there a way to login the overcloud nodes to see their IP configuration? 
> > Thanks > > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > ----- Orijinal Mesaj ----- > > > Kimden: "Sasha Chuzhoy" > > Kime: "Esra Celik" > > Kk: "Marius Cornea" , rdo-list at redhat.com > > G?nderilenler: 15 Ekim Per?embe 2015 16:58:41 > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > found" > > > Just my 2 cents. > > Did you make sure that all the registered nodes are configured to boot off > > the right NIC first? > > Can you watch the console and see what happens on the problematic nodes > > upon > > boot? > > > Best regards, > > Sasha Chuzhoy. > > > ----- Original Message ----- > > > From: "Esra Celik" > > > To: "Marius Cornea" > > > Cc: rdo-list at redhat.com > > > Sent: Thursday, October 15, 2015 4:40:46 AM > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > was found" > > > > > > > > > Sorry for the late reply > > > > > > ironic node-show results are below. I have my nodes power on after > > > introspection bulk start. And I get the following warning > > > Introspection didn't finish for nodes > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > Doesn't seem to be the same issue with > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > > | Maintenance | > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | > > > | available > > > | | > > > | False | > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | > > > | available > > > | | > > > | False | > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > [stack at undercloud ~]$ ironic node-show > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > +------------------------+-------------------------------------------------------------------------+ > > > | Property | Value | > > > +------------------------+-------------------------------------------------------------------------+ > > > | target_power_state | None | > > > | extra | {} | > > > | last_error | None | > > > | updated_at | 2015-10-15T08:26:42+00:00 | > > > | maintenance_reason | None | > > > | provision_state | available | > > > | clean_step | {} | > > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > > > | console_enabled | False | > > > | target_provision_state | None | > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > | maintenance | False | > > > | inspection_started_at | None | > > > | inspection_finished_at | None | > > > | power_state | power on | > > > | driver | pxe_ipmitool | > > > | reservation | None | > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', > > > | u'local_gb': > > > | u'10', | > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > | instance_uuid | None | > > > | name | None | > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > | u'192.168.0.18', | > > > | | u'ipmi_username': u'root', u'deploy_kernel': > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > | | | > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > | | 
0d88-4632-af98-8defb05ca6e2'} | > > > | created_at | 2015-10-15T07:49:08+00:00 | > > > | driver_internal_info | {u'clean_steps': None} | > > > | chassis_uuid | | > > > | instance_info | {} | > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > [stack at undercloud ~]$ ironic node-show > > > 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > +------------------------+-------------------------------------------------------------------------+ > > > | Property | Value | > > > +------------------------+-------------------------------------------------------------------------+ > > > | target_power_state | None | > > > | extra | {} | > > > | last_error | None | > > > | updated_at | 2015-10-15T08:26:42+00:00 | > > > | maintenance_reason | None | > > > | provision_state | available | > > > | clean_step | {} | > > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > > > | console_enabled | False | > > > | target_provision_state | None | > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > | maintenance | False | > > > | inspection_started_at | None | > > > | inspection_finished_at | None | > > > | power_state | power on | > > > | driver | pxe_ipmitool | > > > | reservation | None | > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', > > > | u'local_gb': > > > | u'100', | > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > | instance_uuid | None | > > > | name | None | > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > | u'192.168.0.19', | > > > | | u'ipmi_username': u'root', u'deploy_kernel': > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > | | | > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > | created_at | 2015-10-15T07:49:08+00:00 | > > > | driver_internal_info | {u'clean_steps': None} | > > > | chassis_uuid | | > > > | instance_info | {} | > > > +------------------------+-------------------------------------------------------------------------+ > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > And below I added my history for the stack user. 
I don't think I am doing > > > something other than > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty > > > doc > > > > > > > > > > > > > > > > > > > > > > > > 1 vi instackenv.json > > > 2 sudo yum -y install epel-release > > > 3 sudo curl -o /etc/yum.repos.d/delorean.repo > > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo > > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' > > > /etc/yum.repos.d/delorean-current.repo > > > 6 sudo /bin/bash -c "cat <>/etc/yum.repos.d/delorean-current.repo > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules > > > EOF" > > > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > 8 sudo yum -y install yum-plugin-priorities > > > 9 sudo yum install -y python-tripleoclient > > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample > > > ~/undercloud.conf > > > 11 vi undercloud.conf > > > 12 export DIB_INSTALLTYPE_puppet_modules=source > > > 13 openstack undercloud install > > > 14 source stackrc > > > 15 export NODE_DIST=centos7 > > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > > > /etc/yum.repos.d/delorean-deps.repo" > > > 17 export DIB_INSTALLTYPE_puppet_modules=source > > > 18 openstack overcloud image build --all > > > 19 ls > > > 20 openstack overcloud image upload > > > 21 openstack baremetal import --json instackenv.json > > > 22 openstack baremetal configure boot > > > 23 ironic node-list > > > 24 openstack baremetal introspection bulk start > > > 25 ironic node-list > > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > 28 history > > > > > > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > Esra ?EL?K > > > T?B?TAK B?LGEM > > > www.bilgem.tubitak.gov.tr > > > celik.esra at tubitak.gov.tr > > > > > > > > > Kimden: "Marius Cornea" > > > Kime: "Esra Celik" > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > G?nderilenler: 14 Ekim ?ar?amba 2015 19:40:07 > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > > found" > > > > > > Can you do ironic node-show for your ironic nodes and post the results? > > > Also > > > check the following suggestion if you're experiencing the same issue: > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > ----- Original Message ----- > > > > From: "Esra Celik" > > > > To: "Marius Cornea" > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > > host > > > > was found" > > > > > > > > > > > > > > > > Well in the early stage of the introspection I can see Client IP of > > > > nodes > > > > (screenshot attached). But then I see continuous ironic-python-agent > > > > errors > > > > (screenshot-2 attached). Errors repeat after time out.. And the nodes > > > > are > > > > not powered off. > > > > > > > > Seems like I am stuck in introspection stage.. 
> > > > > > > > I can use ipmitool command to successfully power on/off the nodes > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L > > > > ADMINISTRATOR > > > > -U > > > > root -R 3 -N 5 -P power status > > > > Chassis Power is on > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power status > > > > Chassis Power is on > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power off > > > > Chassis Power Control: Down/Off > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power status > > > > Chassis Power is off > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power on > > > > Chassis Power Control: Up/On > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power status > > > > Chassis Power is on > > > > > > > > > > > > Esra ?EL?K > > > > T?B?TAK B?LGEM > > > > www.bilgem.tubitak.gov.tr > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > Kimden: "Marius Cornea" > > > > Kime: "Esra Celik" > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > > G?nderilenler: 14 Ekim ?ar?amba 2015 14:59:30 > > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > > was > > > > found" > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Esra Celik" > > > > > To: "Marius Cornea" > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > > > host > > > > > was found" > > > > > > > > > > > > > > > Well today I started with re-installing the OS and nothing seems > > > > > wrong > > > > > with > > > > > undercloud installation, then; > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I see an error during image build > > > > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > > > > ... > > > > > a lot of log > > > > > ... > > > > > ++ cat /etc/dib_dracut_drivers > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk > > > > > ifconfig > > > > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell > > > > > rd.debug > > > > > rd.neednet=1 rd.driver.pre=ahci' --include > > > > > /var/tmp/image.YVhwuArQ/mnt/ > > > > > / > > > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net > > > > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock > > > > > target_core_file target_core_pscsi configfs' -o 'dash plymouth' > > > > > /tmp/ramdisk > > > > > cat: write error: Broken pipe > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > > > > + chmod o+r /tmp/kernel > > > > > + trap EXIT > > > > > + target_tag=99-build-dracut-ramdisk > > > > > + date +%s.%N > > > > > + output '99-build-dracut-ramdisk completed' > > > > > ... > > > > > a lot of log > > > > > ... > > > > > > > > You can ignore that afaik, if you end up having all the required images > > > > it > > > > should be ok. > > > > > > > > > > > > > > Then, during introspection stage I see ironic-python-agent errors on > > > > > nodes > > > > > (screenshot attached) and the following warnings > > > > > > > > > > > > > That looks odd. Is it showing up in the early stage of the > > > > introspection? 
> > > > At > > > > some point it should receive an address by DHCP and the Network is > > > > unreachable error should disappear. Does the introspection complete and > > > > the > > > > nodes are turned off? > > > > > > > > > > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > > openstack-ironic-conductor.service > > > > > | > > > > > grep -i "warning\|error" > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 10:30:12.119 > > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b > > > > > ] > > > > > Option "http_url" from group "pxe" is deprecated. Use option > > > > > "http_url" > > > > > from > > > > > group "deploy". > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 10:30:12.119 > > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b > > > > > ] > > > > > Option "http_root" from group "pxe" is deprecated. Use option > > > > > "http_root" > > > > > from group "deploy". > > > > > > > > > > > > > > > Before deployment ironic node-list: > > > > > > > > > > > > > This is odd too as I'm expecting the nodes to be powered off before > > > > running > > > > deployment. > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > > > > | Maintenance | > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | > > > > > | available > > > > > | | > > > > > | False | > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | > > > > > | available > > > > > | | > > > > > | False | > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > During deployment I get following errors > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > > openstack-ironic-conductor.service > > > > > | > > > > > grep -i "warning\|error" > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 11:29:01.739 > > > > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while > > > > > attempting > > > > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N > > > > > 5 > > > > > -f > > > > > /tmp/tmpSCKHIv power status"for node > > > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553. > > > > > Error: Unexpected error while running command. > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 11:29:01.739 > > > > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status > > > > > failed > > > > > for > > > > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected > > > > > error > > > > > while > > > > > running command. > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 11:29:01.740 > > > > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, > > > > > could > > > > > not > > > > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, > > > > > attempt > > > > > 1 > > > > > of > > > > > 3. Error: IPMI call failed: power status.. 
> > > > > > > > > > > > > This looks like an ipmi error, can you try to manually run commands > > > > using > > > > the > > > > ipmitool and see if you get any success? It's also worth filing a bug > > > > with > > > > details such as the ipmitool version, server model, drac firmware > > > > version. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Thanks a lot > > > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > > > Kimden: "Marius Cornea" > > > > > Kime: "Esra Celik" > > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > > > G?nderilenler: 13 Ekim Sal? 2015 21:16:14 > > > > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > valid > > > > > host was found" > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > From: "Esra Celik" > > > > > > To: "Marius Cornea" > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > valid > > > > > > host was found" > > > > > > > > > > > > During deployment they are powering on and deploying the images. I > > > > > > see > > > > > > lot > > > > > > of > > > > > > connection error messages about ironic-python-agent but ignore them > > > > > > as > > > > > > mentioned here > > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) > > > > > > > > > > That was referring to the introspection stage. From what I can tell > > > > > you > > > > > are > > > > > experiencing issues during deployment as it fails to provision the > > > > > nova > > > > > instances, can you check if during that stage the nodes get powered > > > > > on? > > > > > > > > > > Make sure that before overcloud deploy the ironic nodes are available > > > > > for > > > > > provisioning (ironic node-list and check the provisioning state > > > > > column). > > > > > Also check that you didn't miss any step in the docs in regards to > > > > > kernel > > > > > and ramdisk assignment, introspection, flavor creation(so it matches > > > > > the > > > > > nodes resources) > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > > > > > > > > > > > > > > > In instackenv.json file I do not need to add the undercloud node, > > > > > > or > > > > > > do > > > > > > I? > > > > > > > > > > No, the nodes details should be enough. > > > > > > > > > > > And which log files should I watch during deployment? > > > > > > > > > > You can check the openstack-ironic-conductor logs(journalctl -fl -u > > > > > openstack-ironic-conductor.service) and the logs in /var/log/nova. 
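Putting Marius' advice together, one way to watch a deploy from the undercloud is the following sketch (the file names under /var/log/nova are the usual ones; adjust to whatever is present on your system):

```
# Terminal 1: the ironic side of provisioning
sudo journalctl -fl -u openstack-ironic-conductor.service
# Terminal 2: the nova side; the scheduler log shows why "No valid host" happens
sudo tail -f /var/log/nova/nova-scheduler.log /var/log/nova/nova-conductor.log
```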
> > > > > > > > > > > Thanks > > > > > > Esra > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj -----Kimden: Marius Cornea > > > > > > Kime: > > > > > > Esra Celik Kk: Ignacio Bravo > > > > > > , rdo-list at redhat.comGönderilenler: > > > > > > Tue, > > > > > > 13 > > > > > > Oct > > > > > > 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy > > > > > > fails > > > > > > with > > > > > > error "No valid host was found" > > > > > > > > > > > > ----- Original Message -----> From: "Esra Celik" > > > > > > > > > > > > > To: "Ignacio Bravo" > Cc: > > > > > > rdo-list at redhat.com> > > > > > > Sent: > > > > > > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] > > > > > > OverCloud > > > > > > deploy fails with error "No valid host was found"> > > > Actually I > > > > > > re-installed the OS for Undercloud before deploying. However I did> > > > > > > not > > > > > > re-install the OS in Compute and Controller nodes.. I will > > > > > > reinstall> > > > > > > basic > > > > > > OS for them too, and retry.. > > > > > > > > > > > > You don't need to reinstall the OS on the controller and compute, > > > > > > they > > > > > > will > > > > > > get the image served by the undercloud. I'd recommend that during > > > > > > deployment > > > > > > you watch the servers console and make sure they get powered on, > > > > > > pxe > > > > > > boot, > > > > > > and actually get the image deployed. > > > > > > > > > > > > Thanks > > > > > > > > > > > > > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: > > > > > > > "Ignacio > > > > > > > Bravo" > Kime: "Esra Celik" > > > > > > > > Kk: rdo-list at redhat.com> > > > > > > > Gönderilenler: > > > > > > > 13 Ekim Sal? 
2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy > > > > > > > fails > > > > > > > with error "No valid host was> found"> > Esra,> > I encountered > > > > > > > the > > > > > > > same > > > > > > > problem after deleting the stack and re-deploying.> > It turns > > > > > > > out > > > > > > > that > > > > > > > 'heat stack-delete overcloud’ does remove the nodes from> > > > > > > > ‘nova list’ and one would assume that the baremetal > > > > > > > servers > > > > > > > are now ready to> be used for the next stack, but when > > > > > > > redeploying, > > > > > > > I > > > > > > > get > > > > > > > the same message of> not enough hosts available.> > You can look > > > > > > > into > > > > > > > the > > > > > > > nova logs and it mentions something about ‘node xxx is> > > > > > > > already > > > > > > > associated with UUID yyyy’ and ‘I tried 3 times and > > > > > > > I’m > > > > > > > giving up’.> The issue is that the UUID yyyy belonged to a > > > > > > > prior > > > > > > > unsuccessful deployment.> > I’m now redeploying the basic > > > > > > > OS > > > > > > > to > > > > > > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG > > > > > > > Federal, > > > > > > > Inc> > > > > > > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct 13, > > > > > > > 2015, > > > > > > > at > > > > > > > 9:25 > > > > > > > AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > Hi all,> > > > > > > > > > > > > > > > OverCloud deploy fails with error "No valid host was found"> > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates> > > > > > > > Deploying > > > > > > > templates in the directory> > > > > > > > /usr/share/openstack-tripleo-heat-templates> > > > > > > > Stack failed with status: Resource CREATE failed: > > > > > > > resources.Compute:> > > > > > > > ResourceInError: resources[0].resources.NovaCompute: Went to > > > > > > > status > > > > > > > ERROR> > > > > > > > due to "Message: No valid host was found. 
There are not enough > > > > > > > hosts> > > > > > > > available., Code: 500"> Heat Stack create failed.> > Here are > > > > > > > some > > > > > > > logs:> > > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v > > > > > > > > COMPLETE > > > > > > > > Tue > > > > > > > > Oct > > > > > > > 13> 16:18:17 2015> > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > | resource_name | physical_resource_id | resource_type | > > > > > > > | resource_status > > > > > > > |> | updated_time | stack_name |> > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | > > > > > > > | OS::Heat::ResourceGroup > > > > > > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | > > > > > > > |> | Controller > > > > > > > |> | | > > > > > > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | > > > > > > > | > > > > > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > > > > > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> > > > > > > > | > > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > > > > > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > > > > > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | > > > > > > > CREATE_FAILED | 2015-10-13T10:20:54 | > > > > > > > overcloud-Compute-vqk632ysg64r > > > > > > > |> > > > > > > > | > > > > > > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | > > > > > > > OS::Nova::Server > > > > > > > |> > > > > > > > | > > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > > > > > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute > > > > > > > | > > > > > > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | > > > > > > > CREATE_FAILED > > > > > > > | 2015-10-13T10:20:56 |> | > > > > > > > | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > > > > > > |> > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute> > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > | Property | Value |> > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > | attributes | { |> | | "attributes": null, |> | | "refs": null > > > > > > > | |> > > > > > > > | | > > > > > > > | | > > > > > > > | } > > > > > > > |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | > > > > > > > |> | links > > > > > > > |> | |> > > > > > > > | > > > > > > > 
http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > > > > | (self) |> | | > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > > > > | | (stack) |> | | > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > > > > | | (nested) |> | logical_resource_id | Compute |> | > > > > > > > | | physical_resource_id > > > > > > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > > > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | > > > > > > > | > > > > > > > ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment > > > > > > > |> > > > > > > > | > > > > > > > | > > > > > > > AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | > > > > > > > Compute > > > > > > > |> > > > > > > > | resource_status | CREATE_FAILED |> | resource_status_reason | > > > > > > > resources.Compute: ResourceInError:> | > > > > > > > resources[0].resources.NovaCompute: > > > > > > > Went to status ERROR due to "Message:> | No valid host was found. > > > > > > > There > > > > > > > are not enough hosts available., Code: 500"> | |> | resource_type > > > > > > > | > > > > > > > OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 > > > > > > > |> > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > This is my instackenv.json for 1 compute and 1 control node > > > > > > > > > > to > > > > > > > > > > be > > > > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> > > > > > > > "mac":[> > > > > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > "disk":"10",> > > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> > > > > > > > "mac":[> > > > > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > "disk":"100",> > > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? 
Thanks in > > > > > > > advance> > > > > > > > > > > > > > > > > > > > > > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > > > www.bilgem.tubitak.gov.tr> > > > > > > > celik.esra at tubitak.gov.tr> > > > > > > > > _______________________________________________> Rdo-list mailing > > > > > > > list> > > > > > > > Rdo-list at redhat.com> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > > > > > > > > _______________________________________________> Rdo-list mailing > > > > > > > list> > > > > > > > Rdo-list at redhat.com> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From dtrishkin at mirantis.com Fri Oct 16 16:21:59 2015 From: dtrishkin at mirantis.com (Daniil Trishkin) Date: Fri, 16 Oct 2015 19:21:59 +0300 Subject: [Rdo-list] Review of Murano, Mistral, python-muranoclient, python-mistralclient Message-ID: Hello fellows, here is some projects for review, please take a look :) [Murano] https://bugzilla.redhat.com/show_bug.cgi?id=1272513 https://review.gerrithub.io/#/c/249763/ [python-muranoclient] https://bugzilla.redhat.com/show_bug.cgi?id=1272527 https://review.gerrithub.io/#/c/249766/ [Mistral] https://bugzilla.redhat.com/show_bug.cgi?id=1272524 https://review.gerrithub.io/#/c/249803/ [python-mistralclient] https://bugzilla.redhat.com/show_bug.cgi?id=1272530 https://review.gerrithub.io/#/c/249802/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From celik.esra at tubitak.gov.tr Fri Oct 16 17:33:18 2015 From: celik.esra at tubitak.gov.tr (Esra Celik) Date: Fri, 16 Oct 2015 20:33:18 +0300 (EEST) Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com> References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com> Message-ID: <758642094.5387457.1445016798126.JavaMail.zimbra@tubitak.gov.tr> Thanks but I will not be able to try it until monday. I will send the results then.. Esra ÇEL?K TÜB?TAK B?LGEM www.bilgem.tubitak.gov.tr celik.esra at tubitak.gov.tr ----- Sasha Chuzhoy ?öyle yaz?yor:> Hi Esra, if the undercloud nodes are UP - you can login with: ssh heat-admin@ You can see the IP of the nodes with: "nova list". BTW, What do you see if you run "sudo systemctl|grep ironic" on the undercloud? Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Esra Celik" > To: "Sasha Chuzhoy" > Cc: "Marius Cornea" , rdo-list at redhat.com > Sent: Friday, October 16, 2015 1:40:16 AM > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found" > > Hi Sasha, > > I have 3 nodes, 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute > > This is my undercloud.conf file: > > image_path = . 
> local_ip = 192.0.2.1/24 > local_interface = em2 > masquerade_network = 192.0.2.0/24 > dhcp_start = 192.0.2.5 > dhcp_end = 192.0.2.24 > network_cidr = 192.0.2.0/24 > network_gateway = 192.0.2.1 > inspection_interface = br-ctlplane > inspection_iprange = 192.0.2.100,192.0.2.120 > inspection_runbench = false > undercloud_debug = true > enable_tuskar = false > enable_tempest = false > > IP configuration for the Undercloud is as follows: > > stack at undercloud ~]$ ip addr > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: em1: mtu 1500 qdisc mq state UP qlen > 1000 > link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff > inet 10.1.34.81/24 brd 10.1.34.255 scope global em1 > valid_lft forever preferred_lft forever > inet6 fe80::a9e:1ff:fe50:8a21/64 scope link > valid_lft forever preferred_lft forever > 3: em2: mtu 1500 qdisc mq master ovs-system > state UP qlen 1000 > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > 4: ovs-system: mtu 1500 qdisc noop state DOWN > link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff > 5: br-ctlplane: mtu 1500 qdisc noqueue > state UNKNOWN > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane > valid_lft forever preferred_lft forever > inet6 fe80::a9e:1ff:fe50:8a22/64 scope link > valid_lft forever preferred_lft forever > 6: br-int: mtu 1500 qdisc noop state DOWN > link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff > > And I attached two screenshots showing the boot stage for overcloud nodes > > Is there a way to login the overcloud nodes to see their IP configuration? > > Thanks > > Esra ÇEL?K > TÜB?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > ----- Orijinal Mesaj ----- > > > Kimden: "Sasha Chuzhoy" > > Kime: "Esra Celik" > > Kk: "Marius Cornea" , rdo-list at redhat.com > > Gönderilenler: 15 Ekim Per?embe 2015 16:58:41 > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > found" > > > Just my 2 cents. > > Did you make sure that all the registered nodes are configured to boot off > > the right NIC first? > > Can you watch the console and see what happens on the problematic nodes > > upon > > boot? > > > Best regards, > > Sasha Chuzhoy. > > > ----- Original Message ----- > > > From: "Esra Celik" > > > To: "Marius Cornea" > > > Cc: rdo-list at redhat.com > > > Sent: Thursday, October 15, 2015 4:40:46 AM > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > was found" > > > > > > > > > Sorry for the late reply > > > > > > ironic node-show results are below. I have my nodes power on after > > > introspection bulk start. 
And I get the following warning > > > Introspection didn't finish for nodes > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > Doesn't seem to be the same issue with > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > > | Maintenance | > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | > > > | available > > > | | > > > | False | > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | > > > | available > > > | | > > > | False | > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > [stack at undercloud ~]$ ironic node-show > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > +------------------------+-------------------------------------------------------------------------+ > > > | Property | Value | > > > +------------------------+-------------------------------------------------------------------------+ > > > | target_power_state | None | > > > | extra | {} | > > > | last_error | None | > > > | updated_at | 2015-10-15T08:26:42+00:00 | > > > | maintenance_reason | None | > > > | provision_state | available | > > > | clean_step | {} | > > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > > > | console_enabled | False | > > > | target_provision_state | None | > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > | maintenance | False | > > > | inspection_started_at | None | > > > | inspection_finished_at | None | > > > | power_state | power on | > > > | driver | pxe_ipmitool | > > > | reservation | None | > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', > > > | u'local_gb': > > > | u'10', | > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > | instance_uuid | None | > > > | name | None | > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > | u'192.168.0.18', | > > > | | u'ipmi_username': u'root', u'deploy_kernel': > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > | | | > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > | created_at | 2015-10-15T07:49:08+00:00 | > > > | driver_internal_info | {u'clean_steps': None} | > > > | chassis_uuid | | > > > | instance_info | {} | > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > [stack at undercloud ~]$ ironic node-show > > > 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > +------------------------+-------------------------------------------------------------------------+ > > > | Property | Value | > > > +------------------------+-------------------------------------------------------------------------+ > > > | target_power_state | None | > > > | extra | {} | > > > | last_error | None | > > > | updated_at | 2015-10-15T08:26:42+00:00 | > > > | maintenance_reason | None | > > > | provision_state | available | > > > | clean_step | {} | > > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > > > | console_enabled | False | > > > | target_provision_state | None | > > > | provision_updated_at | 
2015-10-15T08:26:42+00:00 | > > > | maintenance | False | > > > | inspection_started_at | None | > > > | inspection_finished_at | None | > > > | power_state | power on | > > > | driver | pxe_ipmitool | > > > | reservation | None | > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', > > > | u'local_gb': > > > | u'100', | > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > | instance_uuid | None | > > > | name | None | > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > | u'192.168.0.19', | > > > | | u'ipmi_username': u'root', u'deploy_kernel': > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > | | | > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > | created_at | 2015-10-15T07:49:08+00:00 | > > > | driver_internal_info | {u'clean_steps': None} | > > > | chassis_uuid | | > > > | instance_info | {} | > > > +------------------------+-------------------------------------------------------------------------+ > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > And below I added my history for the stack user. I don't think I am doing > > > something other than > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty > > > doc > > > > > > > > > > > > > > > > > > > > > > > > 1 vi instackenv.json > > > 2 sudo yum -y install epel-release > > > 3 sudo curl -o /etc/yum.repos.d/delorean.repo > > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo > > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' > > > /etc/yum.repos.d/delorean-current.repo > > > 6 sudo /bin/bash -c "cat <>/etc/yum.repos.d/delorean-current.repo > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules > > > EOF" > > > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > 8 sudo yum -y install yum-plugin-priorities > > > 9 sudo yum install -y python-tripleoclient > > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample > > > ~/undercloud.conf > > > 11 vi undercloud.conf > > > 12 export DIB_INSTALLTYPE_puppet_modules=source > > > 13 openstack undercloud install > > > 14 source stackrc > > > 15 export NODE_DIST=centos7 > > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > > > /etc/yum.repos.d/delorean-deps.repo" > > > 17 export DIB_INSTALLTYPE_puppet_modules=source > > > 18 openstack overcloud image build --all > > > 19 ls > > > 20 openstack overcloud image upload > > > 21 openstack baremetal import --json instackenv.json > > > 22 openstack baremetal configure boot > > > 23 ironic node-list > > > 24 openstack baremetal introspection bulk start > > > 25 ironic node-list > > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > 28 history > > > > > > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > Esra ÇEL?K > > > TÜB?TAK B?LGEM > > > www.bilgem.tubitak.gov.tr > > > celik.esra at tubitak.gov.tr > > > > > > > > > Kimden: "Marius Cornea" > > > Kime: 
"Esra Celik" > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > Gönderilenler: 14 Ekim Çar?amba 2015 19:40:07 > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > > found" > > > > > > Can you do ironic node-show for your ironic nodes and post the results? > > > Also > > > check the following suggestion if you're experiencing the same issue: > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > ----- Original Message ----- > > > > From: "Esra Celik" > > > > To: "Marius Cornea" > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > > host > > > > was found" > > > > > > > > > > > > > > > > Well in the early stage of the introspection I can see Client IP of > > > > nodes > > > > (screenshot attached). But then I see continuous ironic-python-agent > > > > errors > > > > (screenshot-2 attached). Errors repeat after time out.. And the nodes > > > > are > > > > not powered off. > > > > > > > > Seems like I am stuck in introspection stage.. > > > > > > > > I can use ipmitool command to successfully power on/off the nodes > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L > > > > ADMINISTRATOR > > > > -U > > > > root -R 3 -N 5 -P power status > > > > Chassis Power is on > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power status > > > > Chassis Power is on > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power off > > > > Chassis Power Control: Down/Off > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power status > > > > Chassis Power is off > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power on > > > > Chassis Power Control: Up/On > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P > > > > chassis power status > > > > Chassis Power is on > > > > > > > > > > > > Esra ÇEL?K > > > > TÜB?TAK B?LGEM > > > > www.bilgem.tubitak.gov.tr > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > Kimden: "Marius Cornea" > > > > Kime: "Esra Celik" > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > > Gönderilenler: 14 Ekim Çar?amba 2015 14:59:30 > > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > > was > > > > found" > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Esra Celik" > > > > > To: "Marius Cornea" > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > > > host > > > > > was found" > > > > > > > > > > > > > > > Well today I started with re-installing the OS and nothing seems > > > > > wrong > > > > > with > > > > > undercloud installation, then; > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I see an error during image build > > > > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > > > > ... > > > > > a lot of log > > > > > ... 
> > > > > ++ cat /etc/dib_dracut_drivers > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk > > > > > ifconfig > > > > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell > > > > > rd.debug > > > > > rd.neednet=1 rd.driver.pre=ahci' --include > > > > > /var/tmp/image.YVhwuArQ/mnt/ > > > > > / > > > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net > > > > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock > > > > > target_core_file target_core_pscsi configfs' -o 'dash plymouth' > > > > > /tmp/ramdisk > > > > > cat: write error: Broken pipe > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > > > > + chmod o+r /tmp/kernel > > > > > + trap EXIT > > > > > + target_tag=99-build-dracut-ramdisk > > > > > + date +%s.%N > > > > > + output '99-build-dracut-ramdisk completed' > > > > > ... > > > > > a lot of log > > > > > ... > > > > > > > > You can ignore that afaik, if you end up having all the required images > > > > it > > > > should be ok. > > > > > > > > > > > > > > Then, during introspection stage I see ironic-python-agent errors on > > > > > nodes > > > > > (screenshot attached) and the following warnings > > > > > > > > > > > > > That looks odd. Is it showing up in the early stage of the > > > > introspection? > > > > At > > > > some point it should receive an address by DHCP and the Network is > > > > unreachable error should disappear. Does the introspection complete and > > > > the > > > > nodes are turned off? > > > > > > > > > > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > > openstack-ironic-conductor.service > > > > > | > > > > > grep -i "warning\|error" > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 10:30:12.119 > > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b > > > > > ] > > > > > Option "http_url" from group "pxe" is deprecated. Use option > > > > > "http_url" > > > > > from > > > > > group "deploy". > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 10:30:12.119 > > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b > > > > > ] > > > > > Option "http_root" from group "pxe" is deprecated. Use option > > > > > "http_root" > > > > > from group "deploy". > > > > > > > > > > > > > > > Before deployment ironic node-list: > > > > > > > > > > > > > This is odd too as I'm expecting the nodes to be powered off before > > > > running > > > > deployment. 
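(For anyone in the same state: the nodes can also be powered off by hand from the undercloud before retrying. A minimal sketch, assuming the stock Liberty ironic CLI, with <node-uuid> standing in for the UUIDs printed by ironic node-list:

[stack at undercloud ~]$ ironic node-list
[stack at undercloud ~]$ ironic node-set-power-state <node-uuid> off
[stack at undercloud ~]$ ironic node-show <node-uuid> | grep power_state

Since node-set-power-state drives the change through the same IPMI driver the conductor uses, it also doubles as a quick check that the registered IPMI credentials work.)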
> > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > > > > | Maintenance | > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | > > > > > | available > > > > > | | > > > > > | False | > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | > > > > > | available > > > > > | | > > > > > | False | > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > During deployment I get following errors > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > > openstack-ironic-conductor.service > > > > > | > > > > > grep -i "warning\|error" > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 11:29:01.739 > > > > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while > > > > > attempting > > > > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N > > > > > 5 > > > > > -f > > > > > /tmp/tmpSCKHIv power status"for node > > > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553. > > > > > Error: Unexpected error while running command. > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 11:29:01.739 > > > > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status > > > > > failed > > > > > for > > > > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected > > > > > error > > > > > while > > > > > running command. > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 > > > > > 11:29:01.740 > > > > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, > > > > > could > > > > > not > > > > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, > > > > > attempt > > > > > 1 > > > > > of > > > > > 3. Error: IPMI call failed: power status.. > > > > > > > > > > > > > This looks like an ipmi error, can you try to manually run commands > > > > using > > > > the > > > > ipmitool and see if you get any success? It's also worth filing a bug > > > > with > > > > details such as the ipmitool version, server model, drac firmware > > > > version. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Thanks a lot > > > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > > > Kimden: "Marius Cornea" > > > > > Kime: "Esra Celik" > > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com > > > > > Gönderilenler: 13 Ekim Sal? 2015 21:16:14 > > > > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > valid > > > > > host was found" > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > From: "Esra Celik" > > > > > > To: "Marius Cornea" > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com > > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > valid > > > > > > host was found" > > > > > > > > > > > > During deployment they are powering on and deploying the images. 
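(Coming back to the manual ipmitool check suggested earlier in this thread for the "IPMI call failed: power status" errors: a verbose run of roughly the command the conductor issues usually shows whether the failure is in the credentials, the lanplus interface, or the BMC firmware. A sketch, reusing the BMC address and credentials from the instackenv.json posted in this thread:

ipmitool -V
ipmitool -v -I lanplus -H 192.168.0.19 -U root -P calvin -R 3 -N 5 chassis power status

ipmitool -V prints the client version for the bug report, and -v can be repeated for more detail.)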
I > > > > > > see > > > > > > lot > > > > > > of > > > > > > connection error messages about ironic-python-agent but ignore them > > > > > > as > > > > > > mentioned here > > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) > > > > > > > > > > That was referring to the introspection stage. From what I can tell > > > > > you > > > > > are > > > > > experiencing issues during deployment as it fails to provision the > > > > > nova > > > > > instances, can you check if during that stage the nodes get powered > > > > > on? > > > > > > > > > > Make sure that before overcloud deploy the ironic nodes are available > > > > > for > > > > > provisioning (ironic node-list and check the provisioning state > > > > > column). > > > > > Also check that you didn't miss any step in the docs in regards to > > > > > kernel > > > > > and ramdisk assignment, introspection, flavor creation(so it matches > > > > > the > > > > > nodes resources) > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > > > > > > > > > > > > > > > In instackenv.json file I do not need to add the undercloud node, > > > > > > or > > > > > > do > > > > > > I? > > > > > > > > > > No, the nodes details should be enough. > > > > > > > > > > > And which log files should I watch during deployment? > > > > > > > > > > You can check the openstack-ironic-conductor logs(journalctl -fl -u > > > > > openstack-ironic-conductor.service) and the logs in /var/log/nova. > > > > > > > > > > > Thanks > > > > > > Esra > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj -----Kimden: Marius Cornea > > > > > > Kime: > > > > > > Esra Celik Kk: Ignacio Bravo > > > > > > , rdo-list at redhat.comGönderilenler: > > > > > > Tue, > > > > > > 13 > > > > > > Oct > > > > > > 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud deploy > > > > > > fails > > > > > > with > > > > > > error "No valid host was found" > > > > > > > > > > > > ----- Original Message -----> From: "Esra Celik" > > > > > > > > > > > > > To: "Ignacio Bravo" > Cc: > > > > > > rdo-list at redhat.com> > > > > > > Sent: > > > > > > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: [Rdo-list] > > > > > > OverCloud > > > > > > deploy fails with error "No valid host was found"> > > > Actually I > > > > > > re-installed the OS for Undercloud before deploying. However I did> > > > > > > not > > > > > > re-install the OS in Compute and Controller nodes.. I will > > > > > > reinstall> > > > > > > basic > > > > > > OS for them too, and retry.. > > > > > > > > > > > > You don't need to reinstall the OS on the controller and compute, > > > > > > they > > > > > > will > > > > > > get the image served by the undercloud. I'd recommend that during > > > > > > deployment > > > > > > you watch the servers console and make sure they get powered on, > > > > > > pxe > > > > > > boot, > > > > > > and actually get the image deployed. > > > > > > > > > > > > Thanks > > > > > > > > > > > > > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > Kimden: > > > > > > > "Ignacio > > > > > > > Bravo" > Kime: "Esra Celik" > > > > > > > > Kk: rdo-list at redhat.com> > > > > > > > Gönderilenler: > > > > > > > 13 Ekim Sal? 
2015 16:36:06> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> Esra,
> I encountered the same problem after deleting the stack and re-deploying.
> It turns out that 'heat stack-delete overcloud' does remove the nodes from 'nova list' and one would assume that the baremetal servers are now ready to be used for the next stack, but when redeploying, I get the same message of not enough hosts available.
> You can look into the nova logs and it mentions something about 'node xxx is already associated with UUID yyyy' and 'I tried 3 times and I'm giving up'. The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.
> I'm now redeploying the basic OS to start from scratch again.
> IB
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
> Office: (703) 951-7760
> On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > Hi all,
> > OverCloud deploy fails with error "No valid host was found"
> > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > Stack failed with status: Resource CREATE failed: resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found.
There are not enough > > > > > > > hosts> > > > > > > > available., Code: 500"> Heat Stack create failed.> > Here are > > > > > > > some > > > > > > > logs:> > > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v > > > > > > > > COMPLETE > > > > > > > > Tue > > > > > > > > Oct > > > > > > > 13> 16:18:17 2015> > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > | resource_name | physical_resource_id | resource_type | > > > > > > > | resource_status > > > > > > > |> | updated_time | stack_name |> > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | > > > > > > > | OS::Heat::ResourceGroup > > > > > > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | > > > > > > > |> | Controller > > > > > > > |> | | > > > > > > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup> | > > > > > > > | > > > > > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > > > > > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller |> > > > > > > > | > > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > > > > > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > > > > > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute |> | > > > > > > > CREATE_FAILED | 2015-10-13T10:20:54 | > > > > > > > overcloud-Compute-vqk632ysg64r > > > > > > > |> > > > > > > > | > > > > > > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | > > > > > > > OS::Nova::Server > > > > > > > |> > > > > > > > | > > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > > > > > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | NovaCompute > > > > > > > | > > > > > > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server |> | > > > > > > > CREATE_FAILED > > > > > > > | 2015-10-13T10:20:56 |> | > > > > > > > | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > > > > > > |> > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute> > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > | Property | Value |> > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > | attributes | { |> | | "attributes": null, |> | | "refs": null > > > > > > > | |> > > > > > > > | | > > > > > > > | | > > > > > > > | } > > > > > > > |> | creation_time | 2015-10-13T10:20:36 |> | description | |> | > > > > > > > |> | links > > > > > > > |> | |> > > > > > > > | > > > > > > > 
http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > > > > | (self) |> | | > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > > > > | | (stack) |> | | > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > > > > | | (nested) |> | logical_resource_id | Compute |> | > > > > > > > | | physical_resource_id > > > > > > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > > > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment |> | > > > > > > > | > > > > > > > ComputeCephDeployment |> | | ComputeAllNodesValidationDeployment > > > > > > > |> > > > > > > > | > > > > > > > | > > > > > > > AllNodesExtraConfig |> | | allNodesConfig |> | resource_name | > > > > > > > Compute > > > > > > > |> > > > > > > > | resource_status | CREATE_FAILED |> | resource_status_reason | > > > > > > > resources.Compute: ResourceInError:> | > > > > > > > resources[0].resources.NovaCompute: > > > > > > > Went to status ERROR due to "Message:> | No valid host was found. > > > > > > > There > > > > > > > are not enough hosts available., Code: 500"> | |> | resource_type > > > > > > > | > > > > > > > OS::Heat::ResourceGroup |> | updated_time | 2015-10-13T10:20:36 > > > > > > > |> > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > This is my instackenv.json for 1 compute and 1 control node > > > > > > > > > > to > > > > > > > > > > be > > > > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> > > > > > > > "mac":[> > > > > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > "disk":"10",> > > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> > > > > > > > "mac":[> > > > > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > "disk":"100",> > > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? Thanks in > > > > > > > advance> > > > > > > > > > > > > > > > > > > > > > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > > > www.bilgem.tubitak.gov.tr> > > > > > > > celik.esra at tubitak.gov.tr> > > > > > > > > _______________________________________________> Rdo-list mailing > > > > > > > list> > > > > > > > Rdo-list at redhat.com> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > > > > > > > > _______________________________________________> Rdo-list mailing > > > > > > > list> > > > > > > > Rdo-list at redhat.com> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mkassawara at gmail.com Thu Oct 15 18:31:06 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Thu, 15 Oct 2015 12:31:06 -0600 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO In-Reply-To: <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> Message-ID: I plan to make a more detailed post to the OpenStack documentation mailing list regarding the installation guide development process, but here's a quick list of prerequisites: 1) Installation guide contributors often find packaging bugs long before "test days" for any distribution. For some distributions, we provide the only "real world" testing. Packagers should take advantage of our labor by providing a process to address potential bugs in a timely fashion. 2) Packages become available at a known and consistent location at some point during the milestone tags, ideally a bit over a month before the official upstream release, and update in a timely fashion to include at least one release candidate for each project. Release packages become available, at least for testing, within a week of the upstream release. 3) Packages follow best practices for deployment. In most cases, this simply involves including upstream example configuration files with default values. If a project requires generation of example configuration files, run the necessary procedure prior to packaging it. 4) Packages only reference upstream configuration files in standard locations (e.g., /etc/keystone). Matt On Thu, Oct 15, 2015 at 8:43 AM, Steve Gordon wrote: > ----- Original Message ----- > > From: "Rich Bowen" > > To: rdo-list at redhat.com > > Sent: Thursday, October 15, 2015 9:01:03 AM > > Subject: Re: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of > RDO > > > > Just to follow up: > > > > http://docs.openstack.org/ > > > > Installation Guide for Debian 8 (not yet available) > > > > Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora > > 22 (not yet available) > > > > > > > > --Rich > > To get this updated what is required is for someone to walk through the > draft install guide [1] vetting each procedure on each target distro and > updating the test matrix here: > > https://wiki.openstack.org/wiki/Documentation/LibertyDocTesting > > We also need to determine which "known issues" need to be resolved, and > document + file bugs for any new ones that pop up (and ideally resolve > them). In future as part of the RDO test day I think we should: > > a) Broadcast where the correct packages are more widely (e.g. include > openstack-docs at lists.openstack.org in the distribution list for the test > day). There seems to be a contention that they weren't available or were > available but were mixed up with Mitaka packages which was true at a point > in time but was quickly resolved (there was an issue with the config files > being shipped though). > > b) Integrate the install guide test matrix into the test day so that we > (the RDO community) can help drive vetting it earlier. 
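(For anyone picking this up: a minimal way to point a CentOS 7 test box at the Liberty packages while vetting the draft guide is the repo setup already used in the deployment threads on this list; the snapshot is whatever current-passed-ci resolves to on the day.)

sudo yum -y install epel-release
sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
sudo yum -y install yum-plugin-priorities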
> > Thanks, > > Steve > > > [1] http://docs.openstack.org/draft/install-guide-rdo/ > > > On 10/14/2015 07:49 AM, Rich Bowen wrote: > > > I wanted to be certain that everyone has seen this message to > > > OpenStack-docs, and the subsequent conversation at > > > > http://lists.openstack.org/pipermail/openstack-docs/2015-October/007622.html > > > > > > > > > This is quite serious, as Lana is basically saying that RDO isn't a > > > viable way to deploy OpenStack in Liberty, and so it's being removed > > > from the docs. > > > > > > It would be helpful if someone closer to Liberty packages, and > Delorean, > > > could participate there in a constructive way to bring this to a happy > > > conclusion before the release tomorrow. > > > > > > Thanks. > > > > > > --Rich > > > > > > > > > -------- Forwarded Message -------- > > > Subject: [OpenStack-docs] [install-guide] Status of RDO > > > Date: Wed, 14 Oct 2015 16:22:45 +1000 > > > From: Lana Brindley > > > To: openstack-docs at lists.openstack.org < > openstack-docs at lists.openstack.org> > > > > > > -----BEGIN PGP SIGNED MESSAGE----- > > > Hash: SHA256 > > > > > > Hi everyone, > > > > > > We've been unable to obtain good pre-release packages from Red Hat for > > > the Fedora and Red Hat/CentOS repos, despite our best efforts. This has > > > left the RDO Install Guide in a largely untested state, so I don't feel > > > confident publishing it at this stage. > > > > > > As far as we can tell, Fedora are no longer planning on having > > > pre-release packages available, so this might be a permanent change for > > > that OS. For Red Hat/CentOS, it seems to be a temporary problem, so > > > hopefully we can get the packages, complete testing, and publish the > > > book soon. > > > > > > The patch to remove RDO is here, for anyone who cares to comment: > > > https://review.openstack.org/#/c/234584/ > > > > > > Lana > > > > > > - -- Lana Brindley > > > Technical Writer > > > Rackspace Cloud Builders Australia > > > http://lanabrindley.com > > > -----BEGIN PGP SIGNATURE----- > > > Version: GnuPG v2 > > > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ > > > > > > iQEcBAEBCAAGBQJWHfS1AAoJELppzVb4+KUyM7cH/ii5Ekz5vjTe3dTykXBUbWGt > > > bR2XJTAbS/mFB+xayecNNPLvgejI6Nxvk8msSFNnN7/ZyDNwr+eceQw7ftMKuJnR > > > h7qKBb6o5iayLJxgNRK3Kjo13NjGdaiXwfLTbB5br/aiP2HHsrDRexAcLteUCKGt > > > eHbZUEYqg4VADUvodxNpbZ+7fHuXrIRZoH4aDQ4+o1p0dCdw+vkjzF/MzPSgZFar > > > Rq9L94rpofDat9ymuW48c+SgUeOnmTvxwEN8ExTENNMXo4nUOJwcUS65J6XURO9K > > > RUGvjPmSmm7ZaQGE+koKyGZSzF/Oqoa+vBUwxdeQqmtr2tWo//jlUVV/PDc8QV0= > > > =rQp4 > > > -----END PGP SIGNATURE----- > > > > > > _______________________________________________ > > > OpenStack-docs mailing list > > > OpenStack-docs at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -- > > Rich Bowen - rbowen at redhat.com > > OpenStack Community Liaison > > http://rdoproject.org/ > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > -- > Steve Gordon, RHCE > Sr. 
Technical Product Manager, > Red Hat Enterprise Linux OpenStack Platform > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From ibravo at ltgfederal.com Fri Oct 16 20:33:45 2015
From: ibravo at ltgfederal.com (Bravo, Ignacio)
Date: Fri, 16 Oct 2015 16:33:45 -0400
Subject: [Rdo-list] [rdo-manager] [Ceph] Ceph deployment / usage
In-Reply-To: <561F9E63.1020906@ltgfederal.com>
References: <561F9E63.1020906@ltgfederal.com>
Message-ID:
Anyone?
--
*Ignacio Bravo*
*LTG Federal*
Mb: 571.224.6046
ibravo at ltgfederal.com
On Thu, Oct 15, 2015 at 8:38 AM, Ignacio Bravo wrote:
> All,
>
> I need to deploy oVirt today to put some VMs on it, and it requires either
> CephFS or GlusterFS as the backend. Ideally, I would choose CephFS so that
> oVirt + RDO-Manager can share the same storage resources with Ceph and not
> have two distinct storage products.
>
> Over the last couple of days a lot of bug fixes have landed in
> RDO-Manager, but I have not yet been able to perform an HA deployment (3
> controllers) plus Ceph as the backend.
>
> So my question is: can I deploy Ceph as a standalone product and then
> configure RDO-Manager to use this pool without deploying a new Ceph
> instance? Or shall I deploy everything through RDO-Manager and then build
> from there?
>
> Thanks for your insight.
> IB > > -- > Ignacio Bravo > LTG Federal Inc > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From gfidente at redhat.com Fri Oct 16 23:27:04 2015 From: gfidente at redhat.com (Giulio Fidente) Date: Sat, 17 Oct 2015 01:27:04 +0200 Subject: [Rdo-list] [rdo-manager] [Ceph] Ceph deployment / usage In-Reply-To: <561F9E63.1020906@ltgfederal.com> References: <561F9E63.1020906@ltgfederal.com> Message-ID: <562187C8.5050107@redhat.com> On 10/15/2015 02:38 PM, Ignacio Bravo wrote: > All, > > I need to deploy oVirt to put some VMs today and it requires a either > CephFS or GlusterFS as the backends. Ideally, I would chose CephFS so > that oVirt + RDO-Manager can share the same storage resources with Ceph > and not having two distinct storage products. > > After the last couple of days, a lot of bug fixes have been done to > RDO-Manager, but I have not yet being able to perform a HA deployment (3 > controllers) plus Ceph as the backend. > > So my question is can I deploy Ceph as a stand alone product and then > configure RDO-Manager to use this pool without deploying a new Ceph > instance? or shall I deploy everything through RDO-Manager and then > build from there? hi, it is possible to deploy an Overcloud and provide to it as input parameters the details so that it would attach nova, cinder and glance to an external, unmanaged, Ceph deployment yes. To do so, make a copy of the file /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml edit the parameters as needed and then deploy adding -e /path/to/your/custom.yaml This said, I am concerned about the failed attempts to deploy with Ceph because we do test the deployment with Ceph as one of our gating jobs upstream in gerrit, it's not a 2nd class citizen and it'd be interesting to figure what is going wrong for you. Were you able to deploy with 3 controllers without Ceph on that same environment? Did you provide both these arguments: --ceph-storage-scale X -e /usr/share/openstack-tripleo-heat-temapltes/environments/storage-environment.yaml when deploying with Ceph? If the CephStorage nodes are ACTIVE in Nova, could you try logging on both the controller node and the cephstorage node to see what 'sudo ceph status' returns? -- Giulio Fidente GPG KEY: 08D733BA From mohammed.arafa at gmail.com Sat Oct 17 04:39:00 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sat, 17 Oct 2015 06:39:00 +0200 Subject: [Rdo-list] [rdo-manager] puppet error on undercloud install Message-ID: hello i am attempting to install liberty on physical machines and i get this error on openstack undercloud install. i have looked at the file in question and the line is for allowed hosts. which is a repeat of an earlier line. not sure why the install is burping out this error. pointers/hints would be nice. 
Error: (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36 at /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.cloud.vms
Wrapped exception: (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36
Error: (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36 at /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.cloud.vms
--
*805010942448935* * * *GR750055912MA* *Link to me on LinkedIn *
From shayne.alone at gmail.com Sat Oct 17 06:58:29 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Sat, 17 Oct 2015 10:28:29 +0330
Subject: [Rdo-list] overcloud update [external - float interface name]
Message-ID:
I have deployed my overcloud via:
###
openstack overcloud deploy --compute-scale 4 --templates --compute-flavor compute --control-flavor control
###
The baremetal servers' interfaces are named enoX, X == {1,2,3,4}.
I have also connected every baremetal server's eno1 directly to the undercloud's eth1 as the management zone.
Now I want to create an external network for floating IP assignment via:
###
neutron net-create ext-net --router:external --provider:physical_network datacentre --provider:network_type flat
###
datacentre is directly connected to the public IP address router, but the problem is that I don't have such an interface name at the OS level...
It seems I have two solutions:
1- rename the controller's eno2 -> datacentre
2- update the overcloud and (I don't know how) force it to use [ eno2 ] instead of [ datacentre ]
:-?
Solution 1 (OS-level interface renaming) seems to be a bit destructive... the CentOS wiki says we should edit the kernel params in grub to revert to the old interface naming, and also stick to udev rules and the like...
I would prefer to know how I can update my overcloud to accept eno2 instead of datacentre.
Thanks
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From marius at remote-lab.net Sat Oct 17 12:33:35 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sat, 17 Oct 2015 14:33:35 +0200
Subject: Re: [Rdo-list] overcloud update [external - float interface name]
In-Reply-To:
References:
Message-ID:
On Sat, Oct 17, 2015 at 8:58 AM, AliReza Taleghani wrote:
> I have deployed my overcloud via:
> ###
> openstack overcloud deploy --compute-scale 4 --templates --compute-flavor compute --control-flavor control
> ###
>
> The baremetal servers' interfaces are named enoX, X == {1,2,3,4}.
> I have also connected every baremetal server's eno1 directly to the undercloud's eth1 as the management zone.
> Now I want to create an external network for floating IP assignment via:
> ###
> neutron net-create ext-net --router:external --provider:physical_network datacentre --provider:network_type flat
> ###
>
> datacentre is directly connected to the public IP address router, but the problem is that I don't have such an interface name at the OS level...
Datacentre is a mapping which points to the br-ex OVS bridge. By default I believe the provisioning network NIC gets bridged to br-ex.
If you want to have another interface bridged to br-ex you just need to add the following argument to the deploy command:
--neutron-public-interface eno2
or
--neutron-public-interface nic2
assuming that eno2 is the 2nd NIC which has an active cable connected.
You should also check the network isolation feature for more advanced networking configurations:
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/network_isolation.html
> It seems I have two solutions:
>
> 1- rename the controller's eno2 -> datacentre
> 2- update the overcloud and (I don't know how) force it to use [ eno2 ] instead of [ datacentre ]
>
> :-?
>
> Solution 1 (OS-level interface renaming) seems to be a bit destructive... the CentOS wiki says we should edit the kernel params in grub to revert to the old interface naming, and also stick to udev rules and the like...
>
> I would prefer to know how I can update my overcloud to accept eno2 instead of datacentre.
>
> Thanks
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
From ibravo at ltgfederal.com Sun Oct 18 03:52:14 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Sat, 17 Oct 2015 23:52:14 -0400
Subject: [Rdo-list] [RDO-Manager] [undercloud] Repository moved
Message-ID: <5623176E.4030806@ltgfederal.com>
I just tried to install the undercloud and it fails with the following:
Caching puppet-ceph from https://git.openstack.org/stackforge/puppet-ceph.git in /root/.cache/image-create/source-repositories/puppet_ceph_37a8b07e4ba60cba4257960ec9cd83b3213fe5f1
Cloning into '/root/.cache/image-create/source-repositories/puppet_ceph_37a8b07e4ba60cba4257960ec9cd83b3213fe5f1.tmp'...
fatal: repository 'https://git.openstack.org/stackforge/puppet-ceph.git/' not found
I have tried replicating it from the command line, and it appears that the repository has moved to:
git clone https://git.openstack.org/openstack/puppet-ceph.git
--
Ignacio Bravo
LTG Federal Inc
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From shayne.alone at gmail.com Sun Oct 18 04:38:13 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Sun, 18 Oct 2015 08:08:13 +0330
Subject: [Rdo-list] overcloud cinder volume limits
In-Reply-To:
References:
Message-ID:
I think there is a misconfiguration here: the notification pops up when you navigate to [ Admin > System > Defaults ]
#Notification
Error: Unable to retrieve volume limit information.
###################################################
If you filter the logs for cinder or error, the entry below comes up, which highlights two points:
1- Authentication target => localhost
2- Authentication API version => v3
I'm sure that localhost is wrong; it should be the float address or at least the controller management interface address. As shown below, nobody is listening on localhost:
[root at overcloud-controller-0 ~]# netstat -npltu | grep 5000
tcp 0 0 10.20.30.28:5000 0.0.0.0:* LISTEN 2340/python2
tcp 0 0 10.20.30.26:5000 0.0.0.0:* LISTEN 1872/haproxy
#ControllerLog
Oct 18 04:22:25 overcloud-controller-0.localdomain cinder-api[2374]: 2015-10-18 04:22:25.912 5406 ERROR cinder.api.middleware.fault [req-3512a941-266d-49f0-b580-4c4ab93ca4e9 42e375bb99ac4107803ea2dc8a254f8d 638b58b4ff54462195f34adf6cec427c - - -] Caught error: Authorization failed: Unable to establish connection to http://localhost:5000/v3/auth/tokens
###################################################
But I'm still looking for how to force the [ cinder.api.middleware ] authentication configuration to use an updated target IP address in place of localhost.
Sincerely,
Ali R. Taleghani
@linkedIn
On Fri, Oct 16, 2015 at 4:24 PM, AliReza Taleghani wrote:
> I have some errors via Horizon notifications which point to cinder auth API calls...
>
> [image: Screenshot from 2015-10-16 16:13:15.png]
>
> I tried some configuration changes as shown in: http://paste.ubuntu.com/12798452/
>
> to apply auth info to keystonecontext :-/
> but it seems nothing gets solved...
>
> --
> Sincerely,
> Ali R. Taleghani
-------------- next part -------------- An HTML attachment was scrubbed... URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2015-10-16 16:13:15.png Type: image/png Size: 76343 bytes Desc: not available URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2015-10-16 16:13:15.png Type: image/png Size: 76343 bytes Desc: not available URL:
From shayne.alone at gmail.com Sun Oct 18 04:49:25 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Sun, 18 Oct 2015 08:19:25 +0330
Subject: Re: [Rdo-list] overcloud update [external - float interface name]
In-Reply-To:
References:
Message-ID:
:"> it was my fault...
The trick is covered in:
[stack at undercloud ~]$ openstack help overcloud deploy
########
--neutron-public-interface NEUTRON_PUBLIC_INTERFACE
*Deprecated*
########
But if it's deprecated, what else exists? Does it mean we should check neutron's own latest docs?
Sincerely,
Ali R. Taleghani
@linkedIn
On Sat, Oct 17, 2015 at 4:03 PM, Marius Cornea wrote:
> On Sat, Oct 17, 2015 at 8:58 AM, AliReza Taleghani wrote:
> > I have deployed my overcloud via:
> > ###
> > openstack overcloud deploy --compute-scale 4 --templates --compute-flavor compute --control-flavor control
> > ###
> >
> > The baremetal servers' interfaces are named enoX, X == {1,2,3,4}.
> > I have also connected every baremetal server's eno1 directly to the undercloud's eth1 as the management zone.
> > Now I want to create an external network for floating IP assignment via:
> > ###
> > neutron net-create ext-net --router:external --provider:physical_network datacentre --provider:network_type flat
> > ###
> >
> > datacentre is directly connected to the public IP address router, but the problem is that I don't have such an interface name at the OS level...
>
> Datacentre is a mapping which points to the br-ex OVS bridge. By default I believe the provisioning network NIC gets bridged to br-ex.
If you > want to have another interface bridged to br-ex you just need to add > the following argument to the deploy command: > > --neutron-public-interface eno2 > > or > > --neutron-public-interface nic2 > assuming that eno2 is the 2nd nic which has an active cable connected > > You should also check the network isolation feature for more advanced > networking configurations: > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/network_isolation.html > > > is seem's I should have to solution: > > > > 1- rename Controller eno2 -> datacentre > > 2- update overcloud and don't know how but force it to know [ eno2 ] > instead > > of [ datacentre ] > > > > :-? > > > > solution 1 ( os level interface renaming ) seems to be a bit > distructive... > > cos centos wiki's told we should edit kernel param at grub and revert to > old > > interface naming also stick to udev rules and alike... > > > > I pereferer to know how can I update my overcloud to accept eno2 instead > of > > datacentre? > > > > thanks > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Sun Oct 18 09:15:08 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 18 Oct 2015 11:15:08 +0200 Subject: [Rdo-list] overcloud cinder volume limits In-Reply-To: References: Message-ID: On Sun, Oct 18, 2015 at 6:38 AM, AliReza Taleghani wrote: > I think there is a mis configuration here: > > the notification pop up when you navigate on [ Admin > System > Defaults ] > #Notification > Error: Unable to retrieve volume limit information. > ################################################### > > if you filter logs for cnider or error this one brings up which delight > two point! > 1- Authentication target => localhost > 2- Authentication API version => v3 > > I'm sure that the localhost is wrong and should be the float address or at > lease controller management interface address > > as bellow nobody listen on localhost > [root at overcloud-controller-0 ~]# netstat -npltu | grep 5000 > tcp 0 0 10.20.30.28:5000 0.0.0.0:* > LISTEN 2340/python2 > tcp 0 0 10.20.30.26:5000 0.0.0.0:* > LISTEN 1872/haproxy > > #ControllerLog > Oct 18 04:22:25 overcloud-controller-0.localdomain cinder-api[2374]: > 2015-10-18 04:22:25.912 5406 ERROR cinder.api.middleware.fault > [req-3512a941-266d-49f0-b580-4c4ab93ca4e9 42e375bb99ac4107803ea2dc8a254f8d > 638b58b4ff54462195f34adf6cec427c - - -] Caught error: Authorization failed: > Unable to establish connection to http://localhost:5000/v3/auth/tokens > ################################################### > > > But I'm still looking to find how can I force [ cinder.api.middleware ] > authentication configuration for example to update target ip address, be > replace with localhost > There is a ticket filed about this but I don't have a workaround at this point: https://bugzilla.redhat.com/show_bug.cgi?id=1272572 > > > Sincerely, > Ali R. Taleghani > @linkedIn > > On Fri, Oct 16, 2015 at 4:24 PM, AliReza Taleghani > wrote: > >> >> I have some error via horizon nofication which points to cinder auth api >> calls... 
>> >> [image: Screenshot from 2015-10-16 16:13:15.png] >> >> I try some configuration change as show in: >> http://paste.ubuntu.com/12798452/ >> >> to apply auth info keystonecontext :-/ >> but it seems nothing to get solve... >> >> -- >> Sincerely, >> Ali R. Taleghani >> > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2015-10-16 16:13:15.png Type: image/png Size: 76343 bytes Desc: not available URL: From marius at remote-lab.net Sun Oct 18 09:31:37 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 18 Oct 2015 11:31:37 +0200 Subject: [Rdo-list] overcloud update [external - float interface name] In-Reply-To: References: Message-ID: On Sun, Oct 18, 2015 at 6:49 AM, AliReza Taleghani wrote: > :"> it's was fault... > > nice tricks covered on: > [stack at undercloud ~]$ openstack help overcloud deploy > > ######## > --neutron-public-interface NEUTRON_PUBLIC_INTERFACE > Deprecated > ######## > but if it's deprecated! what else exist? > does it mean we should check neutron it self lated doc? I'm not sure about why it's being deprecated but I believe you can also pass it as a parameter in an environment file: https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-without-mergepy.yaml#L142-L145 > Sincerely, > Ali R. Taleghani > @linkedIn > > On Sat, Oct 17, 2015 at 4:03 PM, Marius Cornea > wrote: >> >> On Sat, Oct 17, 2015 at 8:58 AM, AliReza Taleghani >> wrote: >> > I have been deployed my overcloud via: >> > ### >> > openstack overcloud deploy --compute-scale 4 --templates >> > --compute-flavor >> > compute --control-flavor control >> > ### >> > >> > the baremetal server interfaces naming is as enoX X == {1,2,3,4} >> > I also have connected all baremetal servers eno1 directly into >> > undercloud >> > eth1 as management zone >> > Now I wana create external network for float ip assignment via: >> > ### >> > neutron net-create ext-net --router:external --provider:physical_network >> > datacentre --provider:network_type flat >> > ### >> > >> > >> > the datacentre is directlly connected to the public ip address router, >> > but >> > the problem is that I don't have such and interface name in OS level... >> >> Datacentre is a mapping which points to br-ex ovs bridge. By default I >> believe the provisioning network nic gets bridged to br-ex. If you >> want to have another interface bridged to br-ex you just need to add >> the following argument to the deploy command: >> >> --neutron-public-interface eno2 >> >> or >> >> --neutron-public-interface nic2 >> assuming that eno2 is the 2nd nic which has an active cable connected >> >> You should also check the network isolation feature for more advanced >> networking configurations: >> >> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/network_isolation.html >> >> > is seem's I should have to solution: >> > >> > 1- rename Controller eno2 -> datacentre >> > 2- update overcloud and don't know how but force it to know [ eno2 ] >> > instead >> > of [ datacentre ] >> > >> > :-? >> > >> > solution 1 ( os level interface renaming ) seems to be a bit >> > distructive... 
>> > cos centos wiki's told we should edit kernel param at grub and revert to >> > old >> > interface naming also stick to udev rules and alike... >> > >> > I pereferer to know how can I update my overcloud to accept eno2 instead >> > of >> > datacentre? >> > >> > thanks >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From marius at remote-lab.net Sun Oct 18 09:36:42 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 18 Oct 2015 11:36:42 +0200 Subject: [Rdo-list] [RDO-Manager] [undercloud] Repository moved In-Reply-To: <5623176E.4030806@ltgfederal.com> References: <5623176E.4030806@ltgfederal.com> Message-ID: Hi Ignacio, I believe installing the puppet modules from source is not required anymore so try running installation without the 'export DIB_INSTALLTYPE_puppet_modules=source' Thanks, Marius On Sun, Oct 18, 2015 at 5:52 AM, Ignacio Bravo wrote: > I just tried to install the undercloud and it fails with the following: > > Caching puppet-ceph from > https://git.openstack.org/stackforge/puppet-ceph.git in > /root/.cache/image-create/source-repositories/puppet_ceph_37a8b07e4ba60cba4257960ec9cd83b3213fe5f1 > Cloning into > '/root/.cache/image-create/source-repositories/puppet_ceph_37a8b07e4ba60cba4257960ec9cd83b3213fe5f1.tmp'... > fatal: repository 'https://git.openstack.org/stackforge/puppet-ceph.git/' > not found > > > I have tried replicating from a command line, and it appears that the > repository was moved to: > > git clone https://git.openstack.org/openstack/puppet-ceph.git > > > -- > Ignacio Bravo > LTG Federal Inc > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From shayne.alone at gmail.com Sun Oct 18 09:38:04 2015 From: shayne.alone at gmail.com (AliReza Taleghani) Date: Sun, 18 Oct 2015 13:08:04 +0330 Subject: [Rdo-list] overcloud cinder volume limits In-Reply-To: References: Message-ID: I think I figure this out: Line: 27 file: /usr/lib/python2.7/site-packages/cinder/keymgr/key_mgr.py this was the default param on cinder key manager which was: default='http://localhost:5000/v3', which I updated it with: default='http://10.20.30.26:5000/v3', and that was all.... but it's the default library value I thinks, and it should be possible to be overwrite on some configuration files i hope... (ex: /etc/cinder/api-paste.ini) -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Sun Oct 18 09:49:33 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sun, 18 Oct 2015 11:49:33 +0200 Subject: [Rdo-list] overcloud cinder volume limits In-Reply-To: References: Message-ID: That's great! Please add your findings to the BZ so we can properly keep track of it: https://bugzilla.redhat.com/show_bug.cgi?id=1272572 Thanks, Marius On Sun, Oct 18, 2015 at 11:38 AM, AliReza Taleghani wrote: > I think I figure this out: > > Line: 27 > file: /usr/lib/python2.7/site-packages/cinder/keymgr/key_mgr.py > > this was the default param on cinder key manager which was: > > default='http://localhost:5000/v3', > which I updated it with: > default='http://10.20.30.26:5000/v3', > and that was all.... 
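(A side note on the fix quoted above: rather than patching key_mgr.py in place, the same default should be overridable from /etc/cinder/cinder.conf — assuming the option registered around the quoted line 27 is Liberty's encryption_auth_url in the keymgr group, which is an assumption, not something verified on this deployment:

[keymgr]
encryption_auth_url = http://10.20.30.26:5000/v3

followed by a restart of the cinder services, so the change survives package updates.)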
> but it's the default library value I think, and it should be possible to overwrite it in some configuration file, I hope... (ex: /etc/cinder/api-paste.ini)
From marius at remote-lab.net Sun Oct 18 10:13:23 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sun, 18 Oct 2015 12:13:23 +0200
Subject: [Rdo-list] HA with network isolation on virt howto
Message-ID:
Hi everyone,
I wrote a blog post about how to deploy an HA overcloud with network isolation on top of the virtual environment. I tried to provide some insights into what instack-virt-setup creates and how to use the network isolation templates in the virtual environment. I hope you find it useful.
https://remote-lab.net/rdo-manager-ha-openstack-deployment/
Thanks,
Marius
From celik.esra at tubitak.gov.tr Mon Oct 19 10:34:32 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Mon, 19 Oct 2015 13:34:32 +0300 (EEST)
Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com>
Message-ID: <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr>
Hi again,
"nova list" was empty after the introspection stage, which had not completed successfully, so I could not ssh into the nodes.. Is there another way to obtain the IP addresses?
[stack at undercloud ~]$ sudo systemctl|grep ironic
openstack-ironic-api.service loaded active running OpenStack Ironic API service
openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
If I start the deployment anyway, I get 2 nodes in ERROR state
[stack at undercloud ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Stack failed with status: resources.Controller: resources[0]: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
[stack at undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| ID                                   | Name                    | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
| 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
Did the repositories update over the weekend? Should I restart the whole Undercloud and Overcloud installation from the beginning?
Thanks.
Esra ÇELİK
Uzman Araştırmacı
Bilişim Teknolojileri Enstitüsü
TÜBİTAK BİLGEM
41470 GEBZE - KOCAELİ
T +90 262 675 3140
F +90 262 646 3187
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr
................................................................
Sorumluluk Reddi ----- Orijinal Mesaj ----- > Kimden: "Sasha Chuzhoy" > Kime: "Esra Celik" > Kk: "Marius Cornea" , rdo-list at redhat.com > G?nderilenler: 16 Ekim Cuma 2015 18:44:49 > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > found" > Hi Esra, > if the undercloud nodes are UP - you can login with: ssh heat-admin@ > You can see the IP of the nodes with: "nova list". > BTW, > What do you see if you run "sudo systemctl|grep ironic" on the undercloud? > Best regards, > Sasha Chuzhoy. > ----- Original Message ----- > > From: "Esra Celik" > > To: "Sasha Chuzhoy" > > Cc: "Marius Cornea" , rdo-list at redhat.com > > Sent: Friday, October 16, 2015 1:40:16 AM > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > was found" > > > > Hi Sasha, > > > > I have 3 nodes, 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute > > > > This is my undercloud.conf file: > > > > image_path = . > > local_ip = 192.0.2.1/24 > > local_interface = em2 > > masquerade_network = 192.0.2.0/24 > > dhcp_start = 192.0.2.5 > > dhcp_end = 192.0.2.24 > > network_cidr = 192.0.2.0/24 > > network_gateway = 192.0.2.1 > > inspection_interface = br-ctlplane > > inspection_iprange = 192.0.2.100,192.0.2.120 > > inspection_runbench = false > > undercloud_debug = true > > enable_tuskar = false > > enable_tempest = false > > > > IP configuration for the Undercloud is as follows: > > > > stack at undercloud ~]$ ip addr > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > inet 127.0.0.1/8 scope host lo > > valid_lft forever preferred_lft forever > > inet6 ::1/128 scope host > > valid_lft forever preferred_lft forever > > 2: em1: mtu 1500 qdisc mq state UP qlen > > 1000 > > link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff > > inet 10.1.34.81/24 brd 10.1.34.255 scope global em1 > > valid_lft forever preferred_lft forever > > inet6 fe80::a9e:1ff:fe50:8a21/64 scope link > > valid_lft forever preferred_lft forever > > 3: em2: mtu 1500 qdisc mq master > > ovs-system > > state UP qlen 1000 > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > > 4: ovs-system: mtu 1500 qdisc noop state DOWN > > link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff > > 5: br-ctlplane: mtu 1500 qdisc noqueue > > state UNKNOWN > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane > > valid_lft forever preferred_lft forever > > inet6 fe80::a9e:1ff:fe50:8a22/64 scope link > > valid_lft forever preferred_lft forever > > 6: br-int: mtu 1500 qdisc noop state DOWN > > link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff > > > > And I attached two screenshots showing the boot stage for overcloud nodes > > > > Is there a way to login the overcloud nodes to see their IP configuration? > > > > Thanks > > > > Esra ?EL?K > > T?B?TAK B?LGEM > > www.bilgem.tubitak.gov.tr > > celik.esra at tubitak.gov.tr > > > > ----- Orijinal Mesaj ----- > > > > > Kimden: "Sasha Chuzhoy" > > > Kime: "Esra Celik" > > > Kk: "Marius Cornea" , rdo-list at redhat.com > > > G?nderilenler: 15 Ekim Per?embe 2015 16:58:41 > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was > > > found" > > > > > Just my 2 cents. > > > Did you make sure that all the registered nodes are configured to boot > > > off > > > the right NIC first? > > > Can you watch the console and see what happens on the problematic nodes > > > upon > > > boot? > > > > > Best regards, > > > Sasha Chuzhoy. 
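(On the boot-order point above: the PXE NIC can also be forced from the undercloud instead of each server's BIOS. A sketch, again assuming the stock Liberty ironic CLI, with <node-uuid> taken from ironic node-list:

[stack at undercloud ~]$ ironic node-set-boot-device <node-uuid> pxe --persistent
[stack at undercloud ~]$ ironic node-validate <node-uuid>

node-validate re-checks the power and boot interfaces, so it is a cheap sanity test before another introspection attempt.)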
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Marius Cornea"
> > > > Cc: rdo-list at redhat.com
> > > > Sent: Thursday, October 15, 2015 4:40:46 AM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > > > was found"
> > > >
> > > > Sorry for the late reply
> > > >
> > > > ironic node-show results are below. I have my nodes power on after
> > > > introspection bulk start. And I get the following warning:
> > > > Introspection didn't finish for nodes
> > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > >
> > > > Doesn't seem to be the same issue with
> > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > >
> > > > [stack at undercloud ~]$ ironic node-list
> > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
> > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
> > > >
> > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > | Property               | Value |
> > > > | target_power_state     | None |
> > > > | extra                  | {} |
> > > > | last_error             | None |
> > > > | updated_at             | 2015-10-15T08:26:42+00:00 |
> > > > | maintenance_reason     | None |
> > > > | provision_state        | available |
> > > > | clean_step             | {} |
> > > > | uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed |
> > > > | console_enabled        | False |
> > > > | target_provision_state | None |
> > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00 |
> > > > | maintenance            | False |
> > > > | inspection_started_at  | None |
> > > > | inspection_finished_at | None |
> > > > | power_state            | power on |
> > > > | driver                 | pxe_ipmitool |
> > > > | reservation            | None |
> > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10', u'cpus': u'4', u'capabilities': u'boot_option:local'} |
> > > > | instance_uuid          | None |
> > > > | name                   | None |
> > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18', u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f-e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > | created_at             | 2015-10-15T07:49:08+00:00 |
> > > > | driver_internal_info   | {u'clean_steps': None} |
> > > > | chassis_uuid           | |
> > > > | instance_info          | {} |
> > > >
> > > > [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > | Property               | Value |
> > > > | target_power_state     | None |
> > > > | extra                  | {} |
> > > > | last_error             | None |
> > > > | updated_at             | 2015-10-15T08:26:42+00:00 |
> > > > | maintenance_reason     | None |
> > > > | provision_state        | available |
> > > > | clean_step             | {} |
> > > > | uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218 |
> > > > | console_enabled        | False |
> > > > | target_provision_state | None |
> > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00 |
> > > > | maintenance            | False |
> > > > | inspection_started_at  | None |
> > > > | inspection_finished_at | None |
> > > > | power_state            | power on |
> > > > | driver                 | pxe_ipmitool |
> > > > | reservation            | None |
> > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100', u'cpus': u'4', u'capabilities': u'boot_option:local'} |
> > > > | instance_uuid          | None |
> > > > | name                   | None |
> > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19', u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f-e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > | created_at             | 2015-10-15T07:49:08+00:00 |
> > > > | driver_internal_info   | {u'clean_steps': None} |
> > > > | chassis_uuid           | |
> > > > | instance_info          | {} |
> > > >
> > > > [stack at undercloud ~]$
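The properties rows above are what the Nova scheduler matches against the
baremetal flavor, and the first node only advertises a local_gb of 10. A quick
cross-check along these lines may explain a "No valid host was found" (a
sketch; "baremetal" is the default flavor name in the RDO Manager docs, so
adjust if yours differs):

  # what Ironic recorded for the node
  ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | grep -A1 properties

  # what the flavor demands; vcpus/ram/disk must all fit within the node
  nova flavor-show baremetal | grep -E 'vcpus|ram|disk'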
> > > > And below I added my history for the stack user. I don't think I am
> > > > doing anything other than what is in the
> > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty
> > > > doc
> > > >
> > > > 1 vi instackenv.json
> > > > 2 sudo yum -y install epel-release
> > > > 3 sudo curl -o /etc/yum.repos.d/delorean.repo
> > > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo
> > > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/'
> > > > /etc/yum.repos.d/delorean-current.repo
> > > > 6 sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
> > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> > > > EOF"
> > > > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo
> > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> > > > 8 sudo yum -y install yum-plugin-priorities
> > > > 9 sudo yum install -y python-tripleoclient
> > > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> > > > 11 vi undercloud.conf
> > > > 12 export DIB_INSTALLTYPE_puppet_modules=source
> > > > 13 openstack undercloud install
> > > > 14 source stackrc
> > > > 15 export NODE_DIST=centos7
> > > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
> > > > 17 export DIB_INSTALLTYPE_puppet_modules=source
> > > > 18 openstack overcloud image build --all
> > > > 19 ls
> > > > 20 openstack overcloud image upload
> > > > 21 openstack baremetal import --json instackenv.json
> > > > 22 openstack baremetal configure boot
> > > > 23 ironic node-list
> > > > 24 openstack baremetal introspection bulk start
> > > > 25 ironic node-list
> > > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > 28 history
> > > >
> > > > Thanks
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
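The history above ends with the introspection of step 24 never finishing. One
check worth making at that point is the per-node introspection status (a
sketch; it assumes the python-ironic-inspector-client pinned in the
includepkgs list above provides the "openstack baremetal introspection"
plugin, as in the Liberty docs):

  source stackrc
  # "finished" stays False while introspection is still stuck on a node
  openstack baremetal introspection status 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
  openstack baremetal introspection status 6f35ac24-135d-4b99-8a24-fa2b731bd218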
> > > > ----- Original Message -----
> > > > From: "Marius Cornea"
> > > > To: "Esra Celik"
> > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > Sent: Wednesday, October 14, 2015 19:40:07
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > > > was found"
> > > >
> > > > Can you do ironic node-show for your ironic nodes and post the results?
> > > > Also check the following suggestion if you're experiencing the same issue:
> > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Marius Cornea"
> > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > host was found"
> > > > >
> > > > > Well in the early stage of the introspection I can see the Client IP of
> > > > > the nodes (screenshot attached). But then I see continuous
> > > > > ironic-python-agent errors (screenshot-2 attached). The errors repeat
> > > > > after a time out.. And the nodes are not powered off.
> > > > >
> > > > > Seems like I am stuck in the introspection stage..
> > > > >
> > > > > I can use the ipmitool command to successfully power the nodes on/off
> > > > >
> > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P power status
> > > > > Chassis Power is on
> > > > >
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > Chassis Power is on
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power off
> > > > > Chassis Power Control: Down/Off
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > Chassis Power is off
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power on
> > > > > Chassis Power Control: Up/On
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > Chassis Power is on
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > ----- Original Message -----
> > > > >
> > > > > From: "Marius Cornea"
> > > > > To: "Esra Celik"
> > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > Sent: Wednesday, October 14, 2015 14:59:30
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > host was found"
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Esra Celik"
> > > > > > To: "Marius Cornea"
> > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > > host was found"
> > > > > >
> > > > > > Well today I started with re-installing the OS, and nothing seems
> > > > > > wrong with the undercloud installation; then
> > > > > >
> > > > > > I see an error during the image build
> > > > > >
> > > > > > [stack at undercloud ~]$ openstack overcloud image build --all
> > > > > > ...
> > > > > > a lot of log
> > > > > > ...
> > > > > > ++ cat /etc/dib_dracut_drivers
> > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig
> > > > > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug
> > > > > > rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ /
> > > > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net
> > > > > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock
> > > > > > target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> > > > > > cat: write error: Broken pipe
> > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > > > > + chmod o+r /tmp/kernel
> > > > > > + trap EXIT
> > > > > > + target_tag=99-build-dracut-ramdisk
> > > > > > + date +%s.%N
> > > > > > + output '99-build-dracut-ramdisk completed'
> > > > > > ...
> > > > > > a lot of log
> > > > > > ...
> > > > >
> > > > > You can ignore that afaik, if you end up having all the required images
> > > > > it should be ok.
> > > > >
> > > > > > Then, during the introspection stage I see ironic-python-agent errors
> > > > > > on the nodes (screenshot attached) and the following warnings
> > > > >
> > > > > That looks odd. Is it showing up in the early stage of the
> > > > > introspection? At some point it should receive an address by DHCP and
> > > > > the Network is unreachable error should disappear. Does the
> > > > > introspection complete and the nodes are turned off?
> > > > >
> > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> > > > > > grep -i "warning\|error"
> > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > > > > > Option "http_url" from group "pxe" is deprecated. Use option "http_url"
> > > > > > from group "deploy".
> > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > > > > > Option "http_root" from group "pxe" is deprecated. Use option "http_root"
> > > > > > from group "deploy".
> > > > > >
> > > > > > Before deployment ironic node-list:
> > > > >
> > > > > This is odd too as I'm expecting the nodes to be powered off before
> > > > > running deployment.
> > > > >
> > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > > > >
> > > > > > During deployment I get the following errors
> > > > > >
> > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> > > > > > grep -i "warning\|error"
> > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > > > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting
> > > > > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f
> > > > > > /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553.
> > > > > > Error: Unexpected error while running command.
> > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > > > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for
> > > > > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while
> > > > > > running command.
> > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740
> > > > > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not
> > > > > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of
> > > > > > 3. Error: IPMI call failed: power status..
> > > > >
> > > > > This looks like an ipmi error, can you try to manually run commands
> > > > > using the ipmitool and see if you get any success? It's also worth
> > > > > filing a bug with details such as the ipmitool version, server model,
> > > > > drac firmware version.
> > > > >
> > > > > > Thanks a lot
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Marius Cornea"
> > > > > To: "Esra Celik"
> > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > Sent: Tuesday, October 13, 2015 21:16:14
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > host was found"
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Esra Celik"
> > > > > > To: "Marius Cornea"
> > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > > host was found"
> > > > > >
> > > > > > During deployment they are powering on and deploying the images. I
> > > > > > see a lot of connection error messages about ironic-python-agent but
> > > > > > ignore them as mentioned here
> > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > > > >
> > > > > That was referring to the introspection stage. From what I can tell you
> > > > > are experiencing issues during deployment as it fails to provision the
> > > > > nova instances, can you check if during that stage the nodes get
> > > > > powered on?
> > > > >
> > > > > Make sure that before overcloud deploy the ironic nodes are available
> > > > > for provisioning (ironic node-list and check the provisioning state
> > > > > column). Also check that you didn't miss any step in the docs in
> > > > > regards to kernel and ramdisk assignment, introspection, flavor
> > > > > creation (so it matches the nodes resources)
> > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> > > > >
> > > > > > In the instackenv.json file I do not need to add the undercloud node,
> > > > > > or do I?
> > > > >
> > > > > No, the nodes details should be enough.
> > > > >
> > > > > > And which log files should I watch during deployment?
> > > > > You can check the openstack-ironic-conductor logs (journalctl -fl -u
> > > > > openstack-ironic-conductor.service) and the logs in /var/log/nova.
> > > > >
> > > > > > Thanks
> > > > > > Esra
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > From: "Marius Cornea"
> > > > > > To: "Esra Celik"
> > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Esra Celik"
> > > > > > > To: "Ignacio Bravo"
> > > > > > > Cc: rdo-list at redhat.com
> > > > > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Actually I re-installed the OS for the Undercloud before deploying. However I
> > > > > > > did not re-install the OS on the Compute and Controller nodes.. I will
> > > > > > > reinstall a basic OS for them too, and retry..
> > > > > >
> > > > > > You don't need to reinstall the OS on the controller and compute, they will
> > > > > > get the image served by the undercloud. I'd recommend that during deployment
> > > > > > you watch the servers' console and make sure they get powered on, PXE boot,
> > > > > > and actually get the image deployed.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > Esra ÇELİK
> > > > > > > TÜBİTAK BİLGEM
> > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > celik.esra at tubitak.gov.tr
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > From: "Ignacio Bravo"
> > > > > > > To: "Esra Celik"
> > > > > > > Cc: rdo-list at redhat.com
> > > > > > > Sent: Tuesday, October 13, 2015 16:36:06
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Esra,
> > > > > > >
> > > > > > > I encountered the same problem after deleting the stack and re-deploying.
> > > > > > >
> > > > > > > It turns out that 'heat stack-delete overcloud' does remove the nodes from
> > > > > > > 'nova list' and one would assume that the baremetal servers are now ready to
> > > > > > > be used for the next stack, but when redeploying, I get the same message of
> > > > > > > not enough hosts available.
> > > > > > >
> > > > > > > You can look into the nova logs and it mentions something about 'node xxx is
> > > > > > > already associated with UUID yyyy' and 'I tried 3 times and I'm giving up'.
> > > > > > > The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.
> > > > > > >
> > > > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > > > >
> > > > > > > IB
> > > > > > >
> > > > > > > __
> > > > > > > Ignacio Bravo
> > > > > > > LTG Federal, Inc
> > > > > > > www.ltgfederal.com
> > > > > > > Office: (703) 951-7760
> > > > > > >
> > > > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > OverCloud deploy fails with error "No valid host was found"
> > > > > > > >
> > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > > > > Deploying templates in the directory
> > > > > > > > /usr/share/openstack-tripleo-heat-templates
> > > > > > > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > > > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR
> > > > > > > > due to "Message: No valid host was found. There are not enough hosts
> > > > > > > > available., Code: 500"
> > > > > > > > Heat Stack create failed.
> > > > > > > >
> > > > > > > > Here are some logs:
> > > > > > > >
> > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > > > > >
> > > > > > > > | resource_name | physical_resource_id                 | resource_type           | resource_status    | updated_time        | stack_name |
> > > > > > > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud |
> > > > > > > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud |
> > > > > > > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs |
> > > > > > > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute    | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |
> > > > > > > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server        | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > > > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server        | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef |
> > > > > > > >
> > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > > > > > | Property               | Value |
> > > > > > > > | attributes             | { "attributes": null, "refs": null } |
> > > > > > > > | creation_time          | 2015-10-13T10:20:36 |
> > > > > > > > | description            | |
> > > > > > > > | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > > > > > | logical_resource_id    | Compute |
> > > > > > > > | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393 |
> > > > > > > > | required_by            | ComputeAllNodesDeployment, ComputeNodesPostDeployment, ComputeCephDeployment, ComputeAllNodesValidationDeployment, AllNodesExtraConfig, allNodesConfig |
> > > > > > > > | resource_name          | Compute |
> > > > > > > > | resource_status        | CREATE_FAILED |
> > > > > > > > | resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" |
> > > > > > > > | resource_type          | OS::Heat::ResourceGroup |
> > > > > > > > | updated_time           | 2015-10-13T10:20:36 |
> > > > > > > >
> > > > > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed.
> > > > > > > >
> > > > > > > > {
> > > > > > > >   "nodes": [
> > > > > > > >     {
> > > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > > >       "mac": [ "08:9E:01:58:CC:A1" ],
> > > > > > > >       "cpu": "4",
> > > > > > > >       "memory": "8192",
> > > > > > > >       "disk": "10",
> > > > > > > >       "arch": "x86_64",
> > > > > > > >       "pm_user": "root",
> > > > > > > >       "pm_password": "calvin",
> > > > > > > >       "pm_addr": "192.168.0.18"
> > > > > > > >     },
> > > > > > > >     {
> > > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > > >       "mac": [ "08:9E:01:58:D0:3D" ],
> > > > > > > >       "cpu": "4",
> > > > > > > >       "memory": "8192",
> > > > > > > >       "disk": "100",
> > > > > > > >       "arch": "x86_64",
> > > > > > > >       "pm_user": "root",
> > > > > > > >       "pm_password": "calvin",
> > > > > > > >       "pm_addr": "192.168.0.19"
> > > > > > > >     }
> > > > > > > >   ]
> > > > > > > > }
> > > > > > > >
> > > > > > > > Any ideas? Thanks in advance
> > > > > > > >
> > > > > > > > Esra ÇELİK
> > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > >
> > > > > > > > _______________________________________________
> > > > > > > > Rdo-list mailing list
> > > > > > > > Rdo-list at redhat.com
> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > > > >
> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mcornea at redhat.com Mon Oct 19 12:36:58 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Mon, 19 Oct 2015 08:36:58 -0400 (EDT)
Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
 <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com>
 <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr>
 <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com>
 <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com>

Hi,

I believe the nodes were stuck in introspection, so they were not ready for
deployment, thus the "not enough hosts" message. Can you describe the
networking setup (how many NICs the nodes have and to what networks they're
connected)?

Thanks,
Marius
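Since Marius's question is about the provisioning network, a few
undercloud-side checks can answer it quickly (a sketch; it assumes the
br-ctlplane/192.0.2.1 layout from the undercloud.conf earlier in the thread,
and that tcpdump is installed):

  ip addr show br-ctlplane        # should carry local_ip, 192.0.2.1/24
  sudo systemctl status openstack-ironic-inspector-dnsmasq.service
  # watch DHCP and TFTP while one node PXE boots off the provisioning NIC
  sudo tcpdump -i br-ctlplane -n 'port 67 or port 68 or port 69'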
----- Original Message -----
> From: "Esra Celik"
> To: "Sasha Chuzhoy"
> Cc: "Marius Cornea" , rdo-list at redhat.com
> Sent: Monday, October 19, 2015 12:34:32 PM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> [...]
+------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > | attributes | { |> | | "attributes": null, |> | | "refs": > > > > > > > > > | null > > > > > > > > > | |> > > > > > > > > > | | > > > > > > > > > | | > > > > > > > > > | } > > > > > > > > > |> | creation_time | 2015-10-13T10:20:36 |> | description | > > > > > > > > > |> | |> > > > > > > > > > |> | | > > > > > > > > > |> | links > > > > > > > > > |> | |> > > > > > > > > > | > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > > > > > > | (self) |> | | > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > > > > > > | | (stack) |> | | > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > > > > > > | | (nested) |> | logical_resource_id | Compute |> | > > > > > > > > > | | physical_resource_id > > > > > > > > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > > > > > > ComputeAllNodesDeployment |> | | ComputeNodesPostDeployment > > > > > > > > > |> > > > > > > > > > | > > > > > > > > > | > > > > > > > > > ComputeCephDeployment |> | | > > > > > > > > > ComputeAllNodesValidationDeployment > > > > > > > > > |> > > > > > > > > > | > > > > > > > > > | > > > > > > > > > AllNodesExtraConfig |> | | allNodesConfig |> | resource_name > > > > > > > > > | > > > > > > > > > Compute > > > > > > > > > |> > > > > > > > > > | resource_status | CREATE_FAILED |> | resource_status_reason > > > > > > > > > | | > > > > > > > > > resources.Compute: ResourceInError:> | > > > > > > > > > resources[0].resources.NovaCompute: > > > > > > > > > Went to status ERROR due to "Message:> | No valid host was > > > > > > > > > found. > > > > > > > > > There > > > > > > > > > are not enough hosts available., Code: 500"> | |> | > > > > > > > > > resource_type > > > > > > > > > | > > > > > > > > > OS::Heat::ResourceGroup |> | updated_time | > > > > > > > > > 2015-10-13T10:20:36 > > > > > > > > > |> > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > > > This is my instackenv.json for 1 compute and 1 control > > > > > > > > > > > > node > > > > > > > > > > > > to > > > > > > > > > > > > be > > > > > > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> > > > > > > > > > "mac":[> > > > > > > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > > > "disk":"10",> > > > > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > > > > "pm_addr":"192.168.0.18"> },> {> "pm_type":"pxe_ipmitool",> > > > > > > > > > "mac":[> > > > > > > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > > > "disk":"100",> > > > > > > > > > "arch":"x86_64",> "pm_user":"root",> "pm_password":"calvin",> > > > > > > > > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? 
> > > > > > > > > Thanks in advance
> > > > > > > > >
> > > > > > > > > Esra ÇELİK
> > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > >
> > > > > > > > > _______________________________________________
> > > > > > > > > Rdo-list mailing list
> > > > > > > > > Rdo-list at redhat.com
> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > > > > >
> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > > > > > >
> > > > > > > > > _______________________________________________
> > > > > > > > > Rdo-list mailing list
> > > > > > > > > Rdo-list at redhat.com
> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > > > > >
> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > >
> > > > > _______________________________________________
> > > > > Rdo-list mailing list
> > > > > Rdo-list at redhat.com
> > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > >
> > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com

From tshefi at redhat.com Mon Oct 19 13:12:24 2015
From: tshefi at redhat.com (Tzach Shefi)
Date: Mon, 19 Oct 2015 16:12:24 +0300
Subject: [Rdo-list] Liberty Packstack fails to install Cinder and Ceilometer missing python packages.
Message-ID: 

Hi,

Figured I'd try packstack-ing a Liberty on centos7.1.
Packstack failed to install Cinder due to a missing dependency: python-cheetah. After manually installing python-cheetah, Cinder installed.

Also missing python-werkzeug for Ceilometer:

yum -d 0 -e 0 -y install openstack-ceilometer-compute
Error: Package: 1:python-ceilometer-5.0.0.0-rc2.dev5.el7.centos.noarch (delorean)
Requires: python-werkzeug

Again, manually installing python-werkzeug fixed the issue.

Tzach
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From celik.esra at tubitak.gov.tr Mon Oct 19 13:47:08 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Mon, 19 Oct 2015 16:47:08 +0300 (EEST)
Subject: [Rdo-list] blogs.rdoproject.org
In-Reply-To: <5620FA36.10904@redhat.com>
References: <5620FA36.10904@redhat.com>
Message-ID: <1754054178.6381903.1445262428146.JavaMail.zimbra@tubitak.gov.tr>

Hi Rich

We have a design analysis report on OpenStack installation tools such as RDO, Fuel and Compass.. We finally decided to use RDO for our own needs. Maybe we could share our report on the RDO blog if we have an account on it...

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Orijinal Mesaj -----

> Kimden: "Rich Bowen"
> Kime: rdo-list at redhat.com
> Gönderilenler: 16 Ekim Cuma 2015 16:23:02
> Konu: [Rdo-list] blogs.rdoproject.org

> If you write about your work on RDO, or OpenStack in general, but don't have a convenient place to put it, or if you have your own blog and want to separate your OpenStack-related writing from your personal writing, http://blogs.rdoproject.org/ is the place for you.

> If you would like an account, please just let me know, and I'll make it happen.

> The site was previously the eNovance blog, so it already has 3 years of content and a lot of followers. Because of this, you won't have to work very hard to have an immediate audience for your posts.

> To get started, just send me email (rbowen at redhat.com) with your preferred username.
> We'd love to see a lot of posts around OpenStack Summit, so now is the perfect time to start writing.

> --Rich

> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://rdoproject.org/

> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From celik.esra at tubitak.gov.tr Mon Oct 19 13:51:51 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Mon, 19 Oct 2015 16:51:51 +0300 (EEST)
Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com> <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr> <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com>
Message-ID: <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr>

All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 nics.

The undercloud machine's ip config is as follows:

[stack at undercloud ~]$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP qlen 1000
link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
valid_lft forever preferred_lft forever
inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
valid_lft forever preferred_lft forever
3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
4: ovs-system: mtu 1500 qdisc noop state DOWN
link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
valid_lft forever preferred_lft forever
inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
valid_lft forever preferred_lft forever
6: br-int: mtu 1500 qdisc noop state DOWN
link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff

I am using em2 for pxe boot on the other machines.. So I configured instackenv.json to have em2's MAC address.
For overcloud nodes, em1 was configured to have a 10.1.34.x ip, but after image deploy I am not sure what happened to that nic.

Thanks

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Orijinal Mesaj -----

> Kimden: "Marius Cornea"
> Kime: "Esra Celik"
> Kk: "Sasha Chuzhoy" , rdo-list at redhat.com
> Gönderilenler: 19 Ekim Pazartesi 2015 15:36:58
> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

> Hi,

> I believe the nodes were stuck in introspection so they were not ready for deployment, thus the not enough hosts message. Can you describe the networking setup (how many nics the nodes have and to what networks they're connected)?
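> In the meantime you can check whether introspection actually finished per node. A minimal check, assuming python-ironic-inspector-client is installed on the undercloud (a sketch, not tested on your setup):

> # per-node introspection status; "finished" should be True and "error" empty
> openstack baremetal introspection status <node-uuid>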
> Thanks,
> Marius

> ----- Original Message -----
> > From: "Esra Celik"
> > To: "Sasha Chuzhoy"
> > Cc: "Marius Cornea" , rdo-list at redhat.com
> > Sent: Monday, October 19, 2015 12:34:32 PM
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Hi again,
> >
> > "nova list" was empty after the introspection stage, which was not completed successfully. So I could not ssh the nodes.. Is there another way to obtain the IP addresses?
> >
> > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> >
> > If I start deployment anyway I get 2 nodes in ERROR state
> >
> > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > Stack failed with status: resources.Controller: resources[0]: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
> >
> > [stack at undercloud ~]$ nova list
> > | ID | Name | Status | Task State | Power State | Networks |
> > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0 | ERROR | - | NOSTATE | |
> > | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR | - | NOSTATE | |
> >
> > Did the repositories update during the weekend? Should I better restart the overall Undercloud and Overcloud installation from the beginning?
> >
> > Thanks.
> >
> > Esra ÇELİK
> > Uzman Araştırmacı
> > Bilişim Teknolojileri Enstitüsü
> > TÜBİTAK BİLGEM
> > 41470 GEBZE - KOCAELİ
> > T +90 262 675 3140
> > F +90 262 646 3187
> > www.bilgem.tubitak.gov.tr
> > celik.esra at tubitak.gov.tr
> > ................................................................
> >
> > Sorumluluk Reddi
> >
> > ----- Orijinal Mesaj -----
> >
> > > Kimden: "Sasha Chuzhoy"
> > > Kime: "Esra Celik"
> > > Kk: "Marius Cornea" , rdo-list at redhat.com
> > > Gönderilenler: 16 Ekim Cuma 2015 18:44:49
> > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > > Hi Esra,
> > > if the undercloud nodes are UP - you can login with: ssh heat-admin@
> > > You can see the IP of the nodes with: "nova list".
> > > BTW,
> > > What do you see if you run "sudo systemctl|grep ironic" on the undercloud?
> >
> > > Best regards,
> > > Sasha Chuzhoy.
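> > > P.S. If "nova list" stays empty, a rough fallback (just a sketch, not tested in this setup) is to map the MACs that ironic has registered to whatever answered on the provisioning network:
> > >
> > > # MAC addresses of the registered nodes
> > > ironic port-list
> > > # look for a matching entry on the undercloud's br-ctlplane side
> > > arp -n | grep -i '08:9e:01'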
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Sasha Chuzhoy"
> > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > Sent: Friday, October 16, 2015 1:40:16 AM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi Sasha,
> > > >
> > > > I have 3 nodes, 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute
> > > >
> > > > This is my undercloud.conf file:
> > > >
> > > > image_path = .
> > > > local_ip = 192.0.2.1/24
> > > > local_interface = em2
> > > > masquerade_network = 192.0.2.0/24
> > > > dhcp_start = 192.0.2.5
> > > > dhcp_end = 192.0.2.24
> > > > network_cidr = 192.0.2.0/24
> > > > network_gateway = 192.0.2.1
> > > > inspection_interface = br-ctlplane
> > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > inspection_runbench = false
> > > > undercloud_debug = true
> > > > enable_tuskar = false
> > > > enable_tempest = false
> > > >
> > > > IP configuration for the Undercloud is as follows:
> > > >
> > > > [stack at undercloud ~]$ ip addr
> > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > inet 127.0.0.1/8 scope host lo
> > > > valid_lft forever preferred_lft forever
> > > > inet6 ::1/128 scope host
> > > > valid_lft forever preferred_lft forever
> > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > > link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > > inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > > valid_lft forever preferred_lft forever
> > > > inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > > valid_lft forever preferred_lft forever
> > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > > link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > > valid_lft forever preferred_lft forever
> > > > inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > > valid_lft forever preferred_lft forever
> > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > > link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > >
> > > > And I attached two screenshots showing the boot stage for overcloud nodes
> > > >
> > > > Is there a way to login the overcloud nodes to see their IP configuration?
> > > >
> > > > Thanks
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > >
> > > > ----- Orijinal Mesaj -----
> > > >
> > > > > Kimden: "Sasha Chuzhoy"
> > > > > Kime: "Esra Celik"
> > > > > Kk: "Marius Cornea" , rdo-list at redhat.com
> > > > > Gönderilenler: 15 Ekim Perşembe 2015 16:58:41
> > > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > > Just my 2 cents.
> > > > > Did you make sure that all the registered nodes are configured to boot off the right NIC first?
> > > > > Can you watch the console and see what happens on the problematic nodes upon boot?
> > > >
> > > > > Best regards,
> > > > > Sasha Chuzhoy.
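> > > > > P.S. The boot order can usually be checked and forced over IPMI as well. Standard ipmitool syntax (verify against your BMC; <password> as in your instackenv.json):
> > > > >
> > > > > # show the current boot flags
> > > > > ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis bootparam get 5
> > > > > # force PXE on the next boot
> > > > > ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis bootdev pxe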
> > > > > ----- Original Message -----
> > > > > > From: "Esra Celik"
> > > > > > To: "Marius Cornea"
> > > > > > Cc: rdo-list at redhat.com
> > > > > > Sent: Thursday, October 15, 2015 4:40:46 AM
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > Sorry for the late reply
> > > > > >
> > > > > > ironic node-show results are below. I have my nodes power on after introspection bulk start. And I get the following warning:
> > > > > > Introspection didn't finish for nodes 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > >
> > > > > > Doesn't seem to be the same issue with https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > > > >
> > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | available | False |
> > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | available | False |
> > > > > >
> > > > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > | Property | Value |
> > > > > > | target_power_state | None |
> > > > > > | extra | {} |
> > > > > > | last_error | None |
> > > > > > | updated_at | 2015-10-15T08:26:42+00:00 |
> > > > > > | maintenance_reason | None |
> > > > > > | provision_state | available |
> > > > > > | clean_step | {} |
> > > > > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed |
> > > > > > | console_enabled | False |
> > > > > > | target_provision_state | None |
> > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 |
> > > > > > | maintenance | False |
> > > > > > | inspection_started_at | None |
> > > > > > | inspection_finished_at | None |
> > > > > > | power_state | power on |
> > > > > > | driver | pxe_ipmitool |
> > > > > > | reservation | None |
> > > > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10', u'cpus': u'4', u'capabilities': u'boot_option:local'} |
> > > > > > | instance_uuid | None |
> > > > > > | name | None |
> > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18', u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f-e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > > > | created_at | 2015-10-15T07:49:08+00:00 |
> > > > > > | driver_internal_info | {u'clean_steps': None} |
> > > > > > | chassis_uuid | |
> > > > > > | instance_info | {} |
> > > > > >
> > > > > > [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > | Property | Value |
> > > > > > | target_power_state | None |
> > > > > > | extra | {} |
> > > > > > | last_error | None |
> > > > > > | updated_at | 2015-10-15T08:26:42+00:00 |
> > > > > > | maintenance_reason | None |
> > > > > > | provision_state | available |
> > > > > > | clean_step | {} |
> > > > > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 |
> > > > > > | console_enabled | False |
> > > > > > | target_provision_state | None |
> > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 |
> > > > > > | maintenance | False |
> > > > > > | inspection_started_at | None |
> > > > > > | inspection_finished_at | None |
> > > > > > | power_state | power on |
> > > > > > | driver | pxe_ipmitool |
> > > > > > | reservation | None |
> > > > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100', u'cpus': u'4', u'capabilities': u'boot_option:local'} |
> > > > > > | instance_uuid | None |
> > > > > > | name | None |
> > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19', u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f-e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > > > | created_at | 2015-10-15T07:49:08+00:00 |
> > > > > > | driver_internal_info | {u'clean_steps': None} |
> > > > > > | chassis_uuid | |
> > > > > > | instance_info | {} |
> > > > > > [stack at undercloud ~]$
> > > > > >
> > > > > > And below I added my history for the stack user.
> > > > > > I don't think I am doing anything other than what the https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty doc describes:
> > > > > >
> > > > > > 1 vi instackenv.json
> > > > > > 2 sudo yum -y install epel-release
> > > > > > 3 sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> > > > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> > > > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
> > > > > > 6 sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
> > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> > > > > > EOF"
> > > > > > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> > > > > > 8 sudo yum -y install yum-plugin-priorities
> > > > > > 9 sudo yum install -y python-tripleoclient
> > > > > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> > > > > > 11 vi undercloud.conf
> > > > > > 12 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > 13 openstack undercloud install
> > > > > > 14 source stackrc
> > > > > > 15 export NODE_DIST=centos7
> > > > > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
> > > > > > 17 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > 18 openstack overcloud image build --all
> > > > > > 19 ls
> > > > > > 20 openstack overcloud image upload
> > > > > > 21 openstack baremetal import --json instackenv.json
> > > > > > 22 openstack baremetal configure boot
> > > > > > 23 ironic node-list
> > > > > > 24 openstack baremetal introspection bulk start
> > > > > > 25 ironic node-list
> > > > > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > 28 history
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Esra ÇELİK
> > > > > > TÜBİTAK BİLGEM
> > > > > > www.bilgem.tubitak.gov.tr
> > > > > > celik.esra at tubitak.gov.tr
> > > > > >
> > > > > > Kimden: "Marius Cornea"
> > > > > > Kime: "Esra Celik"
> > > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > Gönderilenler: 14 Ekim Çarşamba 2015 19:40:07
> > > > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > Can you do ironic node-show for your ironic nodes and post the results?
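> > > > > > One way to grab them all at once (an untested one-liner, adjust as needed):
> > > > > >
> > > > > > for uuid in $(ironic node-list | grep -o '[0-9a-f-]\{36\}'); do ironic node-show $uuid; done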
> > > > > > Also check the following suggestion if you're experiencing the same issue: https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Esra Celik"
> > > > > > > To: "Marius Cornea"
> > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Well in the early stage of the introspection I can see the Client IP of nodes (screenshot attached). But then I see continuous ironic-python-agent errors (screenshot-2 attached). Errors repeat after time out.. And the nodes are not powered off.
> > > > > > >
> > > > > > > Seems like I am stuck in introspection stage..
> > > > > > >
> > > > > > > I can use the ipmitool command to successfully power on/off the nodes
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P power status
> > > > > > > Chassis Power is on
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > > > Chassis Power is on
> > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power off
> > > > > > > Chassis Power Control: Down/Off
> > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > > > Chassis Power is off
> > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power on
> > > > > > > Chassis Power Control: Up/On
> > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > > > Chassis Power is on
> > > > > > >
> > > > > > > Esra ÇELİK
> > > > > > > TÜBİTAK BİLGEM
> > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > celik.esra at tubitak.gov.tr
> > > > > > >
> > > > > > > ----- Orijinal Mesaj -----
> > > > > > >
> > > > > > > Kimden: "Marius Cornea"
> > > > > > > Kime: "Esra Celik"
> > > > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > Gönderilenler: 14 Ekim Çarşamba 2015 14:59:30
> > > > > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > > From: "Esra Celik"
> > > > > > > > To: "Marius Cornea"
> > > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > >
> > > > > > > > Well today I started with re-installing the OS and nothing seems wrong with undercloud installation, then;
> > > > > > > >
> > > > > > > > I see an error during image build
> > > > > > > >
> > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all
> > > > > > > > ... a lot of log ...
> > > > > > > > ++ cat /etc/dib_dracut_drivers
> > > > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net virtio_blk target_core_mod iscsi_target_mod target_core_iblock target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> > > > > > > > cat: write error: Broken pipe
> > > > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > > > > > > + chmod o+r /tmp/kernel
> > > > > > > > + trap EXIT
> > > > > > > > + target_tag=99-build-dracut-ramdisk
> > > > > > > > + date +%s.%N
> > > > > > > > + output '99-build-dracut-ramdisk completed'
> > > > > > > > ... a lot of log ...
> > > > > > >
> > > > > > > You can ignore that afaik, if you end up having all the required images it should be ok.
> > > > > > >
> > > > > > > > Then, during introspection stage I see ironic-python-agent errors on nodes (screenshot attached) and the following warnings
> > > > > > >
> > > > > > > That looks odd. Is it showing up in the early stage of the introspection? At some point it should receive an address by DHCP and the Network is unreachable error should disappear. Does the introspection complete and the nodes are turned off?
> > > > > > >
> > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from group "pxe" is deprecated. Use option "http_url" from group "deploy".
> > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from group "pxe" is deprecated. Use option "http_root" from group "deploy".
> > > > > > > >
> > > > > > > > Before deployment ironic node-list:
> > > > > > >
> > > > > > > This is odd too as I'm expecting the nodes to be powered off before running deployment.
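> > > > > > > If they stay powered on you can force them off by hand and re-check the state before retrying the deploy, e.g. (a sketch, not verified against your nodes):
> > > > > > >
> > > > > > > ironic node-set-power-state <node-uuid> off
> > > > > > > ironic node-list   # Power State should read "power off", Provisioning State "available"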
> > > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > > | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on | available | False |
> > > > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power on | available | False |
> > > > > > > >
> > > > > > > > During deployment I get following errors
> > > > > > > >
> > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while running command.
> > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command.
> > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI call failed: power status..
> > > > > > >
> > > > > > > This looks like an ipmi error, can you try to manually run commands using the ipmitool and see if you get any success? It's also worth filing a bug with details such as the ipmitool version, server model, drac firmware version.
> > > > > > >
> > > > > > > > Thanks a lot
> > > > > > > >
> > > > > > > > ----- Orijinal Mesaj -----
> > > > > > > >
> > > > > > > > Kimden: "Marius Cornea"
> > > > > > > > Kime: "Esra Celik"
> > > > > > > > Kk: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > Gönderilenler: 13 Ekim Salı 2015 21:16:14
> > > > > > > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > [The messages quoted below this point repeat, verbatim, the Oct 13 thread already shown earlier above and are snipped.]
-------------- next part --------------
An HTML attachment was scrubbed...
From hguemar at fedoraproject.org Mon Oct 19 15:00:03 2015
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 19 Oct 2015 15:00:03 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting
Message-ID: <20151019150003.6E05660A3FD9@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
RDO meeting on 2015-10-21 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about:
RDO IRC meeting (Agenda: https://etherpad.openstack.org/p/RDO-Packaging)
Every Wednesday on #rdo on Freenode IRC

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From sasha at redhat.com Mon Oct 19 15:03:03 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Mon, 19 Oct 2015 11:03:03 -0400 (EDT)
Subject: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com> <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr> <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com> <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>

Esra,
Is it possible to check the console of the nodes being introspected and/or deployed?
I wonder if the instackenv.json file is accurate.
Also, what's the output from 'nova flavor-list'?
Thanks.

Best regards,
Sasha Chuzhoy.
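A quick way to line that flavor up against what ironic has recorded is a side-by-side check. This assumes the docs' default flavor name "baremetal" (adjust to whatever name was used at flavor-create time):

    # Flavor requirements on one side...
    nova flavor-show baremetal | grep -E 'disk|ram|vcpus'
    # ...node properties from introspection on the other.
    for uuid in $(ironic node-list | awk '{print $2}' | grep -E '^[0-9a-f]{8}-'); do
        ironic node-show $uuid | grep -E 'cpus|memory_mb|local_gb|cpu_arch'
    done

If the flavor's ram/disk/vcpus exceed any node's memory_mb/local_gb/cpus, the scheduler will reject that node.
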
----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> Sent: Monday, October 19, 2015 9:51:51 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 nics.
>
> The undercloud machine's ip config is as follows:
>
> [stack at undercloud ~]$ ip addr
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>    inet 127.0.0.1/8 scope host lo
>       valid_lft forever preferred_lft forever
>    inet6 ::1/128 scope host
>       valid_lft forever preferred_lft forever
> 2: em1: mtu 1500 qdisc mq state UP qlen 1000
>    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
>    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
>       valid_lft forever preferred_lft forever
>    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
>       valid_lft forever preferred_lft forever
> 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
>    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> 4: ovs-system: mtu 1500 qdisc noop state DOWN
>    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
>    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
>    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
>       valid_lft forever preferred_lft forever
>    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
>       valid_lft forever preferred_lft forever
> 6: br-int: mtu 1500 qdisc noop state DOWN
>    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
>
> I am using em2 for pxe boot on the other machines, so I configured
> instackenv.json to have em2's MAC address.
> For the overcloud nodes, em1 was configured to have a 10.1.34.x IP, but after
> the image deploy I am not sure what happened to that nic.
>
> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> ----- Original Message -----
> > From: "Marius Cornea"
> > To: "Esra Celik"
> > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> > Sent: Monday, October 19, 2015 15:36:58
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Hi,
> >
> > I believe the nodes were stuck in introspection so they were not ready for
> > deployment, thus the "not enough hosts" message. Can you describe the
> > networking setup (how many nics the nodes have and to what networks
> > they're connected)?
> >
> > Thanks,
> > Marius
> >
> > ----- Original Message -----
> > > From: "Esra Celik"
> > > To: "Sasha Chuzhoy"
> > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > Sent: Monday, October 19, 2015 12:34:32 PM
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Hi again,
> > >
> > > "nova list" was empty after the introspection stage, which was not completed
> > > successfully. So I could not ssh to the nodes.. Is there another way to obtain
> > > the IP addresses?
> > >
> > > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> > >
> > > If I start deployment anyway I get 2 nodes in ERROR state
> > >
> > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > Stack failed with status: resources.Controller: resources[0]:
> > > ResourceInError: resources.Controller: Went to status ERROR due to "Message:
> > > No valid host was found. There are not enough hosts available., Code: 500"
> > >
> > > [stack at undercloud ~]$ nova list
> > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > | ID                                   | Name                    | Status | Task State | Power State | Networks |
> > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
> > > | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
> > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > >
> > > Did the repositories update during the weekend? Should I restart the
> > > overall undercloud and overcloud installation from the beginning?
> > >
> > > Thanks.
> > >
> > > Esra ÇELİK
> > > Uzman Araştırmacı
> > > Bilişim Teknolojileri Enstitüsü
> > > TÜBİTAK BİLGEM
> > > 41470 GEBZE - KOCAELİ
> > > T +90 262 675 3140
> > > F +90 262 646 3187
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > > ................................................................
> > > Disclaimer
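Since the nodes appear to be stuck in introspection, one way to confirm that before retrying anything (command syntax per the Liberty-era ironic-inspector client; <node-uuid> is a placeholder for a UUID from `ironic node-list`):

    # Ask ironic-inspector whether it still considers a node in progress.
    openstack baremetal introspection status <node-uuid>
    # And watch the inspector itself; the unit name is the one listed
    # in the systemctl output quoted above.
    sudo journalctl -fl -u openstack-ironic-inspector.service
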
> > > ----- Original Message -----
> > > > From: "Sasha Chuzhoy"
> > > > To: "Esra Celik"
> > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > Sent: Friday, October 16, 2015 18:44:49
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi Esra,
> > > > if the undercloud nodes are UP - you can login with: ssh heat-admin@
> > > > You can see the IP of the nodes with: "nova list".
> > > > BTW,
> > > > What do you see if you run "sudo systemctl|grep ironic" on the undercloud?
> > > >
> > > > Best regards,
> > > > Sasha Chuzhoy.
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Sasha Chuzhoy"
> > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > Sent: Friday, October 16, 2015 1:40:16 AM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Hi Sasha,
> > > > >
> > > > > I have 3 nodes: 1 undercloud, 1 overcloud-controller, 1 overcloud-compute
> > > > >
> > > > > This is my undercloud.conf file:
> > > > >
> > > > > image_path = .
> > > > > local_ip = 192.0.2.1/24
> > > > > local_interface = em2
> > > > > masquerade_network = 192.0.2.0/24
> > > > > dhcp_start = 192.0.2.5
> > > > > dhcp_end = 192.0.2.24
> > > > > network_cidr = 192.0.2.0/24
> > > > > network_gateway = 192.0.2.1
> > > > > inspection_interface = br-ctlplane
> > > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > > inspection_runbench = false
> > > > > undercloud_debug = true
> > > > > enable_tuskar = false
> > > > > enable_tempest = false
> > > > >
> > > > > IP configuration for the undercloud is as follows:
> > > > >
> > > > > [stack at undercloud ~]$ ip addr
> > > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > > >    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > >    inet 127.0.0.1/8 scope host lo
> > > > >       valid_lft forever preferred_lft forever
> > > > >    inet6 ::1/128 scope host
> > > > >       valid_lft forever preferred_lft forever
> > > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > > >    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > > >    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > > >       valid_lft forever preferred_lft forever
> > > > >    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > > >       valid_lft forever preferred_lft forever
> > > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > > >    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > >    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > > >       valid_lft forever preferred_lft forever
> > > > >    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > > >       valid_lft forever preferred_lft forever
> > > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > > >    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > > >
> > > > > And I attached two screenshots showing the boot stage for the overcloud nodes
> > > > >
> > > > > Is there a way to login to the overcloud nodes to see their IP configuration?
> > > > > Thanks
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Sasha Chuzhoy"
> > > > > > To: "Esra Celik"
> > > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > > Sent: Thursday, October 15, 2015 16:58:41
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > Just my 2 cents.
> > > > > > Did you make sure that all the registered nodes are configured to boot off
> > > > > > the right NIC first?
> > > > > > Can you watch the console and see what happens on the problematic nodes upon boot?
> > > > > >
> > > > > > Best regards,
> > > > > > Sasha Chuzhoy.
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Esra Celik"
> > > > > > > To: "Marius Cornea"
> > > > > > > Cc: rdo-list at redhat.com
> > > > > > > Sent: Thursday, October 15, 2015 4:40:46 AM
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Sorry for the late reply
> > > > > > >
> > > > > > > ironic node-show results are below. I have my nodes powered on after
> > > > > > > introspection bulk start, and I get the following warning:
> > > > > > > Introspection didn't finish for nodes
> > > > > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > >
> > > > > > > Doesn't seem to be the same issue with
> > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
> > > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | Property               | Value                                                                   |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | target_power_state     | None                                                                    |
> > > > > > > | extra                  | {}                                                                      |
> > > > > > > | last_error             | None                                                                    |
> > > > > > > | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance_reason     | None                                                                    |
> > > > > > > | provision_state        | available                                                               |
> > > > > > > | clean_step             | {}                                                                      |
> > > > > > > | uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed                                    |
> > > > > > > | console_enabled        | False                                                                   |
> > > > > > > | target_provision_state | None                                                                    |
> > > > > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance            | False                                                                   |
> > > > > > > | inspection_started_at  | None                                                                    |
> > > > > > > | inspection_finished_at | None                                                                    |
> > > > > > > | power_state            | power on                                                                |
> > > > > > > | driver                 | pxe_ipmitool                                                            |
> > > > > > > | reservation            | None                                                                    |
> > > > > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10',     |
> > > > > > > |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> > > > > > > | instance_uuid          | None                                                                    |
> > > > > > > | name                   | None                                                                    |
> > > > > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18',         |
> > > > > > > |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> > > > > > > |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > > > > | created_at             | 2015-10-15T07:49:08+00:00                                               |
> > > > > > > | driver_internal_info   | {u'clean_steps': None}                                                  |
> > > > > > > | chassis_uuid           |                                                                         |
> > > > > > > | instance_info          | {}                                                                      |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | Property               | Value                                                                   |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | target_power_state     | None                                                                    |
> > > > > > > | extra                  | {}                                                                      |
> > > > > > > | last_error             | None                                                                    |
> > > > > > > | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance_reason     | None                                                                    |
> > > > > > > | provision_state        | available                                                               |
> > > > > > > | clean_step             | {}                                                                      |
> > > > > > > | uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218                                    |
> > > > > > > | console_enabled        | False                                                                   |
> > > > > > > | target_provision_state | None                                                                    |
> > > > > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance            | False                                                                   |
> > > > > > > | inspection_started_at  | None                                                                    |
> > > > > > > | inspection_finished_at | None                                                                    |
> > > > > > > | power_state            | power on                                                                |
> > > > > > > | driver                 | pxe_ipmitool                                                            |
> > > > > > > | reservation            | None                                                                    |
> > > > > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100',    |
> > > > > > > |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> > > > > > > | instance_uuid          | None                                                                    |
> > > > > > > | name                   | None                                                                    |
> > > > > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19',         |
> > > > > > > |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> > > > > > > |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > > > > | created_at             | 2015-10-15T07:49:08+00:00                                               |
> > > > > > > | driver_internal_info   | {u'clean_steps': None}                                                  |
> > > > > > > | chassis_uuid           |                                                                         |
> > > > > > > | instance_info          | {}                                                                      |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > [stack at undercloud ~]$
> > > > > > >
> > > > > > > And below I added my
> > > > > > > history for the stack user. I don't think I am doing anything other than the
> > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty doc
> > > > > > >
> > > > > > > 1 vi instackenv.json
> > > > > > > 2 sudo yum -y install epel-release
> > > > > > > 3 sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> > > > > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> > > > > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
> > > > > > > 6 sudo /bin/bash -c "cat <<EOF >>/etc/yum.repos.d/delorean-current.repo
> > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> > > > > > > EOF"
> > > > > > > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> > > > > > > 8 sudo yum -y install yum-plugin-priorities
> > > > > > > 9 sudo yum install -y python-tripleoclient
> > > > > > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> > > > > > > 11 vi undercloud.conf
> > > > > > > 12 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 13 openstack undercloud install
> > > > > > > 14 source stackrc
> > > > > > > 15 export NODE_DIST=centos7
> > > > > > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
> > > > > > > 17 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 18 openstack overcloud image build --all
> > > > > > > 19 ls
> > > > > > > 20 openstack overcloud image upload
> > > > > > > 21 openstack baremetal import --json instackenv.json
> > > > > > > 22 openstack baremetal configure boot
> > > > > > > 23 ironic node-list
> > > > > > > 24 openstack baremetal introspection bulk start
> > > > > > > 25 ironic node-list
> > > > > > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > 28 history
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > Esra ÇELİK
> > > > > > > TÜBİTAK BİLGEM
> > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > celik.esra at tubitak.gov.tr
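One step that does not appear in the history above is flavor creation. The Liberty guide linked there uses a sequence along these lines (the sizes are the docs' example values; with the 10 GB disk node registered in this thread, --disk would have to be 10 or smaller for the scheduler to accept that node):

    openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 baremetal
    openstack flavor set --property "cpu_arch"="x86_64" \
        --property "capabilities:boot_option"="local" baremetal
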
> > > > > > > Also > > > > > > > check the following suggestion if you're experiencing the same > > > > > > > issue: > > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > From: "Esra Celik" > > > > > > > > To: "Marius Cornea" > > > > > > > > Cc: "Ignacio Bravo" , > > > > > > > > rdo-list at redhat.com > > > > > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > > > valid > > > > > > > > host > > > > > > > > was found" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Well in the early stage of the introspection I can see Client > > > > > > > > IP > > > > > > > > of > > > > > > > > nodes > > > > > > > > (screenshot attached). But then I see continuous > > > > > > > > ironic-python-agent > > > > > > > > errors > > > > > > > > (screenshot-2 attached). Errors repeat after time out.. And the > > > > > > > > nodes > > > > > > > > are > > > > > > > > not powered off. > > > > > > > > > > > > > > > > Seems like I am stuck in introspection stage.. > > > > > > > > > > > > > > > > I can use ipmitool command to successfully power on/off the > > > > > > > > nodes > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L > > > > > > > > ADMINISTRATOR > > > > > > > > -U > > > > > > > > root -R 3 -N 5 -P power status > > > > > > > > Chassis Power is on > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power status > > > > > > > > Chassis Power is on > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power off > > > > > > > > Chassis Power Control: Down/Off > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power status > > > > > > > > Chassis Power is off > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power on > > > > > > > > Chassis Power Control: Up/On > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power status > > > > > > > > Chassis Power is on > > > > > > > > > > > > > > > > > > > > > > > > Esra ?EL?K > > > > > > > > T?B?TAK B?LGEM > > > > > > > > www.bilgem.tubitak.gov.tr > > > > > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > > > > > > > > > Kimden: "Marius Cornea" > > > > > > > > Kime: "Esra Celik" > > > > > > > > Kk: "Ignacio Bravo" , > > > > > > > > rdo-list at redhat.com > > > > > > > > G?nderilenler: 14 Ekim ?ar?amba 2015 14:59:30 > > > > > > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > > > valid > > > > > > > > host > > > > > > > > was > > > > > > > > found" > > > > > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > > From: "Esra Celik" > > > > > > > > > To: "Marius Cornea" > > > > > > > > > Cc: "Ignacio Bravo" , > > > > > > > > > rdo-list at redhat.com > > > > > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with 
error "No > > > > > > > > > valid > > > > > > > > > host > > > > > > > > > was found" > > > > > > > > > > > > > > > > > > > > > > > > > > > Well today I started with re-installing the OS and nothing > > > > > > > > > seems > > > > > > > > > wrong > > > > > > > > > with > > > > > > > > > undercloud installation, then; > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I see an error during image build > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > > > > > > > > ... > > > > > > > > > a lot of log > > > > > > > > > ... > > > > > > > > > ++ cat /etc/dib_dracut_drivers > > > > > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail > > > > > > > > > head > > > > > > > > > awk > > > > > > > > > ifconfig > > > > > > > > > cut expr route ping nc wget tftp grep' --kernel-cmdline > > > > > > > > > 'rd.shell > > > > > > > > > rd.debug > > > > > > > > > rd.neednet=1 rd.driver.pre=ahci' --include > > > > > > > > > /var/tmp/image.YVhwuArQ/mnt/ > > > > > > > > > / > > > > > > > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio > > > > > > > > > virtio_net > > > > > > > > > virtio_blk target_core_mod iscsi_target_mod > > > > > > > > > target_core_iblock > > > > > > > > > target_core_file target_core_pscsi configfs' -o 'dash > > > > > > > > > plymouth' > > > > > > > > > /tmp/ramdisk > > > > > > > > > cat: write error: Broken pipe > > > > > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > > > > > > > > + chmod o+r /tmp/kernel > > > > > > > > > + trap EXIT > > > > > > > > > + target_tag=99-build-dracut-ramdisk > > > > > > > > > + date +%s.%N > > > > > > > > > + output '99-build-dracut-ramdisk completed' > > > > > > > > > ... > > > > > > > > > a lot of log > > > > > > > > > ... > > > > > > > > > > > > > > > > You can ignore that afaik, if you end up having all the > > > > > > > > required > > > > > > > > images > > > > > > > > it > > > > > > > > should be ok. > > > > > > > > > > > > > > > > > > > > > > > > > > Then, during introspection stage I see ironic-python-agent > > > > > > > > > errors > > > > > > > > > on > > > > > > > > > nodes > > > > > > > > > (screenshot attached) and the following warnings > > > > > > > > > > > > > > > > > > > > > > > > > That looks odd. Is it showing up in the early stage of the > > > > > > > > introspection? > > > > > > > > At > > > > > > > > some point it should receive an address by DHCP and the Network > > > > > > > > is > > > > > > > > unreachable error should disappear. Does the introspection > > > > > > > > complete > > > > > > > > and > > > > > > > > the > > > > > > > > nodes are turned off? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > > > > > > openstack-ironic-conductor.service > > > > > > > > > | > > > > > > > > > grep -i "warning\|error" > > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: > > > > > > > > > 2015-10-14 > > > > > > > > > 10:30:12.119 > > > > > > > > > 619 WARNING oslo_config.cfg > > > > > > > > > [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b > > > > > > > > > ] > > > > > > > > > Option "http_url" from group "pxe" is deprecated. Use option > > > > > > > > > "http_url" > > > > > > > > > from > > > > > > > > > group "deploy". 
> > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > > > > > > > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > > > > > > > > Option "http_root" from group "pxe" is deprecated. Use option "http_root"
> > > > > > > > > from group "deploy".
> > > > > > > > >
> > > > > > > > > Before deployment ironic node-list:
> > > > > > > >
> > > > > > > > This is odd too, as I'm expecting the nodes to be powered off before
> > > > > > > > running deployment.
> > > > > > > >
> > > > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > > > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > >
> > > > > > > > > During deployment I get the following errors
> > > > > > > > >
> > > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > > > > > > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting
> > > > > > > > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f
> > > > > > > > > /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553.
> > > > > > > > > Error: Unexpected error while running command.
> > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > > > > > > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for
> > > > > > > > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while
> > > > > > > > > running command.
> > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740
> > > > > > > > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not
> > > > > > > > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of
> > > > > > > > > 3. Error: IPMI call failed: power status..
> > > > > > > >
> > > > > > > > This looks like an ipmi error, can you try to manually run commands using the
> > > > > > > > ipmitool and see if you get any success? It's also worth filing a bug with
> > > > > > > > details such as the ipmitool version, server model, drac firmware version.
> > > > > > > >
> > > > > > > > > Thanks a lot
> > > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Marius Cornea"
> > > > > > > > > To: "Esra Celik"
> > > > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > > Sent: Tuesday, October 13, 2015 21:16:14
> > > > > > > > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: "Esra Celik"
> > > > > > > > > > To: "Marius Cornea"
> > > > > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > >
> > > > > > > > > > During deployment they are powering on and deploying the images. I see a lot
> > > > > > > > > > of connection error messages about ironic-python-agent but ignore them as
> > > > > > > > > > mentioned here
> > > > > > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > > > > > > > >
> > > > > > > > > That was referring to the introspection stage. From what I can tell you are
> > > > > > > > > experiencing issues during deployment as it fails to provision the nova
> > > > > > > > > instances, can you check if during that stage the nodes get powered on?
> > > > > > > > >
> > > > > > > > > Make sure that before overcloud deploy the ironic nodes are available for
> > > > > > > > > provisioning (ironic node-list and check the provisioning state column).
> > > > > > > > > Also check that you didn't miss any step in the docs in regards to kernel
> > > > > > > > > and ramdisk assignment, introspection, flavor creation (so it matches the
> > > > > > > > > nodes' resources)
> > > > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> > > > > > > > >
> > > > > > > > > > In the instackenv.json file I do not need to add the undercloud node, or do I?
> > > > > > > > >
> > > > > > > > > No, the nodes details should be enough.
> > > > > > > > >
> > > > > > > > > > And which log files should I watch during deployment?
> > > > > > > > >
> > > > > > > > > You can check the openstack-ironic-conductor logs (journalctl -fl -u
> > > > > > > > > openstack-ironic-conductor.service) and the logs in /var/log/nova.
> > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > > Esra
> > > > > > > > > >
> > > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: Marius Cornea
> > > > > > > > > > To: Esra Celik
> > > > > > > > > > Cc: Ignacio Bravo , rdo-list at redhat.com
> > > > > > > > > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > >
> > > > > > > > > > ----- Original Message -----
> > > > > > > > > > > From: "Esra Celik"
> > > > > > > > > > > To: "Ignacio Bravo"
> > > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > Actually I re-installed the OS for the undercloud before deploying. However
> > > > > > > > > > > I did not re-install the OS on the compute and controller nodes.. I will
> > > > > > > > > > > reinstall a basic OS for them too, and retry..
> > > > > > > > > >
> > > > > > > > > > You don't need to reinstall the OS on the controller and compute, they will
> > > > > > > > > > get the image served by the undercloud. I'd recommend that during deployment
> > > > > > > > > > you watch the servers console and make sure they get powered on, pxe boot,
> > > > > > > > > > and actually get the image deployed.
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > > > Esra ÇELİK
> > > > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > > > >
> > > > > > > > > > > ----- Original Message -----
> > > > > > > > > > > From: "Ignacio Bravo"
> > > > > > > > > > > To: "Esra Celik"
> > > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > > Sent: Tuesday, October 13, 2015 16:36:06
> > > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > Esra,
> > > > > > > > > > >
> > > > > > > > > > > I encountered the same problem after deleting the stack and re-deploying.
> > > > > > > > > > >
> > > > > > > > > > > It turns out that 'heat stack-delete overcloud' does remove the nodes from
> > > > > > > > > > > 'nova list' and one would assume that the baremetal servers are now ready to
> > > > > > > > > > > be used for the next stack, but when redeploying, I get the same message of
> > > > > > > > > > > not enough hosts available.
> > > > > > > > > > >
> > > > > > > > > > > You can look into the nova logs and it mentions something about 'node xxx is
> > > > > > > > > > > already associated with UUID yyyy' and 'I tried 3 times and I'm giving up'.
> > > > > > > > > > > The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.
> > > > > > > > > > >
> > > > > > > > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > > > > > > > >
> > > > > > > > > > > IB
> > > > > > > > > > >
> > > > > > > > > > > __
> > > > > > > > > > > Ignacio Bravo
> > > > > > > > > > > LTG Federal, Inc
> > > > > > > > > > > www.ltgfederal.com
> > > > > > > > > > > Office: (703) 951-7760
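For the stale-association failure Ignacio describes, a workaround from that era was to clear the leftover instance UUID rather than reinstall the OS. This is a sketch, not a guaranteed fix: ironic may refuse the update depending on the node's provision state, and <node-uuid> is a placeholder for the affected node:

    ironic node-list    # a leftover "Instance UUID" here points at the deleted stack
    ironic node-update <node-uuid> remove instance_uuid
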
> > > > > > > > > > >
> > > > > > > > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > Hi all,
> > > > > > > > > > > >
> > > > > > > > > > > > OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > > >
> > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > > > > > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > > > > > > > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > > > > > > > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR
> > > > > > > > > > > > due to "Message: No valid host was found. There are not enough hosts
> > > > > > > > > > > > available., Code: 500"
> > > > > > > > > > > > Heat Stack create failed.
> > > > > > > > > > > >
> > > > > > > > > > > > Here are some logs:
> > > > > > > > > > > >
> > > > > > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > > > > > > > > >
> > > > > > > > > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > > > > > > > > | resource_name | physical_resource_id                 | resource_type            | resource_status    | updated_time        | stack_name                                       |
> > > > > > > > > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > > > > > > > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > > > > > > > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > > > > > > > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller  | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> > > > > > > > > > > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute     | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> > > > > > > > > > > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server         | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > > > > > > > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server         | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
> > > > > > > > > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > > > > > > > >
> > > > > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > > > > > > > > > +------------------------+---------------------------------------------------------------------------+
> > > > > > > > > > > > | Property               | Value                                                                     |
> > > > > > > > > > > > +------------------------+---------------------------------------------------------------------------+
> > > > > > > > > > > > | attributes             | { "attributes": null, "refs": null }                                      |
> > > > > > > > > > > > | creation_time          | 2015-10-13T10:20:36                                                       |
> > > > > > > > > > > > | description            |                                                                           |
> > > > > > > > > > > > | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > > > > > > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > > > > > > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > > > > > > > > > | logical_resource_id    | Compute                                                                   |
> > > > > > > > > > > > | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393                                      |
> > > > > > > > > > > > | required_by            | ComputeAllNodesDeployment                                                 |
> > > > > > > > > > > > |                        | ComputeNodesPostDeployment                                                |
> > > > > > > > > > > > |                        | ComputeCephDeployment                                                     |
> > > > > > > > > > > > |                        | ComputeAllNodesValidationDeployment                                       |
> > > > > > > > > > > > |                        | AllNodesExtraConfig                                                       |
> > > > > > > > > > > > |                        | allNodesConfig                                                            |
> > > > > > > > > > > > | resource_name          | Compute                                                                   |
> > > > > > > > > > > > | resource_status        | CREATE_FAILED                                                             |
> > > > > > > > > > > > | resource_status_reason | resources.Compute: ResourceInError:                                       |
> > > > > > > > > > > > |                        | resources[0].resources.NovaCompute: Went to status ERROR due to "Message: |
> > > > > > > > > > > > |                        | No valid host was found. There are not enough hosts available., Code: 500" |
> > > > > > > > > > > > | resource_type          | OS::Heat::ResourceGroup                                                   |
> > > > > > > > > > > > | updated_time           | 2015-10-13T10:20:36                                                       |
> > > > > > > > > > > > +------------------------+---------------------------------------------------------------------------+
> > > > > > > > > > > >
> > > > > > > > > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed.
> > > > > > > > > > > >
> > > > > > > > > > > > {
> > > > > > > > > > > >   "nodes": [
> > > > > > > > > > > >     {
> > > > > > > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > > > > > > >       "mac": [ "08:9E:01:58:CC:A1" ],
> > > > > > > > > > > >       "cpu": "4",
> > > > > > > > > > > >       "memory": "8192",
> > > > > > > > > > > >       "disk": "10",
> > > > > > > > > > > >       "arch": "x86_64",
> > > > > > > > > > > >       "pm_user": "root",
> > > > > > > > > > > >       "pm_password": "calvin",
> > > > > > > > > > > >       "pm_addr": "192.168.0.18"
> > > > > > > > > > > >     },
> > > > > > > > > > > >     {
> > > > > > > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > > > > > > >       "mac": [ "08:9E:01:58:D0:3D" ],
> > > > > > > > > > > >       "cpu": "4",
> > > > > > > > > > > >       "memory": "8192",
> > > > > > > > > > > >       "disk": "100",
> > > > > > > > > > > >       "arch": "x86_64",
> > > > > > > > > > > >       "pm_user": "root",
> > > > > > > > > > > >       "pm_password": "calvin",
> > > > > > > > > > > >       "pm_addr": "192.168.0.19"
> > > > > > > > > > > >     }
> > > > > > > > > > > >   ]
> > > > > > > > > > > > }
> > > > > > > > > > > >
> > > > > > > > > > > > Any ideas? Thanks in advance
> > > > > > > > > > > >
> > > > > > > > > > > > Esra ÇELİK
> > > > > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > > > > >
> > > > > > > > > > > > _______________________________________________
> > > > > > > > > > > > Rdo-list mailing list
> > > > > > > > > > > > Rdo-list at redhat.com
> > > > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > > > > > > > >
> > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From apevec at gmail.com Mon Oct 19 15:29:23 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 19 Oct 2015 17:29:23 +0200
Subject: [Rdo-list] Liberty Packstack fails to install Cinder and Ceilometer missing python packages.
In-Reply-To:
References:
Message-ID:

> Figured I'd try packstack-ing a Liberty on centos7.1

How did you install centos 7.1?

> Packstack failed to install Cinder due to missing: python-cheetah
> After manually installing Python-cheetah, Cinder installed.
> Also missing python-werkzeug for Ceilometer.

To be clear, both deps are correctly expressed as Requires: in cinder and ceilometer .specs.
Those two packages are in the extras repo, which is enabled out of the box in the default CentOS install. I guess the kickstart you're using disables it?

Cheers,
Alan

From celik.esra at tubitak.gov.tr Mon Oct 19 15:39:46 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Mon, 19 Oct 2015 18:39:46 +0300 (EEST)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1505152794.58180750.1444917521867.JavaMail.zimbra@redhat.com> <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr> <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com> <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr> <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com> <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr> <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>
Message-ID: <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr>

Hi Sasha

This is my instackenv.json. The MAC addresses are the em2 interfaces' MAC addresses of the nodes.

{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "mac": [ "08:9E:01:58:CC:A1" ],
      "cpu": "4",
      "memory": "8192",
      "disk": "10",
      "arch": "x86_64",
      "pm_user": "root",
      "pm_password": "",
      "pm_addr": "192.168.0.18"
    },
    {
      "pm_type": "pxe_ipmitool",
      "mac": [ "08:9E:01:58:D0:3D" ],
      "cpu": "4",
      "memory": "8192",
      "disk": "100",
      "arch": "x86_64",
      "pm_user": "root",
      "pm_password": "",
      "pm_addr": "192.168.0.19"
    }
  ]
}

This is my undercloud.conf file:

image_path = .
local_ip = 192.0.2.1/24
local_interface = em2
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
network_cidr = 192.0.2.0/24
network_gateway = 192.0.2.1
inspection_interface = br-ctlplane
inspection_iprange = 192.0.2.100,192.0.2.120
inspection_runbench = false
undercloud_debug = true
enable_tuskar = false
enable_tempest = false

I have previously sent the screenshots of the consoles during the introspection stage. Now I am attaching them again. I cannot login to the consoles because the introspection stage did not complete successfully and I don't know the IP addresses. (nova list is empty.) (I don't know if I can login with the IP addresses that I had previously set myself. I am not able to reach the nodes now, from home.)

I ran the flavor-create command after the introspection stage. But introspection was not completed successfully; I just ran the deploy command to see if nova list fills during deployment.

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr
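Before importing a file like this, each entry can be sanity-checked from the undercloud: the mac must be the NIC that PXE-boots on the provisioning network, and each pm_addr must answer IPMI with the given credentials. For example, reusing the addresses above (<password> is a placeholder, since the real one is scrubbed):

    for ip in 192.168.0.18 192.168.0.19; do
        # Confirm the BMC is reachable and the credentials work.
        ipmitool -I lanplus -H $ip -U root -P <password> chassis power status
    done
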
----- Sasha Chuzhoy wrote: -----
> Esra,
> Is it possible to check the console of the nodes being introspected and/or deployed?
> I wonder if the instackenv.json file is accurate.
> Also, what's the output from 'nova flavor-list'?
> Thanks.
>
> Best regards,
> Sasha Chuzhoy.
>
> ----- Original Message -----
> > From: "Esra Celik"
> > To: "Marius Cornea"
> > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> > Sent: Monday, October 19, 2015 9:51:51 AM
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 nics.
> >
> > The undercloud machine's ip config is as follows:
> >
> > [stack at undercloud ~]$ ip addr
> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> >    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >    inet 127.0.0.1/8 scope host lo
> >       valid_lft forever preferred_lft forever
> >    inet6 ::1/128 scope host
> >       valid_lft forever preferred_lft forever
> > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> >    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> >    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> >       valid_lft forever preferred_lft forever
> >    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> >       valid_lft forever preferred_lft forever
> > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> >    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> >    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> >       valid_lft forever preferred_lft forever
> >    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> >       valid_lft forever preferred_lft forever
> > 6: br-int: mtu 1500 qdisc noop state DOWN
> >    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> >
> > I am using em2 for pxe boot on the other machines, so I configured
> > instackenv.json to have em2's MAC address.
> > For the overcloud nodes, em1 was configured to have a 10.1.34.x IP, but after
> > the image deploy I am not sure what happened to that nic.
> >
> > Thanks
> >
> > Esra ÇELİK
> > TÜBİTAK BİLGEM
> > www.bilgem.tubitak.gov.tr
> > celik.esra at tubitak.gov.tr
> >
> > ----- Original Message -----
> > > From: "Marius Cornea"
> > > To: "Esra Celik"
> > > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> > > Sent: Monday, October 19, 2015 15:36:58
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Hi,
> > >
> > > I believe the nodes were stuck in introspection so they were not ready for
> > > deployment, thus the "not enough hosts" message. Can you describe the
> > > networking setup (how many nics the nodes have and to what networks
> > > they're connected)?
> > >
> > > Thanks,
> > > Marius
> > >
> > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Sasha Chuzhoy"
> > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > Sent: Monday, October 19, 2015 12:34:32 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi again,
> > > >
> > > > "nova list" was empty after the introspection stage, which was not completed
> > > > successfully. So I could not ssh to the nodes.. Is there another way to obtain
> > > > the IP addresses?
> > > > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > > > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > > > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > > > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> > > >
> > > > If I start deployment anyway I get 2 nodes in ERROR state
> > > >
> > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > Stack failed with status: resources.Controller: resources[0]:
> > > > ResourceInError: resources.Controller: Went to status ERROR due to "Message:
> > > > No valid host was found. There are not enough hosts available., Code: 500"
> > > >
> > > > [stack at undercloud ~]$ nova list
> > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > > | ID                                   | Name                    | Status | Task State | Power State | Networks |
> > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
> > > > | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
> > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > >
> > > > Did the repositories update during the weekend? Should I restart the
> > > > overall undercloud and overcloud installation from the beginning?
> > > >
> > > > Thanks.
> > > >
> > > > Esra ÇELİK
> > > > Uzman Araştırmacı
> > > > Bilişim Teknolojileri Enstitüsü
> > > > TÜBİTAK BİLGEM
> > > > 41470 GEBZE - KOCAELİ
> > > > T +90 262 675 3140
> > > > F +90 262 646 3187
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > > ................................................................
> > > > Disclaimer
> > > >
> > > > ----- Original Message -----
> > > > > From: "Sasha Chuzhoy"
> > > > > To: "Esra Celik"
> > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > Sent: Friday, October 16, 2015 18:44:49
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Hi Esra,
> > > > > if the undercloud nodes are UP - you can login with: ssh heat-admin@
> > > > > You can see the IP of the nodes with: "nova list".
> > > > > BTW,
> > > > > What do you see if you run "sudo systemctl|grep ironic" on the undercloud?
> > > > >
> > > > > Best regards,
> > > > > Sasha Chuzhoy.
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Esra Celik"
> > > > > > To: "Sasha Chuzhoy"
> > > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > > Sent: Friday, October 16, 2015 1:40:16 AM
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > Hi Sasha,
> > > > > >
> > > > > > I have 3 nodes: 1 undercloud, 1 overcloud-controller, 1 overcloud-compute
> > > > > >
> > > > > > This is my undercloud.conf file:
> > > > > >
> > > > > > image_path = .
> > > > > local_ip = 192.0.2.1/24
> > > > > local_interface = em2
> > > > > masquerade_network = 192.0.2.0/24
> > > > > dhcp_start = 192.0.2.5
> > > > > dhcp_end = 192.0.2.24
> > > > > network_cidr = 192.0.2.0/24
> > > > > network_gateway = 192.0.2.1
> > > > > inspection_interface = br-ctlplane
> > > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > > inspection_runbench = false
> > > > > undercloud_debug = true
> > > > > enable_tuskar = false
> > > > > enable_tempest = false
> > > > >
> > > > > The IP configuration for the undercloud is as follows:
> > > > >
> > > > > [stack at undercloud ~]$ ip addr
> > > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > inet 127.0.0.1/8 scope host lo
> > > > > valid_lft forever preferred_lft forever
> > > > > inet6 ::1/128 scope host
> > > > > valid_lft forever preferred_lft forever
> > > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > > > link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > > > inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > > > valid_lft forever preferred_lft forever
> > > > > inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > > > valid_lft forever preferred_lft forever
> > > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > > > link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > > > valid_lft forever preferred_lft forever
> > > > > inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > > > valid_lft forever preferred_lft forever
> > > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > > > link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > > >
> > > > > And I attached two screenshots showing the boot stage of the
> > > > > overcloud nodes.
> > > > >
> > > > > Is there a way to log in to the overcloud nodes to see their IP
> > > > > configuration?
> > > > >
> > > > > Thanks
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > ----- Original Message -----
> > > > >
> > > > > > From: "Sasha Chuzhoy"
> > > > > > To: "Esra Celik"
> > > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > > Sent: Thursday, October 15, 2015 16:58:41
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > > host was found"
> > > > >
> > > > > > Just my 2 cents.
> > > > > > Did you make sure that all the registered nodes are configured to
> > > > > > boot off the right NIC first?
> > > > > > Can you watch the console and see what happens on the problematic
> > > > > > nodes upon boot?
> > > > >
> > > > > > Best regards,
> > > > > > Sasha Chuzhoy.
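The boot-order question can also be answered out-of-band with IPMI; a sketch using one of the BMC addresses that appears later in this thread (repeat per node, and substitute the real password):

ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis bootparam get 5   # show the current boot flags
ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis bootdev pxe       # request PXE for the next boot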
> > > > > > > > > > > ----- Original Message ----- > > > > > > > From: "Esra Celik" > > > > > > > To: "Marius Cornea" > > > > > > > Cc: rdo-list at redhat.com > > > > > > > Sent: Thursday, October 15, 2015 4:40:46 AM > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > > valid > > > > > > > host > > > > > > > was found" > > > > > > > > > > > > > > > > > > > > > Sorry for the late reply > > > > > > > > > > > > > > ironic node-show results are below. I have my nodes power on > > > > > > > after > > > > > > > introspection bulk start. And I get the following warning > > > > > > > Introspection didn't finish for nodes > > > > > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > > > > > > > > Doesn't seem to be the same issue with > > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > | UUID | Name | Instance UUID | Power State | Provisioning State > > > > > > > | | > > > > > > > | Maintenance | > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | > > > > > > > | available > > > > > > > | | > > > > > > > | False | > > > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | > > > > > > > | available > > > > > > > | | > > > > > > > | False | > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-show > > > > > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > | Property | Value | > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > | target_power_state | None | > > > > > > > | extra | {} | > > > > > > > | last_error | None | > > > > > > > | updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None | > > > > > > > | provision_state | available | > > > > > > > | clean_step | {} | > > > > > > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > > > > > > > | console_enabled | False | > > > > > > > | target_provision_state | None | > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance | False | > > > > > > > | inspection_started_at | None | > > > > > > > | inspection_finished_at | None | > > > > > > > | power_state | power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | reservation | None | > > > > > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | u'10', | > > > > > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > | u'192.168.0.18', | > > > > > > > | | u'ipmi_username': u'root', u'deploy_kernel': > > > > > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > | | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > 
> > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | created_at | 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > | instance_info | {} | > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-show > > > > > > > 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > | Property | Value | > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > | target_power_state | None | > > > > > > > | extra | {} | > > > > > > > | last_error | None | > > > > > > > | updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None | > > > > > > > | provision_state | available | > > > > > > > | clean_step | {} | > > > > > > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > > > > > > > | console_enabled | False | > > > > > > > | target_provision_state | None | > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance | False | > > > > > > > | inspection_started_at | None | > > > > > > > | inspection_finished_at | None | > > > > > > > | power_state | power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | reservation | None | > > > > > > > | properties | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | u'100', | > > > > > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > | u'192.168.0.19', | > > > > > > > | | u'ipmi_username': u'root', u'deploy_kernel': > > > > > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > | | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | created_at | 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > | instance_info | {} | > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > And below I added my history for the stack user. 
I don't think I
> > > > > > > am doing anything other than what the
> > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty
> > > > > > > doc describes:
> > > > > > >
> > > > > > > 1 vi instackenv.json
> > > > > > > 2 sudo yum -y install epel-release
> > > > > > > 3 sudo curl -o /etc/yum.repos.d/delorean.repo
> > > > > > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> > > > > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo
> > > > > > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> > > > > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/'
> > > > > > > /etc/yum.repos.d/delorean-current.repo
> > > > > > > 6 sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
> > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> > > > > > > EOF"
> > > > > > > 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo
> > > > > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> > > > > > > 8 sudo yum -y install yum-plugin-priorities
> > > > > > > 9 sudo yum install -y python-tripleoclient
> > > > > > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample
> > > > > > > ~/undercloud.conf
> > > > > > > 11 vi undercloud.conf
> > > > > > > 12 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 13 openstack undercloud install
> > > > > > > 14 source stackrc
> > > > > > > 15 export NODE_DIST=centos7
> > > > > > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo
> > > > > > > /etc/yum.repos.d/delorean-deps.repo"
> > > > > > > 17 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 18 openstack overcloud image build --all
> > > > > > > 19 ls
> > > > > > > 20 openstack overcloud image upload
> > > > > > > 21 openstack baremetal import --json instackenv.json
> > > > > > > 22 openstack baremetal configure boot
> > > > > > > 23 ironic node-list
> > > > > > > 24 openstack baremetal introspection bulk start
> > > > > > > 25 ironic node-list
> > > > > > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > 28 history
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > Esra ÇELİK
> > > > > > > TÜBİTAK BİLGEM
> > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > celik.esra at tubitak.gov.tr
> > > > > > >
> > > > > > > From: "Marius Cornea"
> > > > > > > To: "Esra Celik"
> > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > Sent: Wednesday, October 14, 2015 19:40:07
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > > > > host was found"
> > > > > > >
> > > > > > > Can you do ironic node-show for your ironic nodes and post the
> > > > > > > results?
> > > > > > > Also > > > > > > > check the following suggestion if you're experiencing the same > > > > > > > issue: > > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > From: "Esra Celik" > > > > > > > > To: "Marius Cornea" > > > > > > > > Cc: "Ignacio Bravo" , > > > > > > > > rdo-list at redhat.com > > > > > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > > > valid > > > > > > > > host > > > > > > > > was found" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Well in the early stage of the introspection I can see Client > > > > > > > > IP > > > > > > > > of > > > > > > > > nodes > > > > > > > > (screenshot attached). But then I see continuous > > > > > > > > ironic-python-agent > > > > > > > > errors > > > > > > > > (screenshot-2 attached). Errors repeat after time out.. And the > > > > > > > > nodes > > > > > > > > are > > > > > > > > not powered off. > > > > > > > > > > > > > > > > Seems like I am stuck in introspection stage.. > > > > > > > > > > > > > > > > I can use ipmitool command to successfully power on/off the > > > > > > > > nodes > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L > > > > > > > > ADMINISTRATOR > > > > > > > > -U > > > > > > > > root -R 3 -N 5 -P power status > > > > > > > > Chassis Power is on > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power status > > > > > > > > Chassis Power is on > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power off > > > > > > > > Chassis Power Control: Down/Off > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power status > > > > > > > > Chassis Power is off > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power on > > > > > > > > Chassis Power Control: Up/On > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U > > > > > > > > root > > > > > > > > -P > > > > > > > > chassis power status > > > > > > > > Chassis Power is on > > > > > > > > > > > > > > > > > > > > > > > > Esra ÇEL?K > > > > > > > > TÜB?TAK B?LGEM > > > > > > > > www.bilgem.tubitak.gov.tr > > > > > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > > > > > > > > > Kimden: "Marius Cornea" > > > > > > > > Kime: "Esra Celik" > > > > > > > > Kk: "Ignacio Bravo" , > > > > > > > > rdo-list at redhat.com > > > > > > > > Gönderilenler: 14 Ekim Çar?amba 2015 14:59:30 > > > > > > > > Konu: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > > > valid > > > > > > > > host > > > > > > > > was > > > > > > > > found" > > > > > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > > From: "Esra Celik" > > > > > > > > > To: "Marius Cornea" > > > > > > > > > Cc: "Ignacio Bravo" , > > > > > > > > > rdo-list at redhat.com > > > > > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with 
error "No > > > > > > > > > valid > > > > > > > > > host > > > > > > > > > was found" > > > > > > > > > > > > > > > > > > > > > > > > > > > Well today I started with re-installing the OS and nothing > > > > > > > > > seems > > > > > > > > > wrong > > > > > > > > > with > > > > > > > > > undercloud installation, then; > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I see an error during image build > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all > > > > > > > > > ... > > > > > > > > > a lot of log > > > > > > > > > ... > > > > > > > > > ++ cat /etc/dib_dracut_drivers > > > > > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail > > > > > > > > > head > > > > > > > > > awk > > > > > > > > > ifconfig > > > > > > > > > cut expr route ping nc wget tftp grep' --kernel-cmdline > > > > > > > > > 'rd.shell > > > > > > > > > rd.debug > > > > > > > > > rd.neednet=1 rd.driver.pre=ahci' --include > > > > > > > > > /var/tmp/image.YVhwuArQ/mnt/ > > > > > > > > > / > > > > > > > > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio > > > > > > > > > virtio_net > > > > > > > > > virtio_blk target_core_mod iscsi_target_mod > > > > > > > > > target_core_iblock > > > > > > > > > target_core_file target_core_pscsi configfs' -o 'dash > > > > > > > > > plymouth' > > > > > > > > > /tmp/ramdisk > > > > > > > > > cat: write error: Broken pipe > > > > > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel > > > > > > > > > + chmod o+r /tmp/kernel > > > > > > > > > + trap EXIT > > > > > > > > > + target_tag=99-build-dracut-ramdisk > > > > > > > > > + date +%s.%N > > > > > > > > > + output '99-build-dracut-ramdisk completed' > > > > > > > > > ... > > > > > > > > > a lot of log > > > > > > > > > ... > > > > > > > > > > > > > > > > You can ignore that afaik, if you end up having all the > > > > > > > > required > > > > > > > > images > > > > > > > > it > > > > > > > > should be ok. > > > > > > > > > > > > > > > > > > > > > > > > > > Then, during introspection stage I see ironic-python-agent > > > > > > > > > errors > > > > > > > > > on > > > > > > > > > nodes > > > > > > > > > (screenshot attached) and the following warnings > > > > > > > > > > > > > > > > > > > > > > > > > That looks odd. Is it showing up in the early stage of the > > > > > > > > introspection? > > > > > > > > At > > > > > > > > some point it should receive an address by DHCP and the Network > > > > > > > > is > > > > > > > > unreachable error should disappear. Does the introspection > > > > > > > > complete > > > > > > > > and > > > > > > > > the > > > > > > > > nodes are turned off? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > > > > > > openstack-ironic-conductor.service > > > > > > > > > | > > > > > > > > > grep -i "warning\|error" > > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: > > > > > > > > > 2015-10-14 > > > > > > > > > 10:30:12.119 > > > > > > > > > 619 WARNING oslo_config.cfg > > > > > > > > > [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b > > > > > > > > > ] > > > > > > > > > Option "http_url" from group "pxe" is deprecated. Use option > > > > > > > > > "http_url" > > > > > > > > > from > > > > > > > > > group "deploy". 
> > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: > > > > > > > > > 2015-10-14 > > > > > > > > > 10:30:12.119 > > > > > > > > > 619 WARNING oslo_config.cfg > > > > > > > > > [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b > > > > > > > > > ] > > > > > > > > > Option "http_root" from group "pxe" is deprecated. Use option > > > > > > > > > "http_root" > > > > > > > > > from group "deploy". > > > > > > > > > > > > > > > > > > > > > > > > > > > Before deployment ironic node-list: > > > > > > > > > > > > > > > > > > > > > > > > > This is odd too as I'm expecting the nodes to be powered off > > > > > > > > before > > > > > > > > running > > > > > > > > deployment. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > | UUID | Name | Instance UUID | Power State | Provisioning > > > > > > > > > | State > > > > > > > > > | | > > > > > > > > > | Maintenance | > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power > > > > > > > > > | on > > > > > > > > > | | > > > > > > > > > | available > > > > > > > > > | | > > > > > > > > > | False | > > > > > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power > > > > > > > > > | on > > > > > > > > > | | > > > > > > > > > | available > > > > > > > > > | | > > > > > > > > > | False | > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > > > > > > > > > During deployment I get following errors > > > > > > > > > > > > > > > > > > [root at localhost ~]# journalctl -fl -u > > > > > > > > > openstack-ironic-conductor.service > > > > > > > > > | > > > > > > > > > grep -i "warning\|error" > > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: > > > > > > > > > 2015-10-14 > > > > > > > > > 11:29:01.739 > > > > > > > > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error > > > > > > > > > while > > > > > > > > > attempting > > > > > > > > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root > > > > > > > > > -R > > > > > > > > > 3 > > > > > > > > > -N > > > > > > > > > 5 > > > > > > > > > -f > > > > > > > > > /tmp/tmpSCKHIv power status"for node > > > > > > > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553. > > > > > > > > > Error: Unexpected error while running command. > > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: > > > > > > > > > 2015-10-14 > > > > > > > > > 11:29:01.739 > > > > > > > > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power > > > > > > > > > status > > > > > > > > > failed > > > > > > > > > for > > > > > > > > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: > > > > > > > > > Unexpected > > > > > > > > > error > > > > > > > > > while > > > > > > > > > running command. 
> > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: > > > > > > > > > 2015-10-14 > > > > > > > > > 11:29:01.740 > > > > > > > > > 619 WARNING ironic.conductor.manager [-] During > > > > > > > > > sync_power_state, > > > > > > > > > could > > > > > > > > > not > > > > > > > > > get power state for node > > > > > > > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553, > > > > > > > > > attempt > > > > > > > > > 1 > > > > > > > > > of > > > > > > > > > 3. Error: IPMI call failed: power status.. > > > > > > > > > > > > > > > > > > > > > > > > > This looks like an ipmi error, can you try to manually run > > > > > > > > commands > > > > > > > > using > > > > > > > > the > > > > > > > > ipmitool and see if you get any success? It's also worth filing > > > > > > > > a > > > > > > > > bug > > > > > > > > with > > > > > > > > details such as the ipmitool version, server model, drac > > > > > > > > firmware > > > > > > > > version. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Thanks a lot > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > > > > > > > > > > > Kimden: "Marius Cornea" > > > > > > > > > Kime: "Esra Celik" > > > > > > > > > Kk: "Ignacio Bravo" , > > > > > > > > > rdo-list at redhat.com > > > > > > > > > Gönderilenler: 13 Ekim Sal? 2015 21:16:14 > > > > > > > > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with > > > > > > > > > error > > > > > > > > > "No > > > > > > > > > valid > > > > > > > > > host was found" > > > > > > > > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > > > From: "Esra Celik" > > > > > > > > > > To: "Marius Cornea" > > > > > > > > > > Cc: "Ignacio Bravo" , > > > > > > > > > > rdo-list at redhat.com > > > > > > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM > > > > > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with > > > > > > > > > > error > > > > > > > > > > "No > > > > > > > > > > valid > > > > > > > > > > host was found" > > > > > > > > > > > > > > > > > > > > During deployment they are powering on and deploying the > > > > > > > > > > images. > > > > > > > > > > I > > > > > > > > > > see > > > > > > > > > > lot > > > > > > > > > > of > > > > > > > > > > connection error messages about ironic-python-agent but > > > > > > > > > > ignore > > > > > > > > > > them > > > > > > > > > > as > > > > > > > > > > mentioned here > > > > > > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html) > > > > > > > > > > > > > > > > > > That was referring to the introspection stage. From what I > > > > > > > > > can > > > > > > > > > tell > > > > > > > > > you > > > > > > > > > are > > > > > > > > > experiencing issues during deployment as it fails to > > > > > > > > > provision > > > > > > > > > the > > > > > > > > > nova > > > > > > > > > instances, can you check if during that stage the nodes get > > > > > > > > > powered > > > > > > > > > on? > > > > > > > > > > > > > > > > > > Make sure that before overcloud deploy the ironic nodes are > > > > > > > > > available > > > > > > > > > for > > > > > > > > > provisioning (ironic node-list and check the provisioning > > > > > > > > > state > > > > > > > > > column). 
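To confirm the nodes really are deploy-ready before retrying, a small loop over the registered nodes works; a sketch that assumes the standard ironic node-list table layout shown earlier in this thread:

for uuid in $(ironic node-list | grep -E 'power (on|off)' | awk '{print $2}'); do
    # expect provision_state "available" and maintenance False before a deploy
    ironic node-show $uuid | grep -E 'provision_state|power_state|maintenance'
done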
> > > > > > > > > Also check that you didn't miss any step in the docs in > > > > > > > > > regards > > > > > > > > > to > > > > > > > > > kernel > > > > > > > > > and ramdisk assignment, introspection, flavor creation(so it > > > > > > > > > matches > > > > > > > > > the > > > > > > > > > nodes resources) > > > > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html > > > > > > > > > > > > > > > > > > > > > > > > > > > > In instackenv.json file I do not need to add the undercloud > > > > > > > > > > node, > > > > > > > > > > or > > > > > > > > > > do > > > > > > > > > > I? > > > > > > > > > > > > > > > > > > No, the nodes details should be enough. > > > > > > > > > > > > > > > > > > > And which log files should I watch during deployment? > > > > > > > > > > > > > > > > > > You can check the openstack-ironic-conductor logs(journalctl > > > > > > > > > -fl > > > > > > > > > -u > > > > > > > > > openstack-ironic-conductor.service) and the logs in > > > > > > > > > /var/log/nova. > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > Esra > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ----- Orijinal Mesaj -----Kimden: Marius Cornea > > > > > > > > > > Kime: > > > > > > > > > > Esra Celik Kk: Ignacio Bravo > > > > > > > > > > , > > > > > > > > > > rdo-list at redhat.comGönderilenler: > > > > > > > > > > Tue, > > > > > > > > > > 13 > > > > > > > > > > Oct > > > > > > > > > > 2015 17:25:00 +0300 (EEST)Konu: Re: [Rdo-list] OverCloud > > > > > > > > > > deploy > > > > > > > > > > fails > > > > > > > > > > with > > > > > > > > > > error "No valid host was found" > > > > > > > > > > > > > > > > > > > > ----- Original Message -----> From: "Esra Celik" > > > > > > > > > > > > > > > > > > > > > To: "Ignacio Bravo" > Cc: > > > > > > > > > > rdo-list at redhat.com> > > > > > > > > > > Sent: > > > > > > > > > > Tuesday, October 13, 2015 3:47:57 PM> Subject: Re: > > > > > > > > > > [Rdo-list] > > > > > > > > > > OverCloud > > > > > > > > > > deploy fails with error "No valid host was found"> > > > > > > > > > > > > > Actually > > > > > > > > > > I > > > > > > > > > > re-installed the OS for Undercloud before deploying. > > > > > > > > > > However > > > > > > > > > > I > > > > > > > > > > did> > > > > > > > > > > not > > > > > > > > > > re-install the OS in Compute and Controller nodes.. I will > > > > > > > > > > reinstall> > > > > > > > > > > basic > > > > > > > > > > OS for them too, and retry.. > > > > > > > > > > > > > > > > > > > > You don't need to reinstall the OS on the controller and > > > > > > > > > > compute, > > > > > > > > > > they > > > > > > > > > > will > > > > > > > > > > get the image served by the undercloud. I'd recommend that > > > > > > > > > > during > > > > > > > > > > deployment > > > > > > > > > > you watch the servers console and make sure they get > > > > > > > > > > powered > > > > > > > > > > on, > > > > > > > > > > pxe > > > > > > > > > > boot, > > > > > > > > > > and actually get the image deployed. > > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > > Thanks> > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > > > > > > > www.bilgem.tubitak.gov.tr> celik.esra at tubitak.gov.tr> > > > > > > > > > > > > > Kimden: > > > > > > > > > > > "Ignacio > > > > > > > > > > > Bravo" > Kime: "Esra Celik" > > > > > > > > > > > > Kk: rdo-list at redhat.com> > > > > > > > > > > > Gönderilenler: > > > > > > > > > > > 13 Ekim Sal? 
2015 16:36:06> Konu: Re: [Rdo-list] > > > > > > > > > > > OverCloud > > > > > > > > > > > deploy > > > > > > > > > > > fails > > > > > > > > > > > with error "No valid host was> found"> > Esra,> > I > > > > > > > > > > > encountered > > > > > > > > > > > the > > > > > > > > > > > same > > > > > > > > > > > problem after deleting the stack and re-deploying.> > It > > > > > > > > > > > turns > > > > > > > > > > > out > > > > > > > > > > > that > > > > > > > > > > > 'heat stack-delete overcloud’ does remove the nodes > > > > > > > > > > > from> > > > > > > > > > > > ‘nova list’ and one would assume that the > > > > > > > > > > > baremetal > > > > > > > > > > > servers > > > > > > > > > > > are now ready to> be used for the next stack, but when > > > > > > > > > > > redeploying, > > > > > > > > > > > I > > > > > > > > > > > get > > > > > > > > > > > the same message of> not enough hosts available.> > You > > > > > > > > > > > can > > > > > > > > > > > look > > > > > > > > > > > into > > > > > > > > > > > the > > > > > > > > > > > nova logs and it mentions something about ‘node xxx > > > > > > > > > > > is> > > > > > > > > > > > already > > > > > > > > > > > associated with UUID yyyy’ and ‘I tried 3 > > > > > > > > > > > times > > > > > > > > > > > and > > > > > > > > > > > I’m > > > > > > > > > > > giving up’.> The issue is that the UUID yyyy > > > > > > > > > > > belonged > > > > > > > > > > > to > > > > > > > > > > > a > > > > > > > > > > > prior > > > > > > > > > > > unsuccessful deployment.> > I’m now redeploying the > > > > > > > > > > > basic > > > > > > > > > > > OS > > > > > > > > > > > to > > > > > > > > > > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG > > > > > > > > > > > Federal, > > > > > > > > > > > Inc> > > > > > > > > > > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct > > > > > > > > > > > 13, > > > > > > > > > > > 2015, > > > > > > > > > > > at > > > > > > > > > > > 9:25 > > > > > > > > > > > AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:> > > > > > > > > > > > > > Hi > > > > > > > > > > > all,> > > > > > > > > > > > > > > > > > > > > > > > OverCloud deploy fails with error "No valid host was > > > > > > > > > > > found"> > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy > > > > > > > > > > > --templates> > > > > > > > > > > > Deploying > > > > > > > > > > > templates in the directory> > > > > > > > > > > > /usr/share/openstack-tripleo-heat-templates> > > > > > > > > > > > Stack failed with status: Resource CREATE failed: > > > > > > > > > > > resources.Compute:> > > > > > > > > > > > ResourceInError: resources[0].resources.NovaCompute: Went > > > > > > > > > > > to > > > > > > > > > > > status > > > > > > > > > > > ERROR> > > > > > > > > > > > due to "Message: No valid host was found. 
There are not > > > > > > > > > > > enough > > > > > > > > > > > hosts> > > > > > > > > > > > available., Code: 500"> Heat Stack create failed.> > Here > > > > > > > > > > > are > > > > > > > > > > > some > > > > > > > > > > > logs:> > > > > > > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v > > > > > > > > > > > > COMPLETE > > > > > > > > > > > > Tue > > > > > > > > > > > > Oct > > > > > > > > > > > 13> 16:18:17 2015> > > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > > > | resource_name | physical_resource_id | resource_type | > > > > > > > > > > > | resource_status > > > > > > > > > > > |> | updated_time | stack_name |> > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | > > > > > > > > > > > | OS::Heat::ResourceGroup > > > > > > > > > > > |> | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | > > > > > > > > > > > |> | Controller > > > > > > > > > > > |> | | > > > > > > > > > > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | > > > > > > > > > > > OS::Heat::ResourceGroup> > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 | > > > > > > > > > > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | > > > > > > > > > > > OS::TripleO::Controller > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > > > > > > > > > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > > > > > > > > > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | > > > > > > > > > > > OS::TripleO::Compute > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > CREATE_FAILED | 2015-10-13T10:20:54 | > > > > > > > > > > > overcloud-Compute-vqk632ysg64r > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | > > > > > > > > > > > OS::Nova::Server > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:54 |> | > > > > > > > > > > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |> | > > > > > > > > > > > NovaCompute > > > > > > > > > > > | > > > > > > > > > > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > CREATE_FAILED > > > > > > > > > > > | 2015-10-13T10:20:56 |> | > > > > > > > > > > > | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > > > > > > > > > > |> > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud > > > > > > > > > > > > > Compute> > > > > > > > > > > > 
+------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > > | Property | Value |> > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > > | attributes | { |> | | "attributes": null, |> | | > > > > > > > > > > > | "refs": > > > > > > > > > > > | null > > > > > > > > > > > | |> > > > > > > > > > > > | | > > > > > > > > > > > | | > > > > > > > > > > > | } > > > > > > > > > > > |> | creation_time | 2015-10-13T10:20:36 |> | description > > > > > > > > > > > |> | | > > > > > > > > > > > |> | |> > > > > > > > > > > > |> | | > > > > > > > > > > > |> | links > > > > > > > > > > > |> | |> > > > > > > > > > > > | > > > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > > > > > > > > | (self) |> | | > > > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > > > > > > > > | | (stack) |> | | > > > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > > > > > > > > | | (nested) |> | logical_resource_id | Compute |> | > > > > > > > > > > > | | physical_resource_id > > > > > > > > > > > | e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > > > > > > > > ComputeAllNodesDeployment |> | | > > > > > > > > > > > ComputeNodesPostDeployment > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > > > ComputeCephDeployment |> | | > > > > > > > > > > > ComputeAllNodesValidationDeployment > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > > > AllNodesExtraConfig |> | | allNodesConfig |> | > > > > > > > > > > > resource_name > > > > > > > > > > > | > > > > > > > > > > > Compute > > > > > > > > > > > |> > > > > > > > > > > > | resource_status | CREATE_FAILED |> | > > > > > > > > > > > | resource_status_reason > > > > > > > > > > > | | > > > > > > > > > > > resources.Compute: ResourceInError:> | > > > > > > > > > > > resources[0].resources.NovaCompute: > > > > > > > > > > > Went to status ERROR due to "Message:> | No valid host > > > > > > > > > > > was > > > > > > > > > > > found. 
> > > > > > > > > > > There > > > > > > > > > > > are not enough hosts available., Code: 500"> | |> | > > > > > > > > > > > resource_type > > > > > > > > > > > | > > > > > > > > > > > OS::Heat::ResourceGroup |> | updated_time | > > > > > > > > > > > 2015-10-13T10:20:36 > > > > > > > > > > > |> > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > > > > > This is my instackenv.json for 1 compute and 1 > > > > > > > > > > > > > > control > > > > > > > > > > > > > > node > > > > > > > > > > > > > > to > > > > > > > > > > > > > > be > > > > > > > > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> > > > > > > > > > > > "mac":[> > > > > > > > > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > > > > > "disk":"10",> > > > > > > > > > > > "arch":"x86_64",> "pm_user":"root",> > > > > > > > > > > > "pm_password":"calvin",> > > > > > > > > > > > "pm_addr":"192.168.0.18"> },> {> > > > > > > > > > > > "pm_type":"pxe_ipmitool",> > > > > > > > > > > > "mac":[> > > > > > > > > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> "memory":"8192",> > > > > > > > > > > > "disk":"100",> > > > > > > > > > > > "arch":"x86_64",> "pm_user":"root",> > > > > > > > > > > > "pm_password":"calvin",> > > > > > > > > > > > "pm_addr":"192.168.0.19"> }> ]> }> > > Any ideas? Thanks > > > > > > > > > > > in > > > > > > > > > > > advance> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Esra ÇEL?K> TÜB?TAK B?LGEM> > > > > > > > > > > > www.bilgem.tubitak.gov.tr> > > > > > > > > > > > celik.esra at tubitak.gov.tr> > > > > > > > > > > > > _______________________________________________> Rdo-list > > > > > > > > > > > mailing > > > > > > > > > > > list> > > > > > > > > > > > Rdo-list at redhat.com> > > > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com> > > > > > > > > > > > > > > _______________________________________________> Rdo-list > > > > > > > > > > > mailing > > > > > > > > > > > list> > > > > > > > > > > > Rdo-list at redhat.com> > > > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list> > > > > > > > > > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > Rdo-list mailing list > > > > > > > Rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rdo-introspection-screenshot-2.png Type: image/png Size: 146063 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: rdo-introspection-screenshot.png
Type: image/png
Size: 96977 bytes
Desc: not available
URL: 

From dsneddon at redhat.com  Mon Oct 19 15:46:11 2015
From: dsneddon at redhat.com (Dan Sneddon)
Date: Mon, 19 Oct 2015 08:46:11 -0700
Subject: [Rdo-list] overcloud update [external - float interface name]
In-Reply-To: 
References: 
Message-ID: <56251043.8040809@redhat.com>

On 10/17/2015 09:49 PM, AliReza Taleghani wrote:
> :"> it was my fault...
> 
> the trick is covered in:
> [stack at undercloud ~]$ openstack help overcloud deploy
> 
> ########
> --neutron-public-interface NEUTRON_PUBLIC_INTERFACE
> /*Deprecated*/
> ########
> but if it's deprecated, what else exists?
> Does it mean we should check Neutron's own latest docs?
> 
> Sincerely,
> Ali R. Taleghani
> @linkedIn
> 
> On Sat, Oct 17, 2015 at 4:03 PM, Marius Cornea wrote:
> 
> On Sat, Oct 17, 2015 at 8:58 AM, AliReza Taleghani wrote:
> > I deployed my overcloud via:
> > ###
> > openstack overcloud deploy --compute-scale 4 --templates --compute-flavor
> > compute --control-flavor control
> > ###
> >
> > The baremetal servers' interfaces are named enoX, X == {1,2,3,4}.
> > I have also connected each baremetal server's eno1 directly to the
> > undercloud's eth1 as the management zone.
> > Now I want to create an external network for floating IP assignment via:
> > ###
> > neutron net-create ext-net --router:external --provider:physical_network
> > datacentre --provider:network_type flat
> > ###
> >
> > The datacentre network is directly connected to the public IP router, but
> > the problem is that I don't have such an interface name at the OS level...
> 
> Datacentre is a mapping which points to the br-ex OVS bridge. By default I
> believe the provisioning network NIC gets bridged to br-ex. If you
> want to have another interface bridged to br-ex you just need to add
> the following argument to the deploy command:
> 
> --neutron-public-interface eno2
> 
> or
> 
> --neutron-public-interface nic2
> assuming that eno2 is the 2nd NIC which has an active cable connected.
> 
> You should also check the network isolation feature for more advanced
> networking configurations:
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/network_isolation.html
> 
> > It seems I have two solutions:
> >
> > 1- rename the Controller's eno2 -> datacentre
> > 2- update the overcloud and (I don't know how) force it to use [ eno2 ]
> > instead of [ datacentre ]
> >
> > :-?
> >
> > Solution 1 (OS-level interface renaming) seems to be a bit destructive...
> > The CentOS wiki says we should edit a kernel parameter in GRUB to revert
> > to the old interface naming, and also stick to udev rules and the like...
> >
> > I would prefer to know how I can update my overcloud to accept eno2
> > instead of datacentre.
> >
> > thanks
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
> 
> 
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
> 
> To unsubscribe: rdo-list-unsubscribe at redhat.com

I believe that option is marked as deprecated because it only applies to
the basic (legacy) TripleO networking model (ctlplane + external). When
you are using network isolation (as covered in the Advanced Deployment
section), this CLI parameter does nothing.
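With network isolation, the external network mapping comes from Heat environment files rather than that CLI flag; a rough sketch of such a deploy, where network-environment.yaml is an illustrative user-supplied file describing the NIC layout, and the isolation environment is the one shipped with tripleo-heat-templates:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/network-environment.yaml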
-- 
Dan Sneddon         |  Principal OpenStack Engineer
dsneddon at redhat.com |  redhat.com/openstack
650.254.4025        |  dsneddon:irc  @dxs:twitter

From sasha at redhat.com  Mon Oct 19 16:13:24 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Mon, 19 Oct 2015 12:13:24 -0400 (EDT)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid
 host was found"
In-Reply-To: <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
 <1268677576.5046491.1444974015831.JavaMail.zimbra@tubitak.gov.tr>
 <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com>
 <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr>
 <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com>
 <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr>
 <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>
 <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com>

Could you please:
1. Run 'ironic node-set-provision-state [UUID] provide' for each node, where
[UUID] is replaced with the actual UUID of the node (see ironic node-list).
2. Retry the deployment.
Thanks.

Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Esra Celik"
> To: "Sasha Chuzhoy"
> Cc: "Marius Cornea" , rdo-list at redhat.com
> Sent: Monday, October 19, 2015 11:39:46 AM
> Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> 
> Hi Sasha
> 
> This is my instackenv.json. The MAC addresses are the em2 interfaces' MAC
> addresses of the nodes:
> 
> {
> "nodes": [
> {
> "pm_type":"pxe_ipmitool",
> "mac":[
> "08:9E:01:58:CC:A1"
> ],
> "cpu":"4",
> "memory":"8192",
> "disk":"10",
> "arch":"x86_64",
> "pm_user":"root",
> "pm_password":"",
> "pm_addr":"192.168.0.18"
> },
> {
> "pm_type":"pxe_ipmitool",
> "mac":[
> "08:9E:01:58:D0:3D"
> ],
> "cpu":"4",
> "memory":"8192",
> "disk":"100",
> "arch":"x86_64",
> "pm_user":"root",
> "pm_password":"",
> "pm_addr":"192.168.0.19"
> }
> ]
> }
> 
> This is my undercloud.conf file:
> image_path = .
> local_ip = 192.0.2.1/24
> local_interface = em2
> masquerade_network = 192.0.2.0/24
> dhcp_start = 192.0.2.5
> dhcp_end = 192.0.2.24
> network_cidr = 192.0.2.0/24
> network_gateway = 192.0.2.1
> inspection_interface = br-ctlplane
> inspection_iprange = 192.0.2.100,192.0.2.120
> inspection_runbench = false
> undercloud_debug = true
> enable_tuskar = false
> enable_tempest = false
> 
> I have previously sent the screenshots of the consoles during the
> introspection stage. Now I am attaching them again.
> I cannot log in to the consoles because the introspection stage did not
> complete successfully and I don't know the IP addresses. (nova list is empty)
> (I don't know if I can log in with the IP addresses that I previously set
> myself. I am not able to reach the nodes now, from home.)
> 
> I ran the flavor-create command after the introspection stage. But
> introspection was not completed successfully; I just ran the deploy command
> to see if nova list fills during deployment.
> 
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
> 
> ----- Sasha Chuzhoy wrote: -----
> > Esra,
> > Is it possible to check the console of the nodes being introspected and/or
> > deployed? I wonder if the instackenv.json file is accurate. Also, what's
> > the output from 'nova flavor-list'?
> > Thanks.
> > Best regards,
> > Sasha Chuzhoy.
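If the flavor turns out to be part of the problem, the usual recipe for this release is a baremetal flavor whose properties match what introspection recorded on the nodes; a sketch mirroring this thread's 4-CPU/8GB nodes (illustrative values; the disk size should stay at or below the smallest local_gb reported by ironic):

openstack flavor create --id auto --ram 8192 --disk 9 --vcpus 4 baremetal
openstack flavor set --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" baremetal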
----- > Original Message ----- > From: "Esra Celik" > > To: "Marius Cornea" > Cc: "Sasha Chuzhoy" > , rdo-list at redhat.com > Sent: Monday, October 19, 2015 > 9:51:51 AM > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No > valid host was found" > > All 3 baremetal nodes (1 undercloud, 2 overcloud) > have 2 nics. > > the undercloud machine's ip config is as follows: > > > [stack at undercloud ~]$ ip addr > 1: lo: mtu 65536 > qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd > 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever > preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever > preferred_lft forever > 2: em1: mtu 1500 > qdisc mq state UP qlen > 1000 > link/ether 08:9e:01:50:8a:21 brd > ff:ff:ff:ff:ff:ff > inet 10.1.34.81/24 brd 10.1.34.255 scope global em1 > > valid_lft forever preferred_lft forever > inet6 fe80::a9e:1ff:fe50:8a21/64 > scope link > valid_lft forever preferred_lft forever > 3: em2: > mtu 1500 qdisc mq master ovs-system > > state UP qlen 1000 > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > 4: > ovs-system: mtu 1500 qdisc noop state DOWN > > link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff > 5: br-ctlplane: > mtu 1500 qdisc noqueue > state UNKNOWN > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > inet 192.0.2.1/24 brd > 192.0.2.255 scope global br-ctlplane > valid_lft forever preferred_lft > forever > inet6 fe80::a9e:1ff:fe50:8a22/64 scope link > valid_lft forever > preferred_lft forever > 6: br-int: mtu 1500 qdisc noop > state DOWN > link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff > > I am > using em2 for pxe boot on the other machines.. So I configured > > instackenv.json to have em2's MAC address > For overcloud nodes, em1 was > configured to have 10.1.34.x ip, but after image > deploy I am not sure what > happened for that nic. > > Thanks > > Esra ÇEL?K > TÜB?TAK > B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > ----- > Orijinal Mesaj ----- > > > Kimden: "Marius Cornea" > > > Kime: "Esra Celik" > > Kk: "Sasha Chuzhoy" > , rdo-list at redhat.com > > Gönderilenler: 19 Ekim > Pazartesi 2015 15:36:58 > > Konu: Re: [Rdo-list] OverCloud deploy fails with > error "No valid host was > > found" > > > Hi, > > > I believe the nodes were > stuck in introspection so they were not ready for > > deployment thus the > not enough hosts message. Can you describe the > > networking setup (how > many nics the nodes have and to what networks they're > > connected)? > > > > Thanks, > > Marius > > > ----- Original Message ----- > > > From: "Esra > Celik" > > > To: "Sasha Chuzhoy" > > > > Cc: "Marius Cornea" , > rdo-list at redhat.com > > > Sent: Monday, October 19, 2015 12:34:32 PM > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host > > > > was found" > > > > > > Hi again, > > > > > > "nova list" was empty after > introspection stage which was not completed > > > successfully. So I cloud > not ssh the nodes.. Is there another way to > > > obtain > > > the IP > addresses? 
> > > > > > [stack at undercloud ~]$ sudo systemctl|grep ironic > > > > openstack-ironic-api.service loaded active running OpenStack Ironic API > > > > service > > > openstack-ironic-conductor.service loaded active running > OpenStack Ironic > > > Conductor service > > > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot > > > > dnsmasq service for Ironic Inspector > > > > openstack-ironic-inspector.service loaded active running Hardware > > > > introspection service for OpenStack Ironic > > > > > > If I start deployment > anyway I get 2 nodes in ERROR state > > > > > > [stack at undercloud ~]$ > openstack overcloud deploy --templates > > > Deploying templates in the > directory > > > /usr/share/openstack-tripleo-heat-templates > > > Stack > failed with status: resources.Controller: resources[0]: > > > > ResourceInError: resources.Controller: Went to status ERROR due to > > > > "Message: > > > No valid host was found. There are not enough hosts > available., Code: > > > 500" > > > > > > [stack at undercloud ~]$ nova list > > > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+ > > > > | ID | Name | Status | Task State | Power State | Networks | > > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+ > > > > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0 | > ERROR | > > > | - > > > | | > > > | NOSTATE | | > > > | > 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR > > > > | | > > > | - > > > | | NOSTATE | | > > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+ > > > > > > > Did the repositories update during weekend? Should I better > restart the > > > overall Undercloud and Overcloud installation from the > beginning? > > > > > > Thanks. > > > > > > Esra ÇEL?K > > > Uzman > Ara?t?rmac? > > > Bili?im Teknolojileri Enstitüsü > > > > TÜB?TAK B?LGEM > > > 41470 GEBZE - KOCAEL? > > > T +90 262 675 3140 > > > > F +90 262 646 3187 > > > www.bilgem.tubitak.gov.tr > > > > celik.esra at tubitak.gov.tr > > > > ................................................................ > > > > > > > Sorumluluk Reddi > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > Kimden: "Sasha Chuzhoy" > > > > Kime: "Esra Celik" > > > > > Kk: "Marius Cornea" > , rdo-list at redhat.com > > > > Gönderilenler: 16 > Ekim Cuma 2015 18:44:49 > > > > Konu: Re: [Rdo-list] OverCloud deploy fails > with error "No valid host > > > > was > > > > found" > > > > > > > Hi Esra, > > > > > if the undercloud nodes are UP - you can login with: ssh > > > > > heat-admin@ > > > > You can see the IP of the nodes with: "nova list". > > > > > BTW, > > > > What do you see if you run "sudo systemctl|grep ironic" > on the > > > > undercloud? > > > > > > > Best regards, > > > > Sasha > Chuzhoy. > > > > > > > ----- Original Message ----- > > > > > From: "Esra > Celik" > > > > > To: "Sasha Chuzhoy" > > > > > > Cc: "Marius Cornea" , > rdo-list at redhat.com > > > > > Sent: Friday, October 16, 2015 1:40:16 AM > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid > > > > > > host > > > > > was found" > > > > > > > > > > Hi Sasha, > > > > > > > > > > > I have 3 nodes, 1 Undercloud, 1 Overcloud-Controller, 1 > > > > > > Overcloud-Compute > > > > > > > > > > This is my undercloud.conf file: > > > > > > > > > > > image_path = . 
> > > > > local_ip = 192.0.2.1/24 > > > > > > local_interface = em2 > > > > > masquerade_network = 192.0.2.0/24 > > > > > > dhcp_start = 192.0.2.5 > > > > > dhcp_end = 192.0.2.24 > > > > > > network_cidr = 192.0.2.0/24 > > > > > network_gateway = 192.0.2.1 > > > > > > inspection_interface = br-ctlplane > > > > > inspection_iprange = > 192.0.2.100,192.0.2.120 > > > > > inspection_runbench = false > > > > > > undercloud_debug = true > > > > > enable_tuskar = false > > > > > > enable_tempest = false > > > > > > > > > > IP configuration for the > Undercloud is as follows: > > > > > > > > > > stack at undercloud ~]$ ip addr > > > > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > > > > inet 127.0.0.1/8 scope host lo > > > > > valid_lft forever preferred_lft > forever > > > > > inet6 ::1/128 scope host > > > > > valid_lft forever > preferred_lft forever > > > > > 2: em1: > mtu 1500 qdisc mq state UP > > > > > qlen > > > > > 1000 > > > > > > link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff > > > > > inet > 10.1.34.81/24 brd 10.1.34.255 scope global em1 > > > > > valid_lft forever > preferred_lft forever > > > > > inet6 fe80::a9e:1ff:fe50:8a21/64 scope link > > > > > > valid_lft forever preferred_lft forever > > > > > 3: em2: > mtu 1500 qdisc mq master > > > > > > ovs-system > > > > > state UP qlen 1000 > > > > > link/ether > 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > > > > > 4: ovs-system: > mtu 1500 qdisc noop state DOWN > > > > > link/ether > 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff > > > > > 5: br-ctlplane: > mtu 1500 qdisc > > > > > noqueue > > > > > > state UNKNOWN > > > > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > > > > > > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane > > > > > > valid_lft forever preferred_lft forever > > > > > inet6 > fe80::a9e:1ff:fe50:8a22/64 scope link > > > > > valid_lft forever > preferred_lft forever > > > > > 6: br-int: mtu 1500 > qdisc noop state DOWN > > > > > link/ether fa:85:ac:92:f5:41 brd > ff:ff:ff:ff:ff:ff > > > > > > > > > > And I attached two screenshots showing > the boot stage for overcloud > > > > > nodes > > > > > > > > > > Is there a > way to login the overcloud nodes to see their IP > > > > > configuration? > > > > > > > > > > > Thanks > > > > > > > > > > Esra ÇEL?K > > > > > > TÜB?TAK B?LGEM > > > > > www.bilgem.tubitak.gov.tr > > > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > > > > > Kimden: "Sasha Chuzhoy" > > > > > > > Kime: "Esra Celik" > > > > > > Kk: "Marius > Cornea" , rdo-list at redhat.com > > > > > > > Gönderilenler: 15 Ekim Per?embe 2015 16:58:41 > > > > > > Konu: Re: > [Rdo-list] OverCloud deploy fails with error "No valid > > > > > > host > > > > > > > was > > > > > > found" > > > > > > > > > > > Just my 2 cents. > > > > > > > Did you make sure that all the registered nodes are configured to > > > > > > > boot > > > > > > off > > > > > > the right NIC first? > > > > > > > Can you watch the console and see what happens on the problematic > > > > > > > nodes > > > > > > upon > > > > > > boot? > > > > > > > > > > > Best > regards, > > > > > > Sasha Chuzhoy. 
> > > > > > ----- Original Message -----
> > > > > > > From: "Esra Celik"
> > > > > > > To: "Marius Cornea"
> > > > > > > Cc: rdo-list at redhat.com
> > > > > > > Sent: Thursday, October 15, 2015 4:40:46 AM
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Sorry for the late reply.
> > > > > > >
> > > > > > > ironic node-show results are below. I have my nodes powered on after introspection bulk start, and I get the following warning:
> > > > > > > Introspection didn't finish for nodes 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > >
> > > > > > > Doesn't seem to be the same issue as https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
> > > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > | target_power_state     | None |
> > > > > > > | extra                  | {} |
> > > > > > > | last_error             | None |
> > > > > > > | updated_at             | 2015-10-15T08:26:42+00:00 |
> > > > > > > | maintenance_reason     | None |
> > > > > > > | provision_state        | available |
> > > > > > > | clean_step             | {} |
> > > > > > > | uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed |
> > > > > > > | console_enabled        | False |
> > > > > > > | target_provision_state | None |
> > > > > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00 |
> > > > > > > | maintenance            | False |
> > > > > > > | inspection_started_at  | None |
> > > > > > > | inspection_finished_at | None |
> > > > > > > | power_state            | power on |
> > > > > > > | driver                 | pxe_ipmitool |
> > > > > > > | reservation            | None |
> > > > > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10', u'cpus': u'4', u'capabilities': u'boot_option:local'} |
> > > > > > > | instance_uuid          | None |
> > > > > > > | name                   | None |
> > > > > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18', u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f-e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > > > > | created_at             | 2015-10-15T07:49:08+00:00 |
> > > > > > > | driver_internal_info   | {u'clean_steps': None} |
> > > > > > > | chassis_uuid           | |
> > > > > > > | instance_info          | {} |
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > | target_power_state     | None |
> > > > > > > | extra                  | {} |
> > > > > > > | last_error             | None |
> > > > > > > | updated_at             | 2015-10-15T08:26:42+00:00 |
> > > > > > > | maintenance_reason     | None |
> > > > > > > | provision_state        | available |
> > > > > > > | clean_step             | {} |
> > > > > > > | uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218 |
> > > > > > > | console_enabled        | False |
> > > > > > > | target_provision_state | None |
> > > > > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00 |
> > > > > > > | maintenance            | False |
> > > > > > > | inspection_started_at  | None |
> > > > > > > | inspection_finished_at | None |
> > > > > > > | power_state            | power on |
> > > > > > > | driver                 | pxe_ipmitool |
> > > > > > > | reservation            | None |
> > > > > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100', u'cpus': u'4', u'capabilities': u'boot_option:local'} |
> > > > > > > | instance_uuid          | None |
> > > > > > > | name                   | None |
> > > > > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19', u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f-e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> > > > > > > | created_at             | 2015-10-15T07:49:08+00:00 |
> > > > > > > | driver_internal_info   | {u'clean_steps': None} |
> > > > > > > | chassis_uuid           | |
> > > > > > > | instance_info          | {} |
> > > > > > > [stack at undercloud ~]$
> > > > > > >
> > > > > > > And below I added my history for the stack user.
> > > > > > > I don't think I am doing anything other than what the https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty doc describes.
> > > > > > >
> > > > > > > 1  vi instackenv.json
> > > > > > > 2  sudo yum -y install epel-release
> > > > > > > 3  sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> > > > > > > 4  sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> > > > > > > 5  sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
> > > > > > > 6  sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
> > > > > > >    includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> > > > > > >    EOF"
> > > > > > > 7  sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> > > > > > > 8  sudo yum -y install yum-plugin-priorities
> > > > > > > 9  sudo yum install -y python-tripleoclient
> > > > > > > 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> > > > > > > 11 vi undercloud.conf
> > > > > > > 12 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 13 openstack undercloud install
> > > > > > > 14 source stackrc
> > > > > > > 15 export NODE_DIST=centos7
> > > > > > > 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
> > > > > > > 17 export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 18 openstack overcloud image build --all
> > > > > > > 19 ls
> > > > > > > 20 openstack overcloud image upload
> > > > > > > 21 openstack baremetal import --json instackenv.json
> > > > > > > 22 openstack baremetal configure boot
> > > > > > > 23 ironic node-list
> > > > > > > 24 openstack baremetal introspection bulk start
> > > > > > > 25 ironic node-list
> > > > > > > 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > 28 history
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > Esra ÇELİK
> > > > > > > TÜBİTAK BİLGEM
> > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > celik.esra at tubitak.gov.tr
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > From: "Marius Cornea"
> > > > > > > To: "Esra Celik"
> > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > Sent: Wednesday, October 14, 2015 19:40:07
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Can you do ironic node-show for your ironic nodes and post the results?
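For two nodes this is easily typed by hand, but scripted it looks like the following sketch, using the two UUIDs from the node-list output above:

    # Dump full details for both registered nodes
    for uuid in 5b28998f-4dc8-42aa-8a51-521e20b1e5ed \
                6f35ac24-135d-4b99-8a24-fa2b731bd218; do
        ironic node-show "$uuid"
    done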
> > > > > > > Also check the following suggestion if you're experiencing the same issue:
> > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > > From: "Esra Celik"
> > > > > > > > To: "Marius Cornea"
> > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > >
> > > > > > > > Well, in the early stage of the introspection I can see the Client IP of the nodes (screenshot attached). But then I see continuous ironic-python-agent errors (screenshot-2 attached). The errors repeat after a timeout, and the nodes are not powered off.
> > > > > > > >
> > > > > > > > Seems like I am stuck in the introspection stage..
> > > > > > > >
> > > > > > > > I can use the ipmitool command to successfully power the nodes on/off:
> > > > > > > >
> > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P  power status
> > > > > > > > Chassis Power is on
> > > > > > > >
> > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > > > > > > Chassis Power is on
> > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power off
> > > > > > > > Chassis Power Control: Down/Off
> > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > > > > > > Chassis Power is off
> > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power on
> > > > > > > > Chassis Power Control: Up/On
> > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > > > > > > Chassis Power is on
> > > > > > > >
> > > > > > > > Esra ÇELİK
> > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > >
> > > > > > > > ----- Original Message -----
> > > > > > > > From: "Marius Cornea"
> > > > > > > > To: "Esra Celik"
> > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > Sent: Wednesday, October 14, 2015 14:59:30
> > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Esra Celik"
> > > > > > > > > To: "Marius Cornea"
> > > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > >
> > > > > > > > > Well, today I started with re-installing the OS, and nothing seems wrong with the undercloud installation; then:
> > > > > > > > >
> > > > > > > > > I see an error during image build
> > > > > > > > >
> > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all
> > > > > > > > > ... a lot of log ...
> > > > > > > > > ++ cat /etc/dib_dracut_drivers
> > > > > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net virtio_blk target_core_mod iscsi_target_mod target_core_iblock target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> > > > > > > > > cat: write error: Broken pipe
> > > > > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > > > > > > > + chmod o+r /tmp/kernel
> > > > > > > > > + trap EXIT
> > > > > > > > > + target_tag=99-build-dracut-ramdisk
> > > > > > > > > + date +%s.%N
> > > > > > > > > + output '99-build-dracut-ramdisk completed'
> > > > > > > > > ... a lot of log ...
> > > > > > > >
> > > > > > > > You can ignore that afaik; if you end up having all the required images it should be ok.
> > > > > > > >
> > > > > > > > > Then, during the introspection stage I see ironic-python-agent errors on the nodes (screenshot attached) and the following warnings
> > > > > > > >
> > > > > > > > That looks odd. Is it showing up in the early stage of the introspection? At some point it should receive an address by DHCP and the "Network is unreachable" error should disappear. Does the introspection complete and the nodes are turned off?
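Marius's question can be answered without watching the console. A sketch, assuming python-ironic-inspector-client is installed on the undercloud (it provides this subcommand in the Liberty series; <node-uuid> is whichever UUID ironic node-list shows):

    # "finished: True" plus an empty error field means introspection completed
    openstack baremetal introspection status <node-uuid>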
> > > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from group "pxe" is deprecated. Use option "http_url" from group "deploy".
> > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from group "pxe" is deprecated. Use option "http_root" from group "deploy".
> > > > > > > > >
> > > > > > > > > Before deployment ironic node-list:
> > > > > > > >
> > > > > > > > This is odd too, as I'm expecting the nodes to be powered off before running deployment.
> > > > > > > >
> > > > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > > > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > >
> > > > > > > > > During deployment I get the following errors
> > > > > > > > >
> > > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while running command.
> > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command.
> > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI call failed: power status..
> > > > > > > >
> > > > > > > > This looks like an ipmi error; can you try to manually run commands using the ipmitool and see if you get any success? It's also worth filing a bug with details such as the ipmitool version, server model, and drac firmware version.
> > > > > > > >
> > > > > > > > > Thanks a lot
> > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Marius Cornea"
> > > > > > > > > To: "Esra Celik"
> > > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > > Sent: Tuesday, October 13, 2015 21:16:14
> > > > > > > > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: "Esra Celik"
> > > > > > > > > > To: "Marius Cornea"
> > > > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > >
> > > > > > > > > > During deployment they are powering on and deploying the images. I see a lot of connection error messages about ironic-python-agent but ignore them as mentioned here (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > > > > > > > >
> > > > > > > > > That was referring to the introspection stage. From what I can tell you are experiencing issues during deployment, as it fails to provision the nova instances; can you check if during that stage the nodes get powered on?
> > > > > > > > >
> > > > > > > > > Make sure that before overcloud deploy the ironic nodes are available for provisioning (ironic node-list and check the provisioning state column).
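That pre-deploy check condenses to a couple of commands. A small sketch; the grep simply counts rows whose Provisioning State column prints "available", per the table layout shown above:

    # Every node you expect to deploy should be powered off, "available",
    # and not in maintenance before running overcloud deploy
    ironic node-list
    echo "schedulable nodes: $(ironic node-list | grep -c '| available')"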
> > > > > > > > > Also check that you didn't miss any step in the docs in regards to kernel and ramdisk assignment, introspection, and flavor creation (so it matches the nodes' resources):
> > > > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> > > > > > > > >
> > > > > > > > > > In the instackenv.json file I do not need to add the undercloud node, or do I?
> > > > > > > > >
> > > > > > > > > No, the nodes' details should be enough.
> > > > > > > > >
> > > > > > > > > > And which log files should I watch during deployment?
> > > > > > > > >
> > > > > > > > > You can check the openstack-ironic-conductor logs (journalctl -fl -u openstack-ironic-conductor.service) and the logs in /var/log/nova.
> > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > > Esra
> > > > > > > > > >
> > > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: Marius Cornea
> > > > > > > > > > To: Esra Celik
> > > > > > > > > > Cc: Ignacio Bravo, rdo-list at redhat.com
> > > > > > > > > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > >
> > > > > > > > > > ----- Original Message -----
> > > > > > > > > > > From: "Esra Celik"
> > > > > > > > > > > To: "Ignacio Bravo"
> > > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > Actually I re-installed the OS for the Undercloud before deploying. However I did not re-install the OS on the Compute and Controller nodes.. I will reinstall the basic OS for them too, and retry..
> > > > > > > > > >
> > > > > > > > > > You don't need to reinstall the OS on the controller and compute; they will get the image served by the undercloud. I'd recommend that during deployment you watch the servers' consoles and make sure they get powered on, PXE boot, and actually get the image deployed.
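Putting Marius's two log suggestions together, a deployment can be watched live from the undercloud with something like this sketch (the exact file names under /var/log/nova vary between setups, hence the glob):

    # Conductor journal in the background, nova logs in the foreground
    sudo journalctl -fl -u openstack-ironic-conductor.service &
    sudo tail -f /var/log/nova/*.log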
> > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > > > Esra ÇELİK
> > > > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > > > >
> > > > > > > > > > > ----- Original Message -----
> > > > > > > > > > > From: "Ignacio Bravo"
> > > > > > > > > > > To: "Esra Celik"
> > > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > > Sent: Tuesday, October 13, 2015 16:36:06
> > > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > Esra,
> > > > > > > > > > >
> > > > > > > > > > > I encountered the same problem after deleting the stack and re-deploying.
> > > > > > > > > > >
> > > > > > > > > > > It turns out that 'heat stack-delete overcloud' does remove the nodes from 'nova list', and one would assume that the baremetal servers are now ready to be used for the next stack, but when redeploying I get the same message of not enough hosts available.
> > > > > > > > > > >
> > > > > > > > > > > You can look into the nova logs and it mentions something about 'node xxx is already associated with UUID yyyy' and 'I tried 3 times and I'm giving up'. The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.
> > > > > > > > > > >
> > > > > > > > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > > > > > > > >
> > > > > > > > > > > IB
> > > > > > > > > > >
> > > > > > > > > > > __
> > > > > > > > > > > Ignacio Bravo
> > > > > > > > > > > LTG Federal, Inc
> > > > > > > > > > > www.ltgfederal.com
> > > > > > > > > > > Office: (703) 951-7760
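For what it's worth, the stale association Ignacio describes can usually be inspected, and often cleared, without reinstalling anything. A hedged sketch using standard ironicclient node-update syntax; treat it as a workaround rather than a documented recovery path, and <node-uuid> is whichever node still shows a leftover Instance UUID:

    # Nodes still tied to an instance from the failed stack show a non-None
    # Instance UUID in this table even after the heat stack is gone
    ironic node-list

    # Detach the stale instance record from the node
    ironic node-update <node-uuid> remove instance_uuid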
> > > > > > > > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > > > > > > > > > >
> > > > > > > > > > > Hi all,
> > > > > > > > > > >
> > > > > > > > > > > OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > > > > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > > > > > > > Stack failed with status: Resource CREATE failed: resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
> > > > > > > > > > > Heat Stack create failed.
> > > > > > > > > > >
> > > > > > > > > > > Here are some logs:
> > > > > > > > > > >
> > > > > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > > > > > > > >
> > > > > > > > > > > | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name |
> > > > > > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |
> > > > > > > > > > > | Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |
> > > > > > > > > > > | 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs |
> > > > > > > > > > > | 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |
> > > > > > > > > > > | Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > > > > > > > | NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server | CREATE_FAILED | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef |
> > > > > > > > > > >
> > > > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > > > > > > > > | Property | Value |
> > > > > > > > > > > | attributes | { "attributes": null, "refs": null } |
> > > > > > > > > > > | creation_time | 2015-10-13T10:20:36 |
> > > > > > > > > > > | description | |
> > > > > > > > > > > | links | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > > > > > > > > |       | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > > > > > > > > |       | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > > > > > > > > | logical_resource_id | Compute |
> > > > > > > > > > > | physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 |
> > > > > > > > > > > | required_by | ComputeAllNodesDeployment, ComputeNodesPostDeployment, ComputeCephDeployment, ComputeAllNodesValidationDeployment, AllNodesExtraConfig, allNodesConfig |
> > > > > > > > > > > | resource_name | Compute |
> > > > > > > > > > > | resource_status | CREATE_FAILED |
> > > > > > > > > > > | resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found.
> > > > > > > > > > > There are not enough hosts available., Code: 500" |
> > > > > > > > > > > | resource_type | OS::Heat::ResourceGroup |
> > > > > > > > > > > | updated_time | 2015-10-13T10:20:36 |
> > > > > > > > > > >
> > > > > > > > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed:
> > > > > > > > > > >
> > > > > > > > > > > {
> > > > > > > > > > >   "nodes": [
> > > > > > > > > > >     {
> > > > > > > > > > >       "pm_type":"pxe_ipmitool",
> > > > > > > > > > >       "mac":[ "08:9E:01:58:CC:A1" ],
> > > > > > > > > > >       "cpu":"4",
> > > > > > > > > > >       "memory":"8192",
> > > > > > > > > > >       "disk":"10",
> > > > > > > > > > >       "arch":"x86_64",
> > > > > > > > > > >       "pm_user":"root",
> > > > > > > > > > >       "pm_password":"calvin",
> > > > > > > > > > >       "pm_addr":"192.168.0.18"
> > > > > > > > > > >     },
> > > > > > > > > > >     {
> > > > > > > > > > >       "pm_type":"pxe_ipmitool",
> > > > > > > > > > >       "mac":[ "08:9E:01:58:D0:3D" ],
> > > > > > > > > > >       "cpu":"4",
> > > > > > > > > > >       "memory":"8192",
> > > > > > > > > > >       "disk":"100",
> > > > > > > > > > >       "arch":"x86_64",
> > > > > > > > > > >       "pm_user":"root",
> > > > > > > > > > >       "pm_password":"calvin",
> > > > > > > > > > >       "pm_addr":"192.168.0.19"
> > > > > > > > > > >     }
> > > > > > > > > > >   ]
> > > > > > > > > > > }
> > > > > > > > > > >
> > > > > > > > > > > Any ideas? Thanks in advance
> > > > > > > > > > >
> > > > > > > > > > > Esra ÇELİK
> > > > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > > > >
> > > > > > > > > > > _______________________________________________
> > > > > > > > > > > Rdo-list mailing list
> > > > > > > > > > > Rdo-list at redhat.com
> > > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > > > > > > >
> > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > > > >
> > > > > > > _______________________________________________
> > > > > > > Rdo-list mailing list
> > > > > > > Rdo-list at redhat.com
> > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > > >
> > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com

From celik.esra at tubitak.gov.tr  Tue Oct 20 05:31:41 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Tue, 20 Oct 2015 08:31:41 +0300 (EEST)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
 <1292368113.59262232.1445010289525.JavaMail.zimbra@redhat.com>
 <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr>
 <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com>
 <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr>
 <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>
 <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr>
 <1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com>
Message-ID: <104309364.6608280.1445319101175.JavaMail.zimbra@tubitak.gov.tr>

Ok, I ran ironic node-set-provision-state [UUID] provide for each node and retried deployment. I attached the screenshots.

[stack at undercloud ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power off   | available          | False       |
| 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+

[stack at undercloud ~]$ nova flavor-list
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| b9428c86-5696-4d68-a0e0-77faf4e7f627 | baremetal | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+

[stack at undercloud ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Stack failed with status: resources.Controller: resources[0]: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
Heat Stack update failed.

[stack at undercloud ~]$ sudo systemctl|grep ironic
openstack-ironic-api.service loaded active running OpenStack Ironic API service
openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic

"journalctl -fl -u openstack-ironic-conductor.service" gives no warning or error.

Regards

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Original Message -----
> From: "Sasha Chuzhoy"
> To: "Esra Celik"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Monday, October 19, 2015 19:13:24
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Could you please
> 1. run 'ironic node-set-provision-state [UUID] provide' for each node, where UUID is replaced with the actual UUID of the node (ironic node-list).
> 2. retry the deployment
>
> Thanks.
> Best regards,
> Sasha Chuzhoy.
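Step 1 can be scripted instead of typed per node. A bash sketch; the awk assumes the bordered table layout shown in the node-list outputs above, plus gawk's interval-regex support (the default on CentOS 7):

    # Move every registered node back to the "available" provision state
    for uuid in $(ironic node-list | awk -F'|' '$2 ~ /[0-9a-f]{8}-/ {gsub(/ /, "", $2); print $2}'); do
        ironic node-set-provision-state "$uuid" provide
    done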
> ----- Original Message -----
> > From: "Esra Celik"
> > To: "Sasha Chuzhoy"
> > Cc: "Marius Cornea", rdo-list at redhat.com
> > Sent: Monday, October 19, 2015 11:39:46 AM
> > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Hi Sasha
> >
> > This is my instackenv.json. The MAC addresses are the em2 interface's MAC addresses of the nodes:
> >
> > {
> >   "nodes": [
> >     {
> >       "pm_type":"pxe_ipmitool",
> >       "mac":[ "08:9E:01:58:CC:A1" ],
> >       "cpu":"4",
> >       "memory":"8192",
> >       "disk":"10",
> >       "arch":"x86_64",
> >       "pm_user":"root",
> >       "pm_password":"",
> >       "pm_addr":"192.168.0.18"
> >     },
> >     {
> >       "pm_type":"pxe_ipmitool",
> >       "mac":[ "08:9E:01:58:D0:3D" ],
> >       "cpu":"4",
> >       "memory":"8192",
> >       "disk":"100",
> >       "arch":"x86_64",
> >       "pm_user":"root",
> >       "pm_password":"",
> >       "pm_addr":"192.168.0.19"
> >     }
> >   ]
> > }
> >
> > This is my undercloud.conf file:
> >
> > image_path = .
> > local_ip = 192.0.2.1/24
> > local_interface = em2
> > masquerade_network = 192.0.2.0/24
> > dhcp_start = 192.0.2.5
> > dhcp_end = 192.0.2.24
> > network_cidr = 192.0.2.0/24
> > network_gateway = 192.0.2.1
> > inspection_interface = br-ctlplane
> > inspection_iprange = 192.0.2.100,192.0.2.120
> > inspection_runbench = false
> > undercloud_debug = true
> > enable_tuskar = false
> > enable_tempest = false
> >
> > I have previously sent the screenshots of the consoles during the introspection stage. Now I am attaching them again.
> > I cannot login to the consoles because the introspection stage is not completed successfully and I don't know the IP addresses. (nova list is empty)
> > (I don't know if I can login with the IP addresses that I previously set myself. I am not able to reach the nodes now, from home.)
> >
> > I ran the flavor-create command after the introspection stage. But introspection was not completed successfully; I just ran the deploy command to see if nova list fills during deployment.
> >
> > Esra ÇELİK
> > TÜBİTAK BİLGEM
> > www.bilgem.tubitak.gov.tr
> > celik.esra at tubitak.gov.tr
> >
> > ----- Sasha Chuzhoy wrote: -----
> >
> > > Esra,
> > > Is it possible to check the console of the nodes being introspected and/or deployed? I wonder if the instackenv.json file is accurate.
> > > Also, what's the output from 'nova flavor-list'?
> > > Thanks.
> > >
> > > Best regards,
> > > Sasha Chuzhoy.
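Both of Sasha's doubts can be checked mechanically. A sketch (python -m json.tool ships with the stock Python 2 on the undercloud; $IPMI_PASSWORD again stands in for the pm_password elided above):

    # 1. Is instackenv.json well-formed JSON?
    python -m json.tool instackenv.json > /dev/null && echo "JSON OK"

    # 2. Does each pm_addr actually answer IPMI with the listed credentials?
    for ip in 192.168.0.18 192.168.0.19; do
        ipmitool -I lanplus -H "$ip" -U root -P "$IPMI_PASSWORD" chassis power status
    done

It is also worth comparing nova flavor-show baremetal against the properties field of ironic node-show: the scheduler will generally not place an instance on a node whose memory, disk, or cpus fall short of the flavor.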
> > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Marius Cornea"
> > > > Cc: "Sasha Chuzhoy", rdo-list at redhat.com
> > > > Sent: Monday, October 19, 2015 9:51:51 AM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 nics.
> > > >
> > > > The undercloud machine's ip config is as follows:
> > > >
> > > > [stack at undercloud ~]$ ip addr
> > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > >    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > >    inet 127.0.0.1/8 scope host lo
> > > >       valid_lft forever preferred_lft forever
> > > >    inet6 ::1/128 scope host
> > > >       valid_lft forever preferred_lft forever
> > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > >    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > >    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > >       valid_lft forever preferred_lft forever
> > > >    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > >       valid_lft forever preferred_lft forever
> > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > >    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > >    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > >       valid_lft forever preferred_lft forever
> > > >    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > >       valid_lft forever preferred_lft forever
> > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > >    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > >
> > > > I am using em2 for pxe boot on the other machines, so I configured instackenv.json to have em2's MAC address.
> > > > For the overcloud nodes, em1 was configured to have a 10.1.34.x ip, but after image deploy I am not sure what happened to that nic.
> > > >
> > > > Thanks
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > >
> > > > ----- Original Message -----
> > > > > From: "Marius Cornea"
> > > > > To: "Esra Celik"
> > > > > Cc: "Sasha Chuzhoy", rdo-list at redhat.com
> > > > > Sent: Monday, October 19, 2015 15:36:58
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Hi,
> > > > >
> > > > > I believe the nodes were stuck in introspection, so they were not ready for deployment, thus the "not enough hosts" message. Can you describe the networking setup (how many nics the nodes have and to what networks they're connected)?
> > > > >
> > > > > Thanks,
> > > > > Marius
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Esra Celik"
> > > > > > To: "Sasha Chuzhoy"
> > > > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > > > Sent: Monday, October 19, 2015 12:34:32 PM
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > Hi again,
> > > > > >
> > > > > > "nova list" was empty after the introspection stage, which was not completed successfully. So I could not ssh the nodes.. Is there another way to obtain the IP addresses?
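When nova list is empty, the ctlplane DHCP leases are one of the few breadcrumbs left. A sketch, assuming dnsmasq's default DHCP logging ends up in the journal of the inspector's dnsmasq unit (the unit name matches the systemctl output quoted below):

    # Which MACs were handed a 192.0.2.x address, and which address
    sudo journalctl -u openstack-ironic-inspector-dnsmasq | grep -i dhcpack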
> > > > > > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > > > > > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > > > > > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > > > > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > > > > > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> > > > > >
> > > > > > If I start deployment anyway I get 2 nodes in ERROR state:
> > > > > >
> > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > > Stack failed with status: resources.Controller: resources[0]: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
> > > > > >
> > > > > > [stack at undercloud ~]$ nova list
> > > > > > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR | - | NOSTATE | |
> > > > > > | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR | - | NOSTATE | |
> > > > > >
> > > > > > Did the repositories update during the weekend? Or should I restart the overall Undercloud and Overcloud installation from the beginning?
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > Esra ÇELİK
> > > > > > Uzman Araştırmacı (Senior Researcher)
> > > > > > Bilişim Teknolojileri Enstitüsü (Information Technologies Institute)
> > > > > > TÜBİTAK BİLGEM
> > > > > > 41470 GEBZE - KOCAELİ
> > > > > > T +90 262 675 3140
> > > > > > F +90 262 646 3187
> > > > > > www.bilgem.tubitak.gov.tr
> > > > > > celik.esra at tubitak.gov.tr
> > > > > > ................................................................
> > > > > > Sorumluluk Reddi (Disclaimer)
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Sasha Chuzhoy"
> > > > > > > To: "Esra Celik"
> > > > > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > > > > Sent: Friday, October 16, 2015 18:44:49
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Hi Esra,
> > > > > > > if the undercloud nodes are UP - you can login with: ssh heat-admin@
> > > > > > > You can see the IP of the nodes with: "nova list".
> > > > > > > BTW, what do you see if you run "sudo systemctl|grep ironic" on the undercloud?
> > > > > > >
> > > > > > > Best regards,
> > > > > > > Sasha Chuzhoy.
> > > > > > > ----- Original Message -----
> > > > > > > > From: "Esra Celik"
> > > > > > > > To: "Sasha Chuzhoy"
> > > > > > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > > > > > Sent: Friday, October 16, 2015 1:40:16 AM
> > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > >
> > > > > > > > Hi Sasha,
> > > > > > > >
> > > > > > > > I have 3 nodes: 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute.
> > > > > > > >
> > > > > > > > This is my undercloud.conf file:
> > > > > > > >
> > > > > > > > image_path = .
> > > > > > > > local_ip = 192.0.2.1/24
> > > > > > > > local_interface = em2
> > > > > > > > masquerade_network = 192.0.2.0/24
> > > > > > > > dhcp_start = 192.0.2.5
> > > > > > > > dhcp_end = 192.0.2.24
> > > > > > > > network_cidr = 192.0.2.0/24
> > > > > > > > network_gateway = 192.0.2.1
> > > > > > > > inspection_interface = br-ctlplane
> > > > > > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > > > > > inspection_runbench = false
> > > > > > > > undercloud_debug = true
> > > > > > > > enable_tuskar = false
> > > > > > > > enable_tempest = false
> > > > > > > >
> > > > > > > > IP configuration for the Undercloud is as follows:
> > > > > > > >
> > > > > > > > [stack at undercloud ~]$ ip addr
> > > > > > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > > > > > >    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > > > >    inet 127.0.0.1/8 scope host lo
> > > > > > > >       valid_lft forever preferred_lft forever
> > > > > > > >    inet6 ::1/128 scope host
> > > > > > > >       valid_lft forever preferred_lft forever
> > > > > > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > > > > > >    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > > > > > >    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > > > > > >       valid_lft forever preferred_lft forever
> > > > > > > >    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > > > > > >       valid_lft forever preferred_lft forever
> > > > > > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > > > > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > > > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > > > > > >    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > > > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > > > > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > > > > >    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > > > > > >       valid_lft forever preferred_lft forever
> > > > > > > >    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > > > > > >       valid_lft forever preferred_lft forever
> > > > > > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > > > > > >    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > > > > > >
> > > > > > > > And I attached two screenshots showing the boot stage for the overcloud nodes.
> > > > > > > >
> > > > > > > > Is there a way to login to the overcloud nodes to see their IP configuration?
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > Esra ÇELİK
> > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > >
> > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Sasha Chuzhoy"
> > > > > > > > > To: "Esra Celik"
> > > > > > > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > > > > > > Sent: Thursday, October 15, 2015 16:58:41
> > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > >
> > > > > > > > > Just my 2 cents.
> > > > > > > > > Did you make sure that all the registered nodes are configured to boot off the right NIC first?
> > > > > > > > > Can you watch the console and see what happens on the problematic nodes upon boot?
> > > > > > > > >
> > > > > > > > > Best regards,
> > > > > > > > > Sasha Chuzhoy.
> > > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: "Esra Celik"
> > > > > > > > > > To: "Marius Cornea"
> > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > Sent: Thursday, October 15, 2015 4:40:46 AM
> > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > >
> > > > > > > > > > Sorry for the late reply.
> > > > > > > > > >
> > > > > > > > > > ironic node-show results are below. I have my nodes powered on after introspection bulk start, and I get the following warning:
> > > > > > > > > > Introspection didn't finish for nodes 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > > > >
> > > > > > > > > > Doesn't seem to be the same issue as https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > > > > > > > >
> > > > > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power on | available | False |
> > > > > > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on | available | False |
> > > > > > > > > >
> > > > > > > > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > > > > | target_power_state | None |
> > > > > > > > > > | extra | {} |
> > > > > > > > > > | last_error | None |
2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None | > > > > > > > > > | provision_state | available | > > > > > > > | clean_step | {} | > > > > > > > > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > > > > > > > > > | console_enabled | False | > > > > > > > | target_provision_state | None | > > > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > > > > > > > | > > > > > | maintenance | False | > > > > > > > | inspection_started_at | None | > > > > | > > > > > > > > > | inspection_finished_at | None | > > > > > > > | power_state | > > power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | > > reservation | None | > > > > > > > | properties | {u'memory_mb': u'8192', > > u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | u'10', > > | > > > > > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > > > > > > | > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > > | > > > > > > | u'192.168.0.18', | > > > > > > > | | u'ipmi_username': u'root', > > u'deploy_kernel': > > > > > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > > > > > | | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > > > > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | created_at | > > 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | > > {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > | > > instance_info | {} | > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic > > node-show > > > > > > > 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > | Property | Value | > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > | target_power_state | None | > > > > > > > | extra | {} | > > > > > > > > > > | > > > > > > > > | last_error | None | > > > > > > > | updated_at | > > 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None | > > > > > > > > > | provision_state | available | > > > > > > > | clean_step | {} | > > > > > > > > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > > > > > > > > > | console_enabled | False | > > > > > > > | target_provision_state | None | > > > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > > > > > > > | > > > > > | maintenance | False | > > > > > > > | inspection_started_at | None | > > > > | > > > > > > > > > | inspection_finished_at | None | > > > > > > > | power_state | > > power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | > > reservation | None | > > > > > > > | properties | {u'memory_mb': u'8192', > > u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | > > u'100', > > | > > > > > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} | > > > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > > > > > > | > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > > | > > > > > > | u'192.168.0.19', | > > > > > > > | | u'ipmi_username': u'root', > > u'deploy_kernel': > > > > > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > > > > > 
| | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > > > > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | created_at | > > 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | > > {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > | > > instance_info | {} | > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > And below I added my history for the stack user. I > > don't think I > > > > > > > am > > > > > > > doing > > > > > > > something > > other than > > > > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty > > > > > > > > > doc > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 1 vi instackenv.json > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2 sudo yum -y install epel-release > > > > > > > 3 sudo curl -o > > /etc/yum.repos.d/delorean.repo > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo > > > > > > > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > > > > > > > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' > > > > > > > > > > > > > > > > /etc/yum.repos.d/delorean-current.repo > > > > > > > 6 sudo /bin/bash -c > > "cat > > > > > > > <>/etc/yum.repos.d/delorean-current.repo > > > > > > > > > > > > > > > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules > > > > > > > > > EOF" > > > > > > > 7 sudo curl -o > > /etc/yum.repos.d/delorean-deps.repo > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > 8 sudo yum -y install yum-plugin-priorities > > > > > > > 9 sudo yum > > install > > -y python-tripleoclient > > > > > > > 10 cp > > /usr/share/instack-undercloud/undercloud.conf.sample > > > > > > > > > ~/undercloud.conf > > > > > > > 11 vi undercloud.conf > > > > > > > 12 > > export DIB_INSTALLTYPE_puppet_modules=source > > > > > > > 13 openstack > > undercloud install > > > > > > > 14 source stackrc > > > > > > > 15 export > > NODE_DIST=centos7 > > > > > > > 16 export > > DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > > > > > > > > > /etc/yum.repos.d/delorean-deps.repo" > > > > > > > 17 export > > DIB_INSTALLTYPE_puppet_modules=source > > > > > > > 18 openstack overcloud > > image build --all > > > > > > > 19 ls > > > > > > > 20 openstack overcloud > > image upload > > > > > > > 21 openstack baremetal import --json > > instackenv.json > > > > > > > 22 openstack baremetal configure boot > > > > > > > > > 23 ironic node-list > > > > > > > 24 openstack baremetal > > > > > introspection > > bulk start > > > > > > > 25 ironic node-list > > > > > > > 26 ironic > > node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > > > > > 27 ironic > > node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > 28 history > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 
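On the question above about logging in to the overcloud nodes: one possible approach, untested here, is to set a known root password in the overcloud image before uploading it and then log in on the server consoles. This is only a sketch; it assumes libguestfs-tools is available on the undercloud, the default overcloud-full.qcow2 image name, and an example password:

virt-customize -a overcloud-full.qcow2 --root-password password:ChangeMe123
# re-upload so Glance serves the modified image; previously uploaded
# images may need to be removed from Glance first
openstack overcloud image upload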
> > > Thanks
> > >
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > >
> > > ----- Original Message -----
> > > > From: "Marius Cornea"
> > > > To: "Esra Celik"
> > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > Sent: Wednesday, October 14, 2015 7:40:07 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Can you do ironic node-show for your ironic nodes and post the results?
> > > > Also check the following suggestion if you're experiencing the same issue:
> > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Marius Cornea"
> > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Well, in the early stage of the introspection I can see the Client IP of
> > > > > the nodes (screenshot attached). But then I see continuous
> > > > > ironic-python-agent errors (screenshot-2 attached). The errors repeat
> > > > > after a timeout, and the nodes are not powered off.
> > > > >
> > > > > Seems like I am stuck in the introspection stage..
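When introspection appears stuck like this, ironic-inspector can be queried directly and its logs watched on the undercloud. A sketch, assuming python-ironic-inspector-client is installed, using one of the node UUIDs from the listing above:

openstack baremetal introspection status 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
# inspector itself plus the dnsmasq instance it runs for the inspection iprange
sudo journalctl -fl -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq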
> > > > > I can use the ipmitool command to successfully power the nodes on/off:
> > > > >
> > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P power status
> > > > > Chassis Power is on
> > > > >
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > Chassis Power is on
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power off
> > > > > Chassis Power Control: Down/Off
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > Chassis Power is off
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power on
> > > > > Chassis Power Control: Up/On
> > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > > Chassis Power is on
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Marius Cornea"
> > > > > > To: "Esra Celik"
> > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > Sent: Wednesday, October 14, 2015 2:59:30 PM
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Esra Celik"
> > > > > > > To: "Marius Cornea"
> > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Well, today I started with re-installing the OS, and nothing seems wrong
> > > > > > > with the undercloud installation. Then;
> > > > > > >
> > > > > > > I see an error during the image build:
> > > > > > >
> > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all
> > > > > > > ...
> > > > > > > a lot of log
> > > > > > > ...
> > > > > > > ++ cat /etc/dib_dracut_drivers
> > > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net virtio_blk target_core_mod iscsi_target_mod target_core_iblock target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> > > > > > > cat: write error: Broken pipe
> > > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > > > > > + chmod o+r /tmp/kernel
> > > > > > > + trap EXIT
> > > > > > > + target_tag=99-build-dracut-ramdisk
> > > > > > > + date +%s.%N
> > > > > > > + output '99-build-dracut-ramdisk completed'
> > > > > > > ...
> > > > > > > a lot of log
> > > > > > > ...
> > > > > >
> > > > > > You can ignore that afaik; if you end up having all the required images it
> > > > > > should be ok.
> > > > > >
> > > > > > > Then, during the introspection stage I see ironic-python-agent errors on
> > > > > > > the nodes (screenshot attached) and the following warnings:
> > > > > >
> > > > > > That looks odd. Is it showing up in the early stage of the introspection?
> > > > > > At some point it should receive an address by DHCP and the "Network is
> > > > > > unreachable" error should disappear. Does the introspection complete and
> > > > > > the nodes are turned off?
> > > > > >
> > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from group "pxe" is deprecated. Use option "http_url" from group "deploy".
> > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from group "pxe" is deprecated. Use option "http_root" from group "deploy".
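If the ironic-python-agent ramdisk keeps reporting "Network is unreachable", it can also help to confirm on the undercloud that the nodes' DHCP and TFTP requests are reaching the provisioning bridge at all. A sketch with standard tools, run as root on the undercloud:

# DHCP (ports 67/68) and TFTP (port 69) traffic on the provisioning network
sudo tcpdump -i br-ctlplane -n 'port 67 or port 68 or port 69'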
> > > > > > > Before deployment, ironic node-list:
> > > > > >
> > > > > > This is odd too, as I'm expecting the nodes to be powered off before
> > > > > > running deployment.
> > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > >
> > > > > > > During deployment I get the following errors:
> > > > > > >
> > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while running command.
> > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command.
> > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI call failed: power status..
> > > > > >
> > > > > > This looks like an ipmi error; can you try to manually run commands using
> > > > > > the ipmitool and see if you get any success? It's also worth filing a bug
> > > > > > with details such as the ipmitool version, server model, and drac
> > > > > > firmware version.
> > > > > >
> > > > > > > Thanks a lot
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > > From: "Marius Cornea"
> > > > > > > > To: "Esra Celik"
> > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > Sent: Tuesday, October 13, 2015 9:16:14 PM
> > > > > > > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > >
> > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Esra Celik"
> > > > > > > > > To: "Marius Cornea"
> > > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > >
> > > > > > > > > During deployment they are powering on and deploying the images. I see
> > > > > > > > > a lot of connection error messages about ironic-python-agent but ignore
> > > > > > > > > them as mentioned here
> > > > > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > > > > > > >
> > > > > > > > That was referring to the introspection stage. From what I can tell you
> > > > > > > > are experiencing issues during deployment, as it fails to provision the
> > > > > > > > nova instances; can you check if during that stage the nodes get powered
> > > > > > > > on?
> > > > > > > >
> > > > > > > > Make sure that before overcloud deploy the ironic nodes are available for
> > > > > > > > provisioning (ironic node-list and check the provisioning state column).
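One detail worth checking when manual ipmitool calls succeed but ironic's fail: ironic passes the password via a temporary file (-f, as in the /tmp/tmpSCKHIv invocation logged above) rather than -P. A sketch to reproduce that exact form; the password value is taken from the instackenv.json quoted later in this thread:

# write the password with no trailing newline, as ironic does
echo -n calvin > /tmp/ipmi-pw
ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/ipmi-pw power status
rm -f /tmp/ipmi-pw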
> > > > > > > > Also check that you didn't miss any step in the docs in regards to
> > > > > > > > kernel and ramdisk assignment, introspection, and flavor creation (so it
> > > > > > > > matches the nodes' resources):
> > > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> > > > > > > >
> > > > > > > > > In the instackenv.json file I do not need to add the undercloud node, or do I?
> > > > > > > >
> > > > > > > > No, the nodes' details should be enough.
> > > > > > > >
> > > > > > > > > And which log files should I watch during deployment?
> > > > > > > >
> > > > > > > > You can check the openstack-ironic-conductor logs (journalctl -fl -u
> > > > > > > > openstack-ironic-conductor.service) and the logs in /var/log/nova.
> > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > > Esra
> > > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: "Marius Cornea"
> > > > > > > > > > To: "Esra Celik"
> > > > > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > > > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > >
> > > > > > > > > > ----- Original Message -----
> > > > > > > > > > > From: "Esra Celik"
> > > > > > > > > > > To: "Ignacio Bravo"
> > > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > Actually I re-installed the OS for the Undercloud before deploying.
> > > > > > > > > > > However, I did not re-install the OS on the Compute and Controller
> > > > > > > > > > > nodes.. I will reinstall the basic OS for them too, and retry..
> > > > > > > > > >
> > > > > > > > > > You don't need to reinstall the OS on the controller and compute; they
> > > > > > > > > > will get the image served by the undercloud. I'd recommend that during
> > > > > > > > > > deployment you watch the servers' console and make sure they get
> > > > > > > > > > powered on, PXE boot, and actually get the image deployed.
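Alongside watching the consoles as suggested above, the same transitions can be followed from the undercloud in a second terminal. A sketch using the CLIs already used in this thread (the provisioning state should move from available to active as the nodes deploy):

watch -n 10 'ironic node-list; nova list; heat resource-list overcloud | grep -v COMPLETE'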
> > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > > > Esra ÇELİK
> > > > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > > > >
> > > > > > > > > > > ----- Original Message -----
> > > > > > > > > > > > From: "Ignacio Bravo"
> > > > > > > > > > > > To: "Esra Celik"
> > > > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > > > Sent: Tuesday, October 13, 2015 4:36:06 PM
> > > > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > > >
> > > > > > > > > > > > Esra,
> > > > > > > > > > > >
> > > > > > > > > > > > I encountered the same problem after deleting the stack and
> > > > > > > > > > > > re-deploying.
> > > > > > > > > > > >
> > > > > > > > > > > > It turns out that 'heat stack-delete overcloud' does remove the nodes
> > > > > > > > > > > > from 'nova list', and one would assume that the baremetal servers are
> > > > > > > > > > > > now ready to be used for the next stack, but when redeploying I get
> > > > > > > > > > > > the same message of not enough hosts available.
> > > > > > > > > > > >
> > > > > > > > > > > > You can look into the nova logs and it mentions something about 'node
> > > > > > > > > > > > xxx is already associated with UUID yyyy' and 'I tried 3 times and I'm
> > > > > > > > > > > > giving up'. The issue is that the UUID yyyy belonged to a prior
> > > > > > > > > > > > unsuccessful deployment.
> > > > > > > > > > > >
> > > > > > > > > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > > > > > > > > >
> > > > > > > > > > > > IB
> > > > > > > > > > > >
> > > > > > > > > > > > __
> > > > > > > > > > > > Ignacio Bravo
> > > > > > > > > > > > LTG Federal, Inc
> > > > > > > > > > > > www.ltgfederal.com
> > > > > > > > > > > > Office: (703) 951-7760
> > > > > > > > > > > >
> > > > > > > > > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi all,
> > > > > > > > > > > > >
> > > > > > > > > > > > > OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > > > >
> > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > > > > > > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > > > > > > > > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > > > > > > > > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status
> > > > > > > > > > > > > ERROR due to "Message: No valid host was found. There are not enough
> > > > > > > > > > > > > hosts available., Code: 500"
> > > > > > > > > > > > > Heat Stack create failed.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Here are some logs:
> > > > > > > > > > > > >
> > > > > > > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > > > > > > > > > >
> > > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+
> > > > > > > > > > > > > | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name |
> > > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+
> > > > > > > > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |
> > > > > > > > > > > > > | Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |
> > > > > > > > > > > > > | 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs |
> > > > > > > > > > > > > | 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |
> > > > > > > > > > > > > | Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > > > > > > > > > | NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server | CREATE_FAILED | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef |
> > > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+
> > > > > > > > > > > > >
> > > > > > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
> > > > > > > > > > > > > | Property | Value |
> > > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
> > > > > > > > > > > > > | attributes | { |
> > > > > > > > > > > > > | | "attributes": null, |
> > > > > > > > > > > > > | | "refs": null |
> > > > > > > > > > > > > | | } |
> > > > > > > > > > > > > | creation_time | 2015-10-13T10:20:36 |
> > > > > > > > > > > > > | description | |
> > > > > > > > > > > > > | links | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > > > > > > > > > > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > > > > > > > > > > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > > > > > > > > > > | logical_resource_id | Compute |
> > > > > > > > > > > > > | physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 |
> > > > > > > > > > > > > | required_by | ComputeAllNodesDeployment |
> > > > > > > > > > > > > | | ComputeNodesPostDeployment |
> > > > > > > > > > > > > | | ComputeCephDeployment |
> > > > > > > > > > > > > | | ComputeAllNodesValidationDeployment |
> > > > > > > > > > > > > | | AllNodesExtraConfig |
> > > > > > > > > > > > > | | allNodesConfig |
> > > > > > > > > > > > > | resource_name | Compute |
> > > > > > > > > > > > > | resource_status | CREATE_FAILED |
> > > > > > > > > > > > > | resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" |
> > > > > > > > > > > > > | resource_type | OS::Heat::ResourceGroup |
> > > > > > > > > > > > > | updated_time | 2015-10-13T10:20:36 |
> > > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
> > > > > > > > > > > > >
> > > > > > > > > > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed.
> > > > > > > > > > > > >
> > > > > > > > > > > > > {
> > > > > > > > > > > > >   "nodes": [
> > > > > > > > > > > > >     {
> > > > > > > > > > > > >       "pm_type":"pxe_ipmitool",
> > > > > > > > > > > > >       "mac":[ "08:9E:01:58:CC:A1" ],
> > > > > > > > > > > > >       "cpu":"4",
> > > > > > > > > > > > >       "memory":"8192",
> > > > > > > > > > > > >       "disk":"10",
> > > > > > > > > > > > >       "arch":"x86_64",
> > > > > > > > > > > > >       "pm_user":"root",
> > > > > > > > > > > > >       "pm_password":"calvin",
> > > > > > > > > > > > >       "pm_addr":"192.168.0.18"
> > > > > > > > > > > > >     },
> > > > > > > > > > > > >     {
> > > > > > > > > > > > >       "pm_type":"pxe_ipmitool",
> > > > > > > > > > > > >       "mac":[ "08:9E:01:58:D0:3D" ],
> > > > > > > > > > > > >       "cpu":"4",
> > > > > > > > > > > > >       "memory":"8192",
> > > > > > > > > > > > >       "disk":"100",
> > > > > > > > > > > > >       "arch":"x86_64",
> > > > > > > > > > > > >       "pm_user":"root",
> > > > > > > > > > > > >       "pm_password":"calvin",
> > > > > > > > > > > > >       "pm_addr":"192.168.0.19"
> > > > > > > > > > > > >     }
> > > > > > > > > > > > >   ]
> > > > > > > > > > > > > }
> > > > > > > > > > > > >
> > > > > > > > > > > > > Any ideas? Thanks in advance
> > > > > > > > > > > > >
> > > > > > > > > > > > > Esra ÇELİK
> > > > > > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > > > > > celik.esra at tubitak.gov.tr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: screenshot3.png
Type: image/png
Size: 111630 bytes
Desc: not available
URL: 
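For the 'node xxx is already associated with UUID yyyy' situation Ignacio describes above, a commonly suggested workaround at the time was to clear the stale instance association on the ironic node instead of reinstalling the OS. A sketch, where <node-uuid> is a placeholder for the affected node:

heat stack-delete overcloud
# after the stack is gone, look for nodes that still carry an Instance UUID
ironic node-list
# clear the stale association left over from the failed deployment
ironic node-update <node-uuid> remove instance_uuid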
-------------- next part --------------
A non-text attachment was scrubbed...
Name: screenshot2.png
Type: image/png
Size: 95963 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: screenshot1.png
Type: image/png
Size: 66940 bytes
Desc: not available
URL: 

From tshefi at redhat.com Tue Oct 20 07:15:21 2015
From: tshefi at redhat.com (Tzach Shefi)
Date: Tue, 20 Oct 2015 10:15:21 +0300
Subject: [Rdo-list] Liberty Packstack fails to install Cinder and Ceilometer missing python packages.
In-Reply-To: 
References: 
Message-ID: 

Hey guys,

Alan, the CentOS 7.1 came from theforeman (internal tlv). How can I check
whether kickstart disabled these packages? I don't have admin access to
foreman, but I can ask the ops people to check this out if need be. BTW,
another colleague ran into the same missing packages; I guess he used the
same CentOS from foreman, which would explain the same results.

David, repo list attached below:

# yum repolist -v
Not loading "rhnplugin" plugin, as it is disabled
Loading "langpacks" plugin
Loading "priorities" plugin
Loading "product-id" plugin
Loading "subscription-manager" plugin
Adding en_US.UTF-8 to language list
Adding to language list
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Config time: 0.162
Yum version: 3.4.3
Setting up Package Sacks
--> python-cliff-1.15.0-1.el7.noarch from delorean-common-testing excluded (priority)
--> python-hardware-0.16-2.el7.noarch from delorean-common-testing excluded (priority)
--> python-osprofiler-doc-0.3.0-1.el7.noarch from delorean-common-testing excluded (priority)
--> python-stevedore-1.8.0-1.el7.noarch from delorean-common-testing excluded (priority)
--> python-osprofiler-0.3.0-1.el7.noarch from delorean-common-testing excluded (priority)
--> python-pysaml2-3.0.0-1.el7.noarch from delorean-common-testing excluded (priority)
--> python-hardware-doc-0.16-2.el7.noarch from delorean-common-testing excluded (priority)
--> python2-hacking-0.10.2-2.el7.noarch from delorean-common-testing excluded (priority)
--> python-unicodecsv-0.14.1-1.el7.noarch from delorean-common-testing excluded (priority)
--> python-cachetools-1.0.3-2.el7.noarch from delorean-common-testing excluded (priority)
--> 1:openstack-neutron-brocade-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ceilometer-polling-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-cells-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python2-oslo-i18n-doc-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-manilaclient-doc-1.4.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-puppet-modules-7.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-zaqarclient-0.2.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-keystoneclient-1.7.2-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-swiftclient-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-ironic-python-agent-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-glance-store-0.9.1-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-api-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python2-oslo-i18n-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
-->
python-openstackclient-1.7.1-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-automaton-0.7.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-neutron-common-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> instack-0.0.8-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-middleware-doc-2.8.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-nova-serialproxy-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-nova-conductor-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-db-doc-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-ceilometer-ipmi-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-neutron-sriov-nic-agent-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-tripleo-doc-0.0.6-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-neutron-fwaas-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-service-0.9.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-nova-cert-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-ironic-inspector-2.2.1-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-neutron-tests-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-ceilometerclient-1.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-keystone-8.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-neutron-opencontrail-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-nova-doc-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-heat-common-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-tripleo-puppet-elements-0.0.2-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-dashboard-8.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-troveclient-1.3.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-heat-engine-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-rootwrap-2.3.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-manila-share-1.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-ironic-api-4.2.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-ceilometerclient-doc-1.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-glance-11.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-trove-taskmanager-4.0.0-0.1.0rc2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-ceilometermiddleware-0.3.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-pycadf-1.1.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> tripleo-common-0.0.1-2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-vmware-doc-1.21.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-nova-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-neutron-vpnaas-7.0.0-0.3.0rc2.el7.noarch from 
delorean-liberty-testing excluded (priority) --> 1:openstack-manila-doc-1.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-aodh-notifier-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-concurrency-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-neutron-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-nova-spicehtml5proxy-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> instack-undercloud-2.1.3-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-neutron-midonet-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-tripleo-0.0.6-1.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-packstack-2015.2-0.1.dev1654.gcbbf46e.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-ceilometer-alarm-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-swift-container-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-ironic-inspector-doc-2.2.1-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python2-castellan-0.2.1-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-designateclient-1.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-cinder-7.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-aodh-listener-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-keystoneauth1-doc-1.1.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-taskflow-1.21.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-middleware-2.8.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-glanceclient-doc-1.1.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-dashboard-theme-8.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-zaqar-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-ceilometer-notification-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-glance-doc-11.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-sahara-doc-3.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-keystonemiddleware-2.3.1-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-django-horizon-8.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-keystoneclient-doc-1.7.2-1.el7.noarch from delorean-liberty-testing excluded (priority) --> openstack-aodh-common-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-manila-1.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-neutron-metering-agent-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-oslo-policy-doc-0.11.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:openstack-nova-scheduler-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-trove-4.0.0-0.1.0rc2.el7.noarch from delorean-liberty-testing excluded (priority) --> 1:python-neutron-lbaas-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority) --> python-keystonemiddleware-doc-2.3.1-1.el7.noarch from 
delorean-liberty-testing excluded (priority)
--> 1:openstack-heat-api-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-glanceclient-1.1.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-aodh-expirer-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-log-doc-1.10.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-vpnaas-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-neutronclient-3.1.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-bigswitch-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-log-1.10.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-aodh-evaluator-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-cache-0.7.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ceilometer-common-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-aodh-api-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-swift-object-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-policy-0.11.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-trove-api-4.0.0-0.1.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-cinder-doc-7.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-cinder-7.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-keystone-8.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-neutron-fwaas-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python2-oslotest-1.11.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python2-os-client-config-1.7.4-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> dib-utils-0.0.9-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-tripleoclient-0.0.11-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-cisco-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-neutron-fwaas-tests-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-neutron-lbaas-tests-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-embrane-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-novaclient-2.30.1-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-packstack-puppet-2015.2-0.1.dev1654.gcbbf46e.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-mellanox-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-nuage-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-objectstore-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-lbaas-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-sahara-api-3.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-trove-4.0.0-0.1.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-swift-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-versionedobjects-0.10.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-oneconvergence-nvsd-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-versionedobjects-doc-0.10.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-sahara-common-3.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-ovsvapp-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-packstack-doc-2015.2-0.1.dev1654.gcbbf46e.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-concurrency-doc-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-network-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-openvswitch-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-novaclient-doc-2.30.1-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-trove-conductor-4.0.0-0.1.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-trove-guestagent-4.0.0-0.1.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-taskflow-doc-1.21.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-trove-common-4.0.0-0.1.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ceilometer-api-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-ceilometer-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-keystone-doc-8.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-common-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-swift-proxy-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ironic-common-4.2.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python2-futurist-0.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-manila-1.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-neutron-vpnaas-tests-7.0.0-0.3.0rc2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-glance-11.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ceilometer-central-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-linuxbridge-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-console-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-swiftclient-doc-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-ml2-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-cinderclient-1.4.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> diskimage-builder-1.1.3-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-messaging-doc-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-django-horizon-doc-8.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-heat-api-cfn-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 2:python2-oslo-config-doc-2.4.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ceilometer-compute-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-saharaclient-0.11.1-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-swift-doc-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-ofagent-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-tripleo-image-elements-0.9.7-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-manilaclient-1.4.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python2-mox3-0.10.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ceilometer-collector-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-cinderclient-doc-1.4.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-aodh-compat-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-sahara-engine-3.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-ironic-conductor-4.2.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-novncproxy-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-heat-templates-0-0.1.20151019.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-nova-compute-12.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-heatclient-doc-0.8.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-sahara-3.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 2:python2-oslo-config-2.4.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-openstackclient-doc-1.7.1-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-cache-doc-0.7.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-tripleo-heat-templates-0.8.7-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-rpc-server-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python2-os-client-config-doc-1.7.4-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-heat-api-cloudwatch-5.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-ironicclient-0.8.1-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-heatclient-0.8.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:openstack-neutron-dev-server-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> 1:python-neutron-7.0.0-2.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-vmware-1.21.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-messaging-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-aodh-1.0.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> openstack-swift-account-2.5.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> python-oslo-db-2.6.0-1.el7.noarch from delorean-liberty-testing excluded (priority)
--> fontawesome-fonts-web-4.1.0-1.el7.noarch from rhel-optional excluded (priority)
--> libnetfilter_queue-devel-1.0.2-1.el7.i686 from rhel-optional excluded (priority)
--> libnetfilter_queue-devel-1.0.2-1.el7.x86_64 from rhel-optional excluded (priority)
--> pyOpenSSL-doc-0.13.1-3.el7.noarch from rhel-optional excluded (priority)
--> pyparsing-doc-1.5.6-9.el7.noarch from rhel-optional excluded (priority)
--> pytest-2.3.5-4.el7.noarch from rhel-optional excluded (priority)
--> python-dtopt-0.1-13.el7.noarch from rhel-optional excluded (priority)
--> python-nose-docs-1.3.0-2.el7.noarch from rhel-optional excluded (priority)
--> python-py-1.4.14-4.el7.noarch from rhel-optional excluded (priority)
--> babel-0.9.6-8.el7.noarch from rhel-server excluded (priority)
--> fontawesome-fonts-4.1.0-1.el7.noarch from rhel-server excluded (priority)
--> libnetfilter_queue-1.0.2-1.el7.i686 from rhel-server excluded (priority)
--> libnetfilter_queue-1.0.2-1.el7.x86_64 from rhel-server excluded (priority)
--> pyOpenSSL-0.13.1-3.el7.x86_64 from rhel-server excluded (priority)
--> pyparsing-1.5.6-9.el7.noarch from rhel-server excluded (priority)
--> python-babel-0.9.6-8.el7.noarch from rhel-server excluded (priority)
--> python-dns-1.11.1-2.20140901git9329daf.el7.noarch from rhel-server excluded (priority)
--> python-netaddr-0.7.5-7.el7.noarch from rhel-server excluded (priority)
--> python-nose-1.3.0-2.el7.noarch from rhel-server excluded (priority)
--> python-requests-1.1.0-8.el7.noarch from rhel-server excluded (priority)
--> python-six-1.3.0-4.el7.noarch from rhel-server excluded (priority)
--> python-tempita-0.5.1-6.el7.noarch from rhel-server excluded (priority)
--> python-urllib3-1.5-8.el7.noarch from rhel-server excluded (priority)
224 packages excluded due to repository priority protections
pkgsack time: 0.420

Repo-id      : delorean
Repo-name    : delorean-python-tripleoclient-199a35f696208911021ed589d82fced0117d6292
Repo-revision: 1444982084
Repo-updated : Fri Oct 16 10:55:02 2015
Repo-pkgs    : 274
Repo-size    : 68 M
Repo-baseurl : http://trunk.rdoproject.org/centos7-liberty/19/9a/199a35f696208911021ed589d82fced0117d6292_0b1ce934
Repo-expire  : 21,600 second(s) (last: Tue Oct 20 04:25:34 2015)
Repo-excluded: 111
Repo-filename: /etc/yum.repos.d/delorean.repo

Repo-id      : delorean-common-testing/x86_64
Repo-name    : delorean-common-testing
Repo-revision: 1444822873
Repo-tags    : binary-x86_64
Repo-updated : Wed Oct 14 14:41:15 2015
Repo-pkgs    : 489
Repo-size    : 299 M
Repo-baseurl : http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/
Repo-expire  : 21,600 second(s) (last: Tue Oct 20 04:25:35 2015)
Repo-excluded: 10
Repo-filename: /etc/yum.repos.d/delorean-deps.repo

Repo-id      : delorean-liberty-testing/x86_64
Repo-name    : delorean-liberty-testing
Repo-revision: 1445301663
Repo-tags    : binary-x86_64
Repo-updated : Tue Oct 20 03:41:05 2015
Repo-pkgs    : 34
Repo-size    : 9.9 M
Repo-baseurl : http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/
Repo-expire  : 21,600 second(s) (last: Tue Oct 20 04:25:35 2015)
Repo-excluded: 191
Repo-filename: /etc/yum.repos.d/delorean-deps.repo

The internal repos below are also included; full URLs have been removed from
this email:

Repo-id      : qe-tlv
...
Repo-filename: /etc/yum.repos.d/qe-tlv.repo

Repo-id      : rhel-optional
...
Repo-filename: /etc/yum.repos.d/rhel-optional.repo

Repo-id      : rhel-server
...
Repo-filename: /etc/yum.repos.d/rhel-server.repo

repolist: 9,340

Thanks
Tzach

On Mon, Oct 19, 2015 at 6:29 PM, Alan Pevec wrote:

> > Figured I'd try packstack-ing a Liberty on centos7.1
>
> How did you install centos 7.1?
>
> > Packstack failed to install Cinder due to missing: python-cheetah
> > After manually installing Python-cheetah, Cinder installed.
> > Also missing python-werkzeug for Ceilometer.
>
> To be clear, both deps are correctly expressed as Requires: in the cinder
> and ceilometer .specs.
> Those two packages are in the extras repo, which is enabled out of the box
> in the default CentOS install. I guess the kickstart you're using disables
> it?
>
> Cheers,
> Alan
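A quick way to verify Alan's point on an affected machine -- a minimal
sketch, assuming the stock CentOS repo id "extras" and that yum-utils is
installed so that yum-config-manager is available:

$ yum repolist enabled | grep extras
$ yum -q info python-cheetah python-werkzeug
# if the kickstart disabled the repo, re-enable it:
$ sudo yum-config-manager --enable extras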
--
*Tzach Shefi*
Quality Engineer, Redhat OSP
+972-54-4701080
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mcornea at redhat.com  Tue Oct 20 10:02:51 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Tue, 20 Oct 2015 06:02:51 -0400 (EDT)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid
 host was found"
In-Reply-To: <104309364.6608280.1445319101175.JavaMail.zimbra@tubitak.gov.tr>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
 <1405259710.6257739.1445250871899.JavaMail.zimbra@tubitak.gov.tr>
 <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com>
 <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr>
 <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>
 <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr>
 <1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com>
 <104309364.6608280.1445319101175.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <1238139899.44937284.1445335371251.JavaMail.zimbra@redhat.com>

Hi,

From what I can tell from the screenshots, DHCP fails for both of the nics
after loading the inspector image, so the nodes have no IP address, hence
the "Network is unreachable" message. Can you see any DHCP messages (output
of dhclient) on the console? You could try leaving the nodes connected
*only* to the provisioning network and rerun introspection.

Thanks,
Marius
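A minimal sketch of what re-running introspection after re-cabling could
look like, using only commands that appear elsewhere in this thread:

$ source stackrc
$ ironic node-list
$ openstack baremetal introspection bulk start
# in a second terminal, follow the inspector while the nodes PXE boot:
$ sudo journalctl -fl -u openstack-ironic-inspector.service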
----- Original Message -----
> From: "Esra Celik"
> To: "Sasha Chuzhoy"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Tuesday, October 20, 2015 7:31:41 AM
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Ok, I ran ironic node-set-provision-state [UUID] provide for each node and
> retried the deployment. I attached the screenshots.
>
> [stack at undercloud ~]$ ironic node-list
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power off   | available          | False       |
> | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power off   | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
> [stack at undercloud ~]$ nova flavor-list
> +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
> +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> | b9428c86-5696-4d68-a0e0-77faf4e7f627 | baremetal | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
> +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
>
> [stack at undercloud ~]$ openstack overcloud deploy --templates
> Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> Stack failed with status: resources.Controller: resources[0]:
> ResourceInError: resources.Controller: Went to status ERROR due to "Message:
> No valid host was found. There are not enough hosts available., Code: 500"
> Heat Stack update failed.
>
> [stack at undercloud ~]$ sudo systemctl | grep ironic
> openstack-ironic-api.service loaded active running OpenStack Ironic API service
> openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
>
> "journalctl -fl -u openstack-ironic-conductor.service" gives no warning or
> error.
>
> Regards
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> ----- Original Message -----
> > From: "Sasha Chuzhoy"
> > To: "Esra Celik"
> > Cc: "Marius Cornea", rdo-list at redhat.com
> > Sent: Monday, October 19, 2015 7:13:24 PM
> > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Could you please
> > 1. run 'ironic node-set-provision-state [UUID] provide' for each node,
> >    where UUID is replaced with the actual UUID of the node (ironic node-list)
> > 2. retry the deployment
> > Thanks.
> >
> > Best regards,
> > Sasha Chuzhoy.
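Step 1 can be scripted over all registered nodes -- a sketch against the
Liberty-era ironic CLI used in this thread; nodes already in the target
state may simply reject the transition:

# the UUID is the second whitespace-separated field of each data row
$ for uuid in $(ironic node-list | awk '/False/ {print $2}'); do ironic node-set-provision-state "$uuid" provide; done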
> > ----- Original Message -----
> > > From: "Esra Celik"
> > > To: "Sasha Chuzhoy"
> > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > Sent: Monday, October 19, 2015 11:39:46 AM
> > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Hi Sasha
> > >
> > > This is my instackenv.json. The MAC addresses are the em2 interfaces' MAC
> > > addresses of the nodes:
> > >
> > > {
> > >   "nodes": [
> > >     {
> > >       "pm_type":"pxe_ipmitool",
> > >       "mac":[ "08:9E:01:58:CC:A1" ],
> > >       "cpu":"4",
> > >       "memory":"8192",
> > >       "disk":"10",
> > >       "arch":"x86_64",
> > >       "pm_user":"root",
> > >       "pm_password":"",
> > >       "pm_addr":"192.168.0.18"
> > >     },
> > >     {
> > >       "pm_type":"pxe_ipmitool",
> > >       "mac":[ "08:9E:01:58:D0:3D" ],
> > >       "cpu":"4",
> > >       "memory":"8192",
> > >       "disk":"100",
> > >       "arch":"x86_64",
> > >       "pm_user":"root",
> > >       "pm_password":"",
> > >       "pm_addr":"192.168.0.19"
> > >     }
> > >   ]
> > > }
> > >
> > > This is my undercloud.conf file:
> > > image_path = .
> > > local_ip = 192.0.2.1/24
> > > local_interface = em2
> > > masquerade_network = 192.0.2.0/24
> > > dhcp_start = 192.0.2.5
> > > dhcp_end = 192.0.2.24
> > > network_cidr = 192.0.2.0/24
> > > network_gateway = 192.0.2.1
> > > inspection_interface = br-ctlplane
> > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > inspection_runbench = false
> > > undercloud_debug = true
> > > enable_tuskar = false
> > > enable_tempest = false
> > >
> > > I have previously sent the screenshots of the consoles during the
> > > introspection stage. Now I am attaching them again. I cannot log in to
> > > the consoles because the introspection stage did not complete
> > > successfully and I don't know the IP addresses (nova list is empty).
> > > (I don't know if I can log in with the IP addresses I previously set
> > > myself; I am not able to reach the nodes now, from home.)
> > >
> > > I ran the flavor-create command after the introspection stage. But
> > > introspection did not complete successfully; I just ran the deploy
> > > command to see if nova list fills during deployment.
> > >
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > >
> > > ----- Sasha Chuzhoy wrote: -----
> > > > Esra,
> > > > Is it possible to check the console of the nodes being introspected
> > > > and/or deployed? I wonder if the instackenv.json file is accurate.
> > > > Also, what's the output from 'nova flavor-list'? Thanks.
> > > >
> > > > Best regards,
> > > > Sasha Chuzhoy.
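Since "No valid host" means the nova scheduler found no node matching the
flavor, it is also worth cross-checking the flavor against what is stored on
the nodes -- a sketch, with the UUID taken from ironic node-list:

$ nova flavor-list
$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | grep properties
# the flavor's Memory_MB/Disk/VCPUs must fit within the node's
# memory_mb/local_gb/cpus, otherwise the node is filtered out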
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Marius Cornea"
> > > > Cc: "Sasha Chuzhoy", rdo-list at redhat.com
> > > > Sent: Monday, October 19, 2015 9:51:51 AM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 nics.
> > > > The undercloud machine's ip config is as follows:
> > > >
> > > > [stack at undercloud ~]$ ip addr
> > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > >     inet 127.0.0.1/8 scope host lo
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 ::1/128 scope host
> > > >        valid_lft forever preferred_lft forever
> > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > >     link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > >     inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > >     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > >     link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > >     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > >     inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > >     link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > >
> > > > I am using em2 for pxe boot on the other machines, so I configured
> > > > instackenv.json to have em2's MAC address. For the overcloud nodes, em1
> > > > was configured to have a 10.1.34.x IP, but after the image deploy I am
> > > > not sure what happened to that nic.
> > > >
> > > > Thanks
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > >
> > > > ----- Original Message -----
> > > > From: "Marius Cornea"
> > > > To: "Esra Celik"
> > > > Cc: "Sasha Chuzhoy", rdo-list at redhat.com
> > > > Sent: Monday, October 19, 2015 3:36:58 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi,
> > > >
> > > > I believe the nodes were stuck in introspection, so they were not ready
> > > > for deployment, thus the "not enough hosts" message. Can you describe
> > > > the networking setup (how many nics the nodes have and to what networks
> > > > they're connected)?
> > > >
> > > > Thanks,
> > > > Marius
> > > >
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Sasha Chuzhoy"
> > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > Sent: Monday, October 19, 2015 12:34:32 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi again,
> > > >
> > > > "nova list" was empty after the introspection stage, which was not
> > > > completed successfully, so I could not ssh to the nodes. Is there
> > > > another way to obtain the IP addresses?
> > > >
> > > > [stack at undercloud ~]$ sudo systemctl | grep ironic
> > > > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > > > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > > > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> > > >
> > > > If I start deployment anyway I get 2 nodes in ERROR state:
> > > >
> > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > Stack failed with status: resources.Controller: resources[0]:
> > > > ResourceInError: resources.Controller: Went to status ERROR due to
> > > > "Message: No valid host was found. There are not enough hosts
> > > > available., Code: 500"
> > > >
> > > > [stack at undercloud ~]$ nova list
> > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > > | ID                                   | Name                    | Status | Task State | Power State | Networks |
> > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
> > > > | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
> > > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > >
> > > > Did the repositories update during the weekend? Should I restart the
> > > > overall Undercloud and Overcloud installation from the beginning?
> > > >
> > > > Thanks.
> > > >
> > > > Esra ÇELİK
> > > > Expert Researcher
> > > > Information Technologies Institute
> > > > TÜBİTAK BİLGEM
> > > > 41470 GEBZE - KOCAELİ
> > > > T +90 262 675 3140
> > > > F +90 262 646 3187
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > > ................................................................
> > > > Disclaimer
> > > >
> > > > ----- Original Message -----
> > > > From: "Sasha Chuzhoy"
> > > > To: "Esra Celik"
> > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > Sent: Friday, October 16, 2015 6:44:49 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi Esra,
> > > > If the undercloud nodes are UP - you can login with: ssh heat-admin@
> > > > You can see the IP of the nodes with: "nova list".
> > > >
> > > > BTW, what do you see if you run "sudo systemctl|grep ironic" on the
> > > > undercloud?
> > > >
> > > > Best regards,
> > > > Sasha Chuzhoy.
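Sasha's login hint, spelled out -- a sketch; the address is illustrative and
would come from the Networks column of nova list once the instances are up:

$ source stackrc
$ nova list
$ ssh heat-admin@192.0.2.10   # hypothetical ctlplane address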
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Sasha Chuzhoy"
> > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > Sent: Friday, October 16, 2015 1:40:16 AM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi Sasha,
> > > >
> > > > I have 3 nodes: 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute.
> > > >
> > > > This is my undercloud.conf file:
> > > > image_path = .
> > > > local_ip = 192.0.2.1/24
> > > > local_interface = em2
> > > > masquerade_network = 192.0.2.0/24
> > > > dhcp_start = 192.0.2.5
> > > > dhcp_end = 192.0.2.24
> > > > network_cidr = 192.0.2.0/24
> > > > network_gateway = 192.0.2.1
> > > > inspection_interface = br-ctlplane
> > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > inspection_runbench = false
> > > > undercloud_debug = true
> > > > enable_tuskar = false
> > > > enable_tempest = false
> > > >
> > > > IP configuration for the Undercloud is as follows:
> > > >
> > > > [stack at undercloud ~]$ ip addr
> > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > >     inet 127.0.0.1/8 scope host lo
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 ::1/128 scope host
> > > >        valid_lft forever preferred_lft forever
> > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > >     link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > >     inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > >     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > >     link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > >     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > >     inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > >     link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > >
> > > > And I attached two screenshots showing the boot stage for the overcloud
> > > > nodes.
> > > >
> > > > Is there a way to log in to the overcloud nodes to see their IP
> > > > configuration?
> > > >
> > > > Thanks
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > >
> > > > ----- Original Message -----
> > > > From: "Sasha Chuzhoy"
> > > > To: "Esra Celik"
> > > > Cc: "Marius Cornea", rdo-list at redhat.com
> > > > Sent: Thursday, October 15, 2015 4:58:41 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Just my 2 cents.
> > > > Did you make sure that all the registered nodes are configured to boot
> > > > off the right NIC first?
> > > > Can you watch the console and see what happens on the problematic nodes
> > > > upon boot?
> > > >
> > > > Best regards,
> > > > Sasha Chuzhoy.
And I get the > > > following warning > > > > > > > Introspection didn't finish for nodes > > > > > > > > > > > > > > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > > > > > > > > > > > > > > Doesn't seem to be the same issue with > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > | UUID | Name | Instance UUID | Power State | Provisioning > > > State > > > > > > > | | > > > > > > > | Maintenance | > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | > > > > > > > > > > | power > > > on | > > > > > > > | available > > > > > > > | | > > > > > > > | False | > > > > > > > > > > > > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power on > > > > > > > > | | > > > > > > > > > > | available > > > > > > > | | > > > > > > > | False | > > > > > > > > > > > > > | > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic > > > node-show > > > > > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > | Property | Value | > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > | target_power_state | None | > > > > > > > | extra | {} | > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > | last_error | None | > > > > > > > | updated_at | > > > 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None | > > > > > > > > > > > > > | provision_state | available | > > > > > > > | clean_step | {} > > > > > > > > | | > > > > > > > > > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > > > > > > > > > > > > > > > > | > > > > | console_enabled | False | > > > > > > > | target_provision_state | None > > > | | > > > > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > | maintenance | False | > > > > > > > | inspection_started_at | None > > > > > | | > > > > > | > > > > > > > > > > | inspection_finished_at | None | > > > > > > > | power_state > > > > > > > > > | | > > > power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | > > > reservation | None | > > > > > > > | properties | {u'memory_mb': u'8192', > > > u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | > > > u'10', > > > | > > > > > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} > > > | > > > > > > > | | | > > > > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > > | > > > > > > > | > > > > > > > | u'192.168.0.18', | > > > > > > > | | u'ipmi_username': u'root', > > > u'deploy_kernel': > > > > > > > | | 
u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > > > > > > > > > > | | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > > | | | > > > > > > > | | > > > > > > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | created_at > > > > > > > > | | | > > > 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | > > > {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > | > > > instance_info | {} | > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic > > > node-show > > > > > > > 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > | Property | Value | > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > | target_power_state | None | > > > > > > > | extra | {} | > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > | last_error | None | > > > > > > > | updated_at | > > > 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None | > > > > > > > > > > > > > | provision_state | available | > > > > > > > | clean_step | {} > > > > > > > > | | > > > > > > > > > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > > > > > > > > > > > > > > > > | > > > > | console_enabled | False | > > > > > > > | target_provision_state | None > > > | | > > > > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > | maintenance | False | > > > > > > > | inspection_started_at | None > > > > > | | > > > > > | > > > > > > > > > > | inspection_finished_at | None | > > > > > > > | power_state > > > > > > > > > | | > > > power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | > > > reservation | None | > > > > > > > | properties | {u'memory_mb': u'8192', > > > u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | > > > u'100', > > > | > > > > > > > | | u'cpus': u'4', u'capabilities': u'boot_option:local'} > > > | > > > > > > > | | | > > > > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > > | > > > > > > > | > > > > > > > | u'192.168.0.19', | > > > > > > > | | u'ipmi_username': u'root', > > > u'deploy_kernel': > > > > > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > > > > > > > > > > | | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- | > > > > | | | > > > > > > > | | > > > > > > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | created_at > > > > > > > > | | | > > > 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | > > > {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > | > > > instance_info | {} | > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > And below I added my history for the stack user. 
> > > > > > > > > > > > > > > I > > > don't think I > > > > > > > am > > > > > > > doing > > > > > > > > > > something > > > other than > > > > > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty > > > > > > > > > > > > > doc > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 1 vi instackenv.json > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2 sudo yum -y install epel-release > > > > > > > 3 sudo curl -o > > > /etc/yum.repos.d/delorean.repo > > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo > > > > > > > > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo > > > > > > > > > > > > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > > > > > > > > > > > > > > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' > > > > > > > > > > > > > > > > > > > > > > > > > > /etc/yum.repos.d/delorean-current.repo > > > > > > > 6 sudo /bin/bash -c > > > "cat > > > > > > > <>/etc/yum.repos.d/delorean-current.repo > > > > > > > > > > > > > > > > > > > > > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules > > > > > > > > > > EOF" > > > > > > > 7 sudo curl -o > > > /etc/yum.repos.d/delorean-deps.repo > > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > > > > > > 8 sudo yum -y install yum-plugin-priorities > > > > > > > 9 sudo yum > > > install > > > -y python-tripleoclient > > > > > > > 10 cp > > > /usr/share/instack-undercloud/undercloud.conf.sample > > > > > > > > > > ~/undercloud.conf > > > > > > > 11 vi undercloud.conf > > > > > > > 12 > > > export DIB_INSTALLTYPE_puppet_modules=source > > > > > > > 13 openstack > > > undercloud install > > > > > > > 14 source stackrc > > > > > > > 15 > > > export > > > NODE_DIST=centos7 > > > > > > > 16 export > > > DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > > > > > > > > > > /etc/yum.repos.d/delorean-deps.repo" > > > > > > > 17 export > > > DIB_INSTALLTYPE_puppet_modules=source > > > > > > > 18 openstack > > > overcloud > > > image build --all > > > > > > > 19 ls > > > > > > > 20 openstack > > > overcloud > > > image upload > > > > > > > 21 openstack baremetal import --json > > > instackenv.json > > > > > > > 22 openstack baremetal configure boot > > > > > > > > > > > > > 23 ironic node-list > > > > > > > 24 openstack baremetal > > > > > > introspection > > > bulk start > > > > > > > 25 ironic node-list > > > > > > > 26 ironic > > > node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > > > > > 27 ironic > > > node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > 28 history > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Esra ÇEL?K > > > > > > > TÜB?TAK B?LGEM > > > > > > > > > > > > > > > > > > > > > > www.bilgem.tubitak.gov.tr > > > > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > > > > > > > > > 
> > > > ----- Original Message -----
> > > > From: "Marius Cornea"
> > > > To: "Esra Celik"
> > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > Sent: Wednesday, October 14, 2015 7:40:07 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Can you do ironic node-show for your ironic nodes and post the results?
> > > > Also check the following suggestion if you're experiencing the same
> > > > issue:
> > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > >
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Marius Cornea"
> > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Well, in the early stage of the introspection I can see the Client IP
> > > > of the nodes (screenshot attached). But then I see continuous
> > > > ironic-python-agent errors (screenshot-2 attached). The errors repeat
> > > > after the timeout, and the nodes are not powered off.
> > > >
> > > > It seems like I am stuck in the introspection stage.
> > > >
> > > > I can use the ipmitool command to successfully power the nodes on/off:
> > > >
> > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P power status
> > > > Chassis Power is on
> > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > Chassis Power is on
> > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power off
> > > > Chassis Power Control: Down/Off
> > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > Chassis Power is off
> > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power on
> > > > Chassis Power Control: Up/On
> > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P chassis power status
> > > > Chassis Power is on
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > >
> > > > ----- Original Message -----
> > > > From: "Marius Cornea"
> > > > To: "Esra Celik"
> > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > Sent: Wednesday, October 14, 2015 2:59:30 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Marius Cornea"
> > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Well, today I started with re-installing the OS, and nothing seems
> > > > > wrong with the undercloud installation; then:
> > > > >
> > > > > I see an error during the image build:
> > > > >
> > > > > [stack at undercloud ~]$ openstack overcloud image build --all
> > > > > ...
> > > > > a lot of log
> > > > > ...
> > > > > ++ cat /etc/dib_dracut_drivers
> > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk
> > > > > ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline
> > > > > 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include
> > > > > /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64
> > > > > --add-drivers ' virtio virtio_net virtio_blk target_core_mod
> > > > > iscsi_target_mod target_core_iblock target_core_file
> > > > > target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> > > > > cat: write error: Broken pipe
> > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > > > + chmod o+r /tmp/kernel
> > > > > + trap EXIT
> > > > > + target_tag=99-build-dracut-ramdisk
> > > > > + date +%s.%N
> > > > > + output '99-build-dracut-ramdisk completed'
> > > > > ...
> > > > > a lot of log
> > > > > ...
> > > >
> > > > You can ignore that afaik; if you end up having all the required images
> > > > it should be ok.
> > > >
> > > > > Then, during the introspection stage I see ironic-python-agent errors
> > > > > on the nodes (screenshot attached) and the following warnings:
> > > >
> > > > That looks odd. Is it showing up in the early stage of the
> > > > introspection? At some point it should receive an address by DHCP and
> > > > the "Network is unreachable" error should disappear. Does the
> > > > introspection complete and the nodes are turned off?
> > > >
> > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14
> > > > > 10:30:12.119 619 WARNING oslo_config.cfg
> > > > > [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from
> > > > > group "pxe" is deprecated. Use option "http_url" from group "deploy".
> > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14
> > > > > 10:30:12.119 619 WARNING oslo_config.cfg
> > > > > [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from
> > > > > group "pxe" is deprecated. Use option "http_root" from group "deploy".
> > > > >
> > > > > Before deployment, ironic node-list:
> > > >
> > > > This is odd too, as I'm expecting the nodes to be powered off before
> > > > running deployment.
> > > >
> > > > > [stack at undercloud ~]$ ironic node-list
> > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > >
> > > > > During deployment I get the following errors:
> > > > >
> > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14
> > > > > 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error
> > > > > while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR
> > > > > -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status" for node
> > > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while
> > > > > running command.
> > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14
> > > > > 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI
> > > > > power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553
> > > > > with error: Unexpected error while running command.
> > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14
> > > > > 11:29:01.740 619 WARNING ironic.conductor.manager [-] During
> > > > > sync_power_state, could not get power state for node
> > > > > b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI
> > > > > call failed: power status..
> > > >
> > > > This looks like an ipmi error; can you try to manually run commands
> > > > using the ipmitool and see if you get any success? It's also worth
> > > > filing a bug with details such as the ipmitool version, server model,
> > > > drac firmware version.
> > > >
> > > > > Thanks a lot
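Following up on that suggestion, the manual check plus the details worth
attaching to a bug report -- a sketch; -v can be repeated for more protocol
detail:

$ rpm -q ipmitool
$ ipmitool -V
$ ipmitool -v -I lanplus -H 192.168.0.19 -U root -P '<password>' power status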
> > > > ----- Original Message -----
> > > > From: "Marius Cornea"
> > > > To: "Esra Celik"
> > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > Sent: Tuesday, October 13, 2015 9:16:14 PM
> > > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Marius Cornea"
> > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > During deployment they are powering on and deploying the images. I
> > > > > see a lot of connection error messages about ironic-python-agent but
> > > > > ignore them as mentioned here
> > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > > >
> > > > That was referring to the introspection stage. From what I can tell,
> > > > you are experiencing issues during deployment, as it fails to provision
> > > > the nova instances; can you check if during that stage the nodes get
> > > > powered on?
> > > >
> > > > Make sure that before overcloud deploy the ironic nodes are available
> > > > for provisioning (ironic node-list and check the provisioning state
> > > > column). Also check that you didn't miss any step in the docs in
> > > > regards to kernel and ramdisk assignment, introspection, flavor
> > > > creation (so it matches the nodes' resources):
> > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> > > >
> > > > > In the instackenv.json file I do not need to add the undercloud node,
> > > > > or do I?
> > > >
> > > > No, the nodes' details should be enough.
> > > >
> > > > > And which log files should I watch during deployment?
> > > > You can check the openstack-ironic-conductor logs (journalctl -fl -u
> > > > openstack-ironic-conductor.service) and the logs in /var/log/nova.
> > > >
> > > > > Thanks
> > > > > Esra
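Marius's two log sources, watched side by side during a deploy -- a sketch;
run each in its own terminal on the undercloud (the exact file names under
/var/log/nova may vary):

$ sudo journalctl -fl -u openstack-ironic-conductor.service
$ sudo tail -f /var/log/nova/*.log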
2015 16:36:06> > > > Konu: > > > Re: [Rdo-list] > > > > > > > > > > > OverCloud > > > > > > > > > > > > > > deploy > > > > > > > > > > > > > > fails > > > > > > > > > > > with error "No valid > > > > > > > > > > > > > > host > > > was> found"> > Esra,> > I > > > > > > > > > > > encountered > > > > > > > > > > > > > > > > > the > > > > > > > > > > > same > > > > > > > > > > > problem after > > > deleting the stack and re-deploying.> > It > > > > > > > > > > > turns > > > > > > > > > > > > > > > > > > > > out > > > > > > > > > > > that > > > > > > > > > > > > > > > > > > > > > > 'heat > > > stack-delete overcloud’ does remove the nodes > > > > > > > > > > > > > > from> > > > > > > > > > > > ‘nova list’ and one would assume > > > that the > > > > > > > > > > > baremetal > > > > > > > > > > > servers > > > > > > > > > > > > > > > > > > > > are now ready to> be used for the next stack, but when > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > redeploying, > > > > > > > > > > > I > > > > > > > > > > > > > > > > > > > > > > > > > get > > > > > > > > > > > the same message of> not enough hosts > > > available.> > > > > You > > > > > > > > > > > can > > > > > > > > > > > look > > > > > > > > > > > > > > > > > > > > > > into > > > > > > > > > > > the > > > > > > > > > > > nova logs and it > > > mentions something about ‘node xxx > > > > > > > > > > > is> > > > > > > > > > > > > > > > > > > > > already > > > > > > > > > > > associated with UUID > > > > > > > > > yyyy’ > > > and ‘I tried 3 > > > > > > > > > > > times > > > > > > > > > > > > > > and > > > > > > > > > > > > > > > > > I’m > > > > > > > > > > > giving up’.> > > > > > > > > > > > > > The > > > issue is that the UUID yyyy > > > > > > > > > > > belonged > > > > > > > > > > > > > > > > > > > > to > > > > > > > > > > > a > > > > > > > > > > > prior > > > > > > > > > > > > > > > > > > > > > > > > unsuccessful deployment.> > I’m now redeploying the > > > > > > > > > > > > > > > > > > > basic > > > > > > > > > > > OS > > > > > > > > > > > to > > > > > > > > > > > > > > > > > > > > > start from scratch again.> > IB> > __> Ignacio Bravo> LTG > > > > > > > > > > > > > > > > > > > > > > Federal, > > > > > > > > > > > Inc> > > > > > > > > > > > > > > www.ltgfederal.com> Office: (703) 951-7760> > > > On Oct > > > > > > > > > > > > > > > > > > > > 13, > > > > > > > > > > > 2015, > > > > > > > > > > > at > > > > > > > > > > > > > > > > > > > > > > 9:25 > > > > > > > > > > > AM, Esra Celik < celik.esra at tubitak.gov.tr > > > > > > > > > wrote:> > > > > > > > > > > > > > Hi > > > > > > > > > > > all,> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > OverCloud deploy fails with error "No > > > valid host was > > > > > > > > > > > found"> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > --templates> > > > > > > > > > > > Deploying > > > > > > > > > > > > > > > > > > > > > templates in the directory> > > > > > > > > > > > > > > /usr/share/openstack-tripleo-heat-templates> > > > > > > > > > > > Stack > > > failed with status: Resource CREATE failed: > > > > > > > > > > > > > > resources.Compute:> > > > > > > > > > > > ResourceInError: > > > resources[0].resources.NovaCompute: Went > > > > > > > > > > > to > > > > > > > > > > > > > > > > > status > > > > > > > > > > > ERROR> > > > > > > > > > > > due > > > > > > > > > to > > > "Message: No valid host was found. 
There are not > > > > > > > > > > > > > > enough > > > > > > > > > > > hosts> > > > > > > > > > > > available., > > > Code: > > > 500"> Heat Stack create failed.> > Here > > > > > > > > > > > are > > > > > > > > > > > > > > > > > some > > > > > > > > > > > logs:> > > > > > > > > > > > > > > > > > > > > > Every > > > 2.0s: heat resource-list -n 5 overcloud | grep -v > > > > > > > > > > > > > > > COMPLETE > > > > > > > > > > > > Tue > > > > > > > > > > > > Oct > > > > > > > > > > > > > > > > > > > > 13> 16:18:17 2015> > > > > > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > > > > > > | resource_name | physical_resource_id | > > > > > > > > > > > > > > | resource_type > > > | > > > > > > > > > > > | resource_status > > > > > > > > > > > |> | > > > updated_time | stack_name |> > > > > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > | > > > > > > > > > > > > > | OS::Heat::ResourceGroup > > > > > > > > > > > |> | > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | > > > > > > > > > > > > > > > > > |> | Controller > > > > > > > > > > > |> | | > > > > > > > > > > > > > > 116c57ff-debb-4c12-92e1-e4163b67dc17 | > > > > > > > > > > > > > > OS::Heat::ResourceGroup> > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |> | 0 > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > OS::TripleO::Controller > > > > > > > > > > > |> > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > CREATE_IN_PROGRESS | 2015-10-13T10:20:52 |> | > > > > > > > > > > > > > > > > > > > > > > > > > > > > overcloud-Controller-45bbw24xxhxs |> | 0 | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | > > > > > > > > > > > > > > OS::TripleO::Compute > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > > > > > > > CREATE_FAILED | 2015-10-13T10:20:54 | > > > > > > > > > > > > > > > > > > > > > > > > > overcloud-Compute-vqk632ysg64r > > > > > > > > > > > |> > > > > > > > > > > > > > > > > > | > > > > > > > > > > > Controller | > > > > | > > > > > > > > > > > 2e9ac712-0566-49b5-958f-c3e151bb24d7 > > > | > > > > > > > > > > > OS::Nova::Server > > > > > > > > > > > |> > > > > > > > | > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > CREATE_IN_PROGRESS | > > > > > > > > > | > > > > > > > > > > > 2015-10-13T10:20:54 > > > |> | > > > > > > > > > > > overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk > > > |> | > > > > > > > > > > > NovaCompute > > > > > > > > > > > | > > > > > > > > |> | > > > > > > > > > > > > > > > |> | > > > > > > > > > > > > > > > > > > > 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server > > > > > > > > > > > > > > > > > > > > > |> > > > > > > > > > 
> > | > > > > > > > > > > > CREATE_FAILED > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > | 2015-10-13T10:20:56 |> | > > > > > > > > > > > | > > > overcloud-Compute-vqk632ysg64r-0-32nalzkofmef > > > > > > > > > > > |> > > > > > > > > > > > > > > > > > > > > > > > +-------------------------------------------+-----------------------------------------------+---------------------------------------------------+--------------------+---------------------+---------------------------------------------------------------------------------+> > > > > > > > > > > > > > > > > [stack at undercloud ~]$ heat resource-show > > > > > > > > > > > > > > > > overcloud > > > > > > > > > > > > > > > > Compute> > > > > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > > > > > | Property | Value |> > > > > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > > > > > | attributes | { |> | | "attributes": null, |> | | > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > | > > > > > > > > > > > > > | "refs": > > > > > > > > > > > | null > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > | |> > > > > > > > > > > > | | > > > > > > > > > > > | | > > > > > > > > > > > > > > | } > > > > > > > > > > > |> | creation_time | 2015-10-13T10:20:36 |> > > > > > | | > > > description > > > > > > > > > > > |> | | > > > > > > > > > > > |> | |> > > > > > > > > > > > > > > > > > > > > |> | | > > > > > > > > > > > |> | links > > > > > > > > > > > > > > > > > > > > |> | | > > > > > > > > > > > |> | > > > > > |> | |> > > > > > > > > > > > | > > > > > > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute> > > > > > > > > > > > > > > | (self) |> | | > > > > > > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70> > > > > > > > > > > > > > > | | (stack) |> | | > > > > > > > > > > > > > > http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393> > > > > > > > > > > > > > > | | (nested) |> | logical_resource_id | Compute |> > > > > > > > > > > > > > > | | | > > > > > > > > > > > > > > | | > > > > > > > > > > > > > > | | physical_resource_id > > > > > > > > > > > | > > > e33b6b1e-8740-4ded-ad7f-720617a03393 |> | required_by | > > > > > > > > > > > > > > > > > ComputeAllNodesDeployment |> | | > > > > > > > > > > > > > > ComputeNodesPostDeployment > > > > > > > > > > > |> > > > > > > > > > > > > > > | > > > > > > > > > > > > > > | > > > > > > > > > > > ComputeCephDeployment |> | > > > > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > ComputeAllNodesValidationDeployment > > > > > > > > > > > > > > > > > > > > > > > > > > > |> > > > > > > > > > > > | > > > > > > > > > > > | > > > > > > > > > > > > > > AllNodesExtraConfig |> | | allNodesConfig |> | > > > > > > > > > > > > > > resource_name 
> > > > > > > > > > > | > > > > > > > > > > > Compute > > > > > > > > > > > > > > > > > |> > > > > > > > > > > > | resource_status | CREATE_FAILED > > > > > > > > > > |> > > > > > > > > > > > | |> > > > | > > > > > > > > > > > | resource_status_reason > > > > > > > > > > > | > > > | > > > > > > > > > > > | | > > > | > > > > > > > > > > > | > > > > > > > > > > > > > > resources.Compute: ResourceInError:> | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > resources[0].resources.NovaCompute: > > > > > > > > > > > Went to > > > > > status > > > ERROR due to "Message:> | No valid host > > > > > > > > > > > was > > > > > > > > > > > > > > > > > found. > > > > > > > > > > > There > > > > > > > > > > > are > > > > > > > > > not > > > enough hosts available., Code: 500"> | |> | > > > > > > > > > > > > > > resource_type > > > > > > > > > > > | > > > > > > > > > > > > > > OS::Heat::ResourceGroup |> | updated_time | > > > > > > > > > > > > > > 2015-10-13T10:20:36 > > > > > > > > > > > |> > > > > > > > > > > > > > > +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+> > > > > > > > > > > > > > > > > > This is my instackenv.json for 1 compute and > > > > > > > > > > > > > > > > > 1 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > control > > > > > > > > > > > > > > node > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > to > > > > > > > > > > > > > > be > > > > > > > > > > > > > > deployed.> > {> "nodes": [> {> "pm_type":"pxe_ipmitool",> > > > > > > > > > > > > > > > > > "mac":[> > > > > > > > > > > > "08:9E:01:58:CC:A1"> ],> "cpu":"4",> > > > "memory":"8192",> > > > > > > > > > > > "disk":"10",> > > > > > > > > > > > > > > > > > "arch":"x86_64",> "pm_user":"root",> > > > > > > > > > > > > > > "pm_password":"calvin",> > > > > > > > > > > > "pm_addr":"192.168.0.18"> > > > },> > > > {> > > > > > > > > > > > "pm_type":"pxe_ipmitool",> > > > > > > > > > > > > > > "mac":[> > > > > > > > > > > > "08:9E:01:58:D0:3D"> ],> "cpu":"4",> > > > "memory":"8192",> > > > > > > > > > > > "disk":"100",> > > > > > > > > > > > > > > > > > > > > "arch":"x86_64",> "pm_user":"root",> > > > > > > > > > > > > > > "pm_password":"calvin",> > > > > > > > > > > > "pm_addr":"192.168.0.19"> > > > }> > > > ]> }> > > Any ideas? 
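(For reference, the checks Marius describes above map onto a handful of
commands on the undercloud. A minimal sketch, assuming the stack user with
stackrc sourced; the specific nova log file name is illustrative, the rest
is taken from the thread:

  $ source ~/stackrc
  $ ironic node-list
        # every node should show "available" in the Provision State column
        # before running overcloud deploy
  $ journalctl -fl -u openstack-ironic-conductor.service
        # follow the conductor while the deploy runs
  $ tail -f /var/log/nova/nova-scheduler.log
        # the scheduler log records why no host matched the request

Ignacio's stale-association symptom can usually be spotted with
"ironic node-show <node-uuid>": a node left over from a failed deploy still
carries an instance_uuid even though the nova instance is gone.)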
From tshefi at redhat.com  Tue Oct 20 10:18:21 2015
From: tshefi at redhat.com (Tzach Shefi)
Date: Tue, 20 Oct 2015 13:18:21 +0300
Subject: [Rdo-list] Liberty Packstack fails to install Cinder and
 Ceilometer missing python packages.
In-Reply-To:
References:
Message-ID:

I've tested again, this time adding the "packages" class under the foreman
deployment. Looks OK: packstack completed the installation without any
missing packages.

On Tue, Oct 20, 2015 at 10:15 AM, Tzach Shefi wrote:

> Hey guys,
>
> Alan,
> Centos 7.1 came from theforeman (internal tlv)
> How can I check if the kickstart disabled these packages?
> Don't have admin access to foreman if needed, but can ask ops people to
> check this out if need be.
> BTW another colleague ran into the same missing packages; I guess he used
> the same centos from foreman, which would explain the same results.
>
> David,
> Repo list attached below:
>
> # yum repolist -v
> Not loading "rhnplugin" plugin, as it is disabled
> Loading "langpacks" plugin
> Loading "priorities" plugin
> Loading "product-id" plugin
> Loading "subscription-manager" plugin
> Adding en_US.UTF-8 to language list
> Adding to language list
> Updating Subscription Management repositories.
> Unable to read consumer identity
> This system is not registered to Red Hat Subscription Management. You can
> use subscription-manager to register.
> Config time: 0.162 > Yum version: 3.4.3 > Setting up Package Sacks > --> python-cliff-1.15.0-1.el7.noarch from delorean-common-testing > excluded (priority) > --> python-hardware-0.16-2.el7.noarch from delorean-common-testing > excluded (priority) > --> python-osprofiler-doc-0.3.0-1.el7.noarch from delorean-common-testing > excluded (priority) > --> python-stevedore-1.8.0-1.el7.noarch from delorean-common-testing > excluded > (priority) > > --> python-osprofiler-0.3.0-1.el7.noarch from delorean-common-testing > excluded > (priority) > > --> python-pysaml2-3.0.0-1.el7.noarch from delorean-common-testing > excluded > (priority) > > --> python-hardware-doc-0.16-2.el7.noarch from delorean-common-testing > excluded > (priority) > > --> python2-hacking-0.10.2-2.el7.noarch from delorean-common-testing > excluded > (priority) > > --> python-unicodecsv-0.14.1-1.el7.noarch from delorean-common-testing > excluded > (priority) > > --> python-cachetools-1.0.3-2.el7.noarch from delorean-common-testing > excluded > (priority) > > --> 1:openstack-neutron-brocade-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-ceilometer-polling-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-cells-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python2-oslo-i18n-doc-2.6.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-manilaclient-doc-1.4.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-puppet-modules-7.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-zaqarclient-0.2.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:python-keystoneclient-1.7.2-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-swiftclient-2.6.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> openstack-ironic-python-agent-1.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-glance-store-0.9.1-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-nova-api-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python2-oslo-i18n-2.6.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-openstackclient-1.7.1-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-automaton-0.7.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-common-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> instack-0.0.8-1.el7.noarch from delorean-liberty-testing excluded > (priority) > --> python-oslo-middleware-doc-2.8.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-serialproxy-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-conductor-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-db-doc-2.6.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-ceilometer-ipmi-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-sriov-nic-agent-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-tripleo-doc-0.0.6-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-fwaas-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded 
(priority) > --> python-oslo-service-0.9.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-nova-cert-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-ironic-inspector-2.2.1-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-neutron-tests-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-ceilometerclient-1.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-keystone-8.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-opencontrail-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-doc-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-heat-common-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-tripleo-puppet-elements-0.0.2-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-dashboard-8.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-troveclient-1.3.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-heat-engine-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-rootwrap-2.3.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-manila-share-1.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-ironic-api-4.2.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-ceilometerclient-doc-1.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-glance-11.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-trove-taskmanager-4.0.0-0.1.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-ceilometermiddleware-0.3.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-pycadf-1.1.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> tripleo-common-0.0.1-2.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-oslo-vmware-doc-1.21.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-nova-12.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:python-neutron-vpnaas-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-manila-doc-1.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-aodh-notifier-1.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-concurrency-2.6.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-7.0.0-2.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-nova-spicehtml5proxy-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> instack-undercloud-2.1.3-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-midonet-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-tripleo-0.0.6-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> openstack-packstack-2015.2-0.1.dev1654.gcbbf46e.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-ceilometer-alarm-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded 
(priority) > --> openstack-swift-container-2.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-ironic-inspector-doc-2.2.1-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python2-castellan-0.2.1-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-designateclient-1.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-cinder-7.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> openstack-aodh-listener-1.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-keystoneauth1-doc-1.1.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-taskflow-1.21.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-oslo-middleware-2.8.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-glanceclient-doc-1.1.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-dashboard-theme-8.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-zaqar-1.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-ceilometer-notification-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-glance-doc-11.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-sahara-doc-3.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-keystonemiddleware-2.3.1-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-django-horizon-8.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-keystoneclient-doc-1.7.2-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-aodh-common-1.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-manila-1.0.0-2.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-metering-agent-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-policy-doc-0.11.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-scheduler-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-trove-4.0.0-0.1.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-neutron-lbaas-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-keystonemiddleware-doc-2.3.1-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-heat-api-5.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:python-glanceclient-1.1.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-aodh-expirer-1.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-log-doc-1.10.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-vpnaas-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-neutronclient-3.1.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-bigswitch-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-log-1.10.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> openstack-aodh-evaluator-1.0.0-1.el7.noarch from > delorean-liberty-testing excluded 
(priority) > --> python-oslo-cache-0.7.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-ceilometer-common-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-aodh-api-1.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> openstack-swift-object-2.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-policy-0.11.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-trove-api-4.0.0-0.1.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-cinder-doc-7.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-cinder-7.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-keystone-8.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-nova-12.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:python-neutron-fwaas-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python2-oslotest-1.11.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python2-os-client-config-1.7.4-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> dib-utils-0.0.9-1.el7.noarch from delorean-liberty-testing excluded > (priority) > --> python-tripleoclient-0.0.11-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-cisco-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-neutron-fwaas-tests-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-neutron-lbaas-tests-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-embrane-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-novaclient-2.30.1-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> openstack-packstack-puppet-2015.2-0.1.dev1654.gcbbf46e.el7.noarch > from delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-mellanox-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-nuage-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-objectstore-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-lbaas-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-sahara-api-3.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-trove-4.0.0-0.1.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-swift-2.5.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-oslo-versionedobjects-0.10.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-oneconvergence-nvsd-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-versionedobjects-doc-0.10.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-sahara-common-3.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-ovsvapp-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-packstack-doc-2015.2-0.1.dev1654.gcbbf46e.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 
python-oslo-concurrency-doc-2.6.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-network-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-openvswitch-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-novaclient-doc-2.30.1-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-trove-conductor-4.0.0-0.1.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-trove-guestagent-4.0.0-0.1.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-taskflow-doc-1.21.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-trove-common-4.0.0-0.1.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-ceilometer-api-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-ceilometer-5.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-keystone-doc-8.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-common-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-swift-proxy-2.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-ironic-common-4.2.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python2-futurist-0.5.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:python-manila-1.0.0-2.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:python-neutron-vpnaas-tests-7.0.0-0.3.0rc2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-glance-11.0.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-ceilometer-central-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-linuxbridge-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-console-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-swiftclient-doc-2.6.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-ml2-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-cinderclient-1.4.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> diskimage-builder-1.1.3-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-oslo-messaging-doc-2.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-django-horizon-doc-8.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-heat-api-cfn-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 2:python2-oslo-config-doc-2.4.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-ceilometer-compute-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-saharaclient-0.11.1-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> openstack-swift-doc-2.5.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-ofagent-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-tripleo-image-elements-0.9.7-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-manilaclient-1.4.0-1.el7.noarch from 
delorean-liberty-testing > excluded (priority) > --> python2-mox3-0.10.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-ceilometer-collector-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-cinderclient-doc-1.4.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-aodh-compat-1.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-sahara-engine-3.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-ironic-conductor-4.2.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-novncproxy-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-heat-templates-0-0.1.20151019.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-nova-compute-12.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-heatclient-doc-0.8.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-sahara-3.0.0-2.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 2:python2-oslo-config-2.4.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-openstackclient-doc-1.7.1-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-cache-doc-0.7.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> openstack-tripleo-heat-templates-0.8.7-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-neutron-rpc-server-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python2-os-client-config-doc-1.7.4-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:openstack-heat-api-cloudwatch-5.0.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-ironicclient-0.8.1-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-heatclient-0.8.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> 1:openstack-neutron-dev-server-7.0.0-2.el7.noarch from > delorean-liberty-testing excluded (priority) > --> 1:python-neutron-7.0.0-2.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-oslo-vmware-1.21.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> python-oslo-messaging-2.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-aodh-1.0.0-1.el7.noarch from delorean-liberty-testing excluded > (priority) > --> openstack-swift-account-2.5.0-1.el7.noarch from > delorean-liberty-testing excluded (priority) > --> python-oslo-db-2.6.0-1.el7.noarch from delorean-liberty-testing > excluded (priority) > --> fontawesome-fonts-web-4.1.0-1.el7.noarch from rhel-optional excluded > (priority) > --> libnetfilter_queue-devel-1.0.2-1.el7.i686 from rhel-optional excluded > (priority) > --> libnetfilter_queue-devel-1.0.2-1.el7.x86_64 from rhel-optional > excluded (priority) > --> pyOpenSSL-doc-0.13.1-3.el7.noarch from rhel-optional excluded > (priority) > --> pyparsing-doc-1.5.6-9.el7.noarch from rhel-optional excluded > (priority) > --> pytest-2.3.5-4.el7.noarch from rhel-optional excluded (priority) > --> python-dtopt-0.1-13.el7.noarch from rhel-optional excluded (priority) > --> python-nose-docs-1.3.0-2.el7.noarch from rhel-optional excluded > (priority) > --> python-py-1.4.14-4.el7.noarch from rhel-optional excluded (priority) > --> babel-0.9.6-8.el7.noarch from 
rhel-server excluded (priority)
> --> fontawesome-fonts-4.1.0-1.el7.noarch from rhel-server excluded (priority)
> --> libnetfilter_queue-1.0.2-1.el7.i686 from rhel-server excluded (priority)
> --> libnetfilter_queue-1.0.2-1.el7.x86_64 from rhel-server excluded (priority)
> --> pyOpenSSL-0.13.1-3.el7.x86_64 from rhel-server excluded (priority)
> --> pyparsing-1.5.6-9.el7.noarch from rhel-server excluded (priority)
> --> python-babel-0.9.6-8.el7.noarch from rhel-server excluded (priority)
> --> python-dns-1.11.1-2.20140901git9329daf.el7.noarch from rhel-server excluded (priority)
> --> python-netaddr-0.7.5-7.el7.noarch from rhel-server excluded (priority)
> --> python-nose-1.3.0-2.el7.noarch from rhel-server excluded (priority)
> --> python-requests-1.1.0-8.el7.noarch from rhel-server excluded (priority)
> --> python-six-1.3.0-4.el7.noarch from rhel-server excluded (priority)
> --> python-tempita-0.5.1-6.el7.noarch from rhel-server excluded (priority)
> --> python-urllib3-1.5-8.el7.noarch from rhel-server excluded (priority)
> 224 packages excluded due to repository priority protections
> pkgsack time: 0.420
> Repo-id      : delorean
> Repo-name    : delorean-python-tripleoclient-199a35f696208911021ed589d82fced0117d6292
> Repo-revision: 1444982084
> Repo-updated : Fri Oct 16 10:55:02 2015
> Repo-pkgs    : 274
> Repo-size    : 68 M
> Repo-baseurl : http://trunk.rdoproject.org/centos7-liberty/19/9a/199a35f696208911021ed589d82fced0117d6292_0b1ce934
> Repo-expire  : 21,600 second(s) (last: Tue Oct 20 04:25:34 2015)
> Repo-excluded: 111
> Repo-filename: /etc/yum.repos.d/delorean.repo
>
> Repo-id      : delorean-common-testing/x86_64
> Repo-name    : delorean-common-testing
> Repo-revision: 1444822873
> Repo-tags    : binary-x86_64
> Repo-updated : Wed Oct 14 14:41:15 2015
> Repo-pkgs    : 489
> Repo-size    : 299 M
> Repo-baseurl : http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/
> Repo-expire  : 21,600 second(s) (last: Tue Oct 20 04:25:35 2015)
> Repo-excluded: 10
> Repo-filename: /etc/yum.repos.d/delorean-deps.repo
>
> Repo-id      : delorean-liberty-testing/x86_64
> Repo-name    : delorean-liberty-testing
> Repo-revision: 1445301663
> Repo-tags    : binary-x86_64
> Repo-updated : Tue Oct 20 03:41:05 2015
> Repo-pkgs    : 34
> Repo-size    : 9.9 M
> Repo-baseurl : http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/
> Repo-expire  : 21,600 second(s) (last: Tue Oct 20 04:25:35 2015)
> Repo-excluded: 191
> Repo-filename: /etc/yum.repos.d/delorean-deps.repo
>
> The internal repos below are also included; full URLs removed from the
> email -> ...
>
> Repo-id      : qe-tlv
> ...
> Repo-filename: /etc/yum.repos.d/qe-tlv.repo
>
> Repo-id      : rhel-optional
> ...
> Repo-filename: /etc/yum.repos.d/rhel-optional.repo
>
> Repo-id      : rhel-server
> ...
> Repo-filename: /etc/yum.repos.d/rhel-server.repo
>
> repolist: 9,340
>
> Thanks
> Tzach
>
> On Mon, Oct 19, 2015 at 6:29 PM, Alan Pevec wrote:
>
>> > Figured I'd try packstack-ing a Liberty on centos7.1
>>
>> How did you install centos 7.1?
>>
>> > Packstack failed to install Cinder due to missing: python-cheetah
>> > After manually installing Python-cheetah, Cinder installed.
>> > Also missing python-werkzeug for Ceilometer.
>>
>> To be clear, both deps are correctly expressed as Requires: in the
>> cinder and ceilometer .specs.
>> Those two packages are in the extras repo, which is enabled out of the
>> box in the default centos install.
>> I guess the kickstart you're using disables it?
>>
>> Cheers,
>> Alan

--
Tzach Shefi
Quality Engineer, Redhat OSP
+972-54-4701080
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
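(Alan's hypothesis is straightforward to test on the affected host with
stock yum commands. A minimal sketch; the package names come from the
thread, everything else is standard yum usage:

  $ yum repolist enabled | grep -i extras
        # is the CentOS extras repo enabled at all?
  $ yum info python-cheetah python-werkzeug
        # which repo, if any, would these be pulled from?
  $ sudo yum -y install python-cheetah python-werkzeug
        # the manual workaround described earlier in the thread)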
From mustafa.celik at tubitak.gov.tr  Tue Oct 20 13:41:29 2015
From: mustafa.celik at tubitak.gov.tr (Mustafa ÇELİK (BİLGEM-BTE))
Date: Tue, 20 Oct 2015 16:41:29 +0300 (EEST)
Subject: [Rdo-list] Undercloud UI
In-Reply-To: <312239508.320257.1445347438841.JavaMail.zimbra at tubitak.gov.tr>
Message-ID: <1522468676.320587.1445348489875.JavaMail.zimbra at tubitak.gov.tr>

Hi Everyone,

Is there any UI for the undercloud installation, or any project in
progress? If we implement one, how can we contribute it?

Thanks,

Mustafa ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
mustafa.celik at tubitak.gov.tr
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sasha at redhat.com  Tue Oct 20 15:07:06 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Tue, 20 Oct 2015 11:07:06 -0400 (EDT)
Subject: [Rdo-list] Liberty Packstack fails to install Cinder and
 Ceilometer missing python packages.
In-Reply-To:
References:
Message-ID: <911429489.61219191.1445353626085.JavaMail.zimbra at redhat.com>

Hey,
You can look into /root/anaconda-ks.cfg, or alternatively I can help you
check for the missing packages on the foreman.
Thanks.

Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Tzach Shefi"
> To: "Alan Pevec" , dms at redhat.com
> Cc: "Rdo-list at redhat.com"
> Sent: Tuesday, October 20, 2015 3:15:21 AM
> Subject: Re: [Rdo-list] Liberty Packstack fails to install Cinder and
> Ceilometer missing python packages.
>
> [rest of quoted message snipped; it repeats the full "yum repolist -v"
> listing shown in Tzach's message above]
excluded (priority) > --> fontawesome-fonts-4.1.0-1.el7.noarch from rhel-server excluded (priority) > --> libnetfilter_queue-1.0.2-1.el7.i686 from rhel-server excluded (priority) > --> libnetfilter_queue-1.0.2-1.el7.x86_64 from rhel-server excluded > (priority) > --> pyOpenSSL-0.13.1-3.el7.x86_64 from rhel-server excluded (priority) > --> pyparsing-1.5.6-9.el7.noarch from rhel-server excluded (priority) > --> python-babel-0.9.6-8.el7.noarch from rhel-server excluded (priority) > --> python-dns-1.11.1-2.20140901git9329daf.el7.noarch from rhel-server > excluded (priority) > --> python-netaddr-0.7.5-7.el7.noarch from rhel-server excluded (priority) > --> python-nose-1.3.0-2.el7.noarch from rhel-server excluded (priority) > --> python-requests-1.1.0-8.el7.noarch from rhel-server excluded (priority) > --> python-six-1.3.0-4.el7.noarch from rhel-server excluded (priority) > --> python-tempita-0.5.1-6.el7.noarch from rhel-server excluded (priority) > --> python-urllib3-1.5-8.el7.noarch from rhel-server excluded (priority) > 224 packages excluded due to repository priority protections > pkgsack time: 0.420 > Repo-id : delorean > Repo-name : > delorean-python-tripleoclient-199a35f696208911021ed589d82fced0117d6292 > Repo-revision: 1444982084 > Repo-updated : Fri Oct 16 10:55:02 2015 > Repo-pkgs : 274 > Repo-size : 68 M > Repo-baseurl : > http://trunk.rdoproject.org/centos7-liberty/19/9a/199a35f696208911021ed589d82fced0117d6292_0b1ce934 > Repo-expire : 21,600 second(s) (last: Tue Oct 20 04:25:34 2015) > Repo-excluded: 111 > Repo-filename: /etc/yum.repos.d/delorean.repo > > Repo-id : delorean-common-testing/x86_64 > Repo-name : delorean-common-testing > Repo-revision: 1444822873 > Repo-tags : binary-x86_64 > Repo-updated : Wed Oct 14 14:41:15 2015 > Repo-pkgs : 489 > Repo-size : 299 M > Repo-baseurl : > http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/ > Repo-expire : 21,600 second(s) (last: Tue Oct 20 04:25:35 2015) > Repo-excluded: 10 > Repo-filename: /etc/yum.repos.d/delorean-deps.repo > > Repo-id : delorean-liberty-testing/x86_64 > Repo-name : delorean-liberty-testing > Repo-revision: 1445301663 > Repo-tags : binary-x86_64 > Repo-updated : Tue Oct 20 03:41:05 2015 > Repo-pkgs : 34 > Repo-size : 9.9 M > Repo-baseurl : > http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/ > Repo-expire : 21,600 second(s) (last: Tue Oct 20 04:25:35 2015) > Repo-excluded: 191 > Repo-filename: /etc/yum.repos.d/delorean-deps.repo > > > Below internal repos also include, removed full urls from email -> ... > > Repo-id : qe-tlv > ... > Repo-filename: /etc/yum.repos.d/qe-tlv.repo > > Repo-id : rhel-optional > ... > Repo-filename: /etc/yum.repos.d/rhel-optional.repo > > Repo-id : rhel-server > ... > Repo-filename: /etc/yum.repos.d/rhel-server.repo > > repolist: 9,340 > > > > Thanks > Tzach > > On Mon, Oct 19, 2015 at 6:29 PM, Alan Pevec < apevec at gmail.com > wrote: > > > > Figured I'd try packstack-ing a Liberty on centos7.1 > > How did you install centos 7.1? > > > Packstack failed to install Cinder due to missing: python-cheetah > > After manually installing Python-cheetah, Cinder installed. > > Also missing python-werkzeug for Ceilometer. > > To be clear, both deps are correctly expressed as Requires: in cinder > and ceilometer .specs. > Those two packages are in extras repo which is enabled out of the box > in the default centos install. > I guess kickstart you're using disables it? 
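A quick way to verify this on an affected system is to check whether extras is enabled and still carries the two dependencies; a minimal sketch, assuming yum-utils is installed so that yum-config-manager is available:

  $ yum repolist enabled | grep extras                  # is extras active at all?
  $ yum --disablerepo='*' --enablerepo=extras list python-cheetah python-werkzeug
  $ sudo yum-config-manager --enable extras             # re-enable it if the kickstart turned it off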
> >
> > Cheers,
> > Alan
>
> --
> Tzach Shefi
> Quality Engineer, Redhat OSP
> +972-54-4701080

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From sasha at redhat.com  Tue Oct 20 15:30:23 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Tue, 20 Oct 2015 11:30:23 -0400 (EDT)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <1238139899.44937284.1445335371251.JavaMail.zimbra@redhat.com>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1225535906.44268987.1445258218601.JavaMail.zimbra@redhat.com> <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr> <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com> <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr> <1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com> <104309364.6608280.1445319101175.JavaMail.zimbra@tubitak.gov.tr> <1238139899.44937284.1445335371251.JavaMail.zimbra@redhat.com>
Message-ID: <159935625.61244072.1445355023202.JavaMail.zimbra@redhat.com>

Hi Esra,
since the introspection fails continuously, in addition to the deployment, I am starting to wonder whether everything is connected right.
Could you please describe (and double-check) how your nodes are interconnected in this setup, i.e. which NICs are connected where, and whether there is any additional configuration on the switch ports.
Thanks.

Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Marius Cornea"
> To: "Esra Celik"
> Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> Sent: Tuesday, October 20, 2015 6:02:51 AM
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Hi,
>
> From what I can tell from the screenshots, DHCP fails for both of the NICs
> after loading the inspector image, so the nodes get no IP address, hence the
> "Network is unreachable" message. Can you see any DHCP messages (output of
> dhclient) on the console?
> You could try leaving the nodes connected *only* to the provisioning network
> and rerun introspection.
>
> Thanks,
> Marius
>
> ----- Original Message -----
> > From: "Esra Celik"
> > To: "Sasha Chuzhoy"
> > Cc: "Marius Cornea" , rdo-list at redhat.com
> > Sent: Tuesday, October 20, 2015 7:31:41 AM
> > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Ok, I ran ironic node-set-provision-state [UUID] provide for each node and
> > retried deployment. I attached the screenshots.
> >
> > [stack at undercloud ~]$ ironic node-list
> > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power off   | available          | False       |
> > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power off   | available          | False       |
> > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> >
> > [stack at undercloud ~]$ nova flavor-list
> > +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> > | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
> > +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> > | b9428c86-5696-4d68-a0e0-77faf4e7f627 | baremetal | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
> > +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> >
> > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > Stack failed with status: resources.Controller: resources[0]: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
> > Heat Stack update failed.
> >
> > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> >
> > "journalctl -fl -u openstack-ironic-conductor.service" gives no warnings or errors.
> >
> > Regards
> >
> > Esra ÇELİK
> > TÜBİTAK BİLGEM
> > www.bilgem.tubitak.gov.tr
> > celik.esra at tubitak.gov.tr
> >
> > ----- Original Message -----
> > > From: "Sasha Chuzhoy"
> > > To: "Esra Celik"
> > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > Sent: Monday, October 19, 2015 19:13:24
> > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Could you please
> > > 1. run 'ironic node-set-provision-state [UUID] provide' for each node, where UUID is replaced with the actual UUID of the node (ironic node-list).
> > > 2. retry the deployment
> > > Thanks.
> > >
> > > Best regards,
> > > Sasha Chuzhoy.
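Sasha's two steps can be scripted for all registered nodes at once; a minimal sketch, assuming the stackrc credentials are sourced and every node should go back to "available":

  source ~/stackrc
  for uuid in $(ironic node-list | awk '{print $2}' | grep -E '^[0-9a-f]{8}-'); do
      ironic node-set-provision-state "$uuid" provide   # move node to "available"
  done
  openstack overcloud deploy --templates                # then retry the deployment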
> > >
> > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Sasha Chuzhoy"
> > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > Sent: Monday, October 19, 2015 11:39:46 AM
> > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi Sasha
> > > >
> > > > This is my instackenv.json. The MAC addresses are the em2 interfaces' MAC addresses of the nodes.
> > > >
> > > > {
> > > >   "nodes": [
> > > >     {
> > > >       "pm_type":"pxe_ipmitool",
> > > >       "mac":[ "08:9E:01:58:CC:A1" ],
> > > >       "cpu":"4",
> > > >       "memory":"8192",
> > > >       "disk":"10",
> > > >       "arch":"x86_64",
> > > >       "pm_user":"root",
> > > >       "pm_password":"",
> > > >       "pm_addr":"192.168.0.18"
> > > >     },
> > > >     {
> > > >       "pm_type":"pxe_ipmitool",
> > > >       "mac":[ "08:9E:01:58:D0:3D" ],
> > > >       "cpu":"4",
> > > >       "memory":"8192",
> > > >       "disk":"100",
> > > >       "arch":"x86_64",
> > > >       "pm_user":"root",
> > > >       "pm_password":"",
> > > >       "pm_addr":"192.168.0.19"
> > > >     }
> > > >   ]
> > > > }
> > > >
> > > > This is my undercloud.conf file:
> > > > image_path = .
> > > > local_ip = 192.0.2.1/24
> > > > local_interface = em2
> > > > masquerade_network = 192.0.2.0/24
> > > > dhcp_start = 192.0.2.5
> > > > dhcp_end = 192.0.2.24
> > > > network_cidr = 192.0.2.0/24
> > > > network_gateway = 192.0.2.1
> > > > inspection_interface = br-ctlplane
> > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > inspection_runbench = false
> > > > undercloud_debug = true
> > > > enable_tuskar = false
> > > > enable_tempest = false
> > > >
> > > > I have previously sent the screenshots of the consoles during the introspection stage. Now I am attaching them again.
> > > > I cannot login to the consoles because the introspection stage did not complete successfully and I don't know the IP addresses. (nova list is empty)
> > > > (I don't know if I can login with the IP addresses that I previously set myself. I am not able to reach the nodes now, from home.)
> > > >
> > > > I ran the flavor-create command after the introspection stage. But introspection was not completed successfully, so I just ran the deploy command to see if nova list fills during deployment.
> > > >
> > > > Esra ÇELİK
> > > > TÜBİTAK BİLGEM
> > > > www.bilgem.tubitak.gov.tr
> > > > celik.esra at tubitak.gov.tr
> > > >
> > > > ----- Sasha Chuzhoy wrote: -----
> > > > > Esra,
> > > > > Is it possible to check the console of the nodes being introspected and/or deployed? I wonder if the instackenv.json file is accurate.
> > > > > Also, what's the output from 'nova flavor-list'?
> > > > > Thanks.
> > > > > Best regards, Sasha Chuzhoy.
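Since the accuracy of instackenv.json is in question, two quick checks help: one for the JSON syntax, one for IPMI reachability of each pm_addr. A sketch, assuming the file sits in the stack user's home directory and IPMIPASS stands in for the real password (elided in the mail above):

  python -m json.tool ~/instackenv.json >/dev/null && echo "JSON syntax OK"
  for addr in 192.168.0.18 192.168.0.19; do
      ipmitool -I lanplus -H "$addr" -U root -P "$IPMIPASS" chassis power status
  done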
----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> Sent: Monday, October 19, 2015 9:51:51 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 NICs.
>
> The undercloud machine's IP config is as follows:
>
> [stack at undercloud ~]$ ip addr
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: em1: mtu 1500 qdisc mq state UP qlen 1000
>     link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
>     inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
>        valid_lft forever preferred_lft forever
>     inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
>        valid_lft forever preferred_lft forever
> 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
>     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> 4: ovs-system: mtu 1500 qdisc noop state DOWN
>     link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
>     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
>     inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
>        valid_lft forever preferred_lft forever
>     inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
>        valid_lft forever preferred_lft forever
> 6: br-int: mtu 1500 qdisc noop state DOWN
>     link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
>
> I am using em2 for PXE boot on the other machines, so I configured
> instackenv.json to have em2's MAC address.
> For the overcloud nodes, em1 was configured to have a 10.1.34.x IP, but after
> the image deploy I am not sure what happened to that NIC.
>
> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> ----- Original Message -----
> > From: "Marius Cornea"
> > To: "Esra Celik"
> > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> > Sent: Monday, October 19, 2015 15:36:58
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Hi,
> >
> > I believe the nodes were stuck in introspection, so they were not ready for
> > deployment, thus the "not enough hosts" message. Can you describe the
> > networking setup (how many NICs the nodes have and to what networks they're
> > connected)?
> >
> > Thanks,
> > Marius
> >
> > ----- Original Message -----
> > > From: "Esra Celik"
> > > To: "Sasha Chuzhoy"
> > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > Sent: Monday, October 19, 2015 12:34:32 PM
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Hi again,
> > >
> > > "nova list" was empty after the introspection stage, which was not completed
> > > successfully, so I could not ssh to the nodes. Is there another way to
> > > obtain the IP addresses?
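When nova list is empty, the introspection DHCP server on the undercloud is another place to look for the addresses the nodes were handed; a sketch, assuming the inspector's dnsmasq logged its leases to the journal:

  sudo journalctl -u openstack-ironic-inspector-dnsmasq.service | grep -i dhcpack
  arp -n | grep '^192\.0\.2\.'     # neighbours seen on the 192.0.2.0/24 ctlplane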
> > > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> > >
> > > If I start deployment anyway, I get 2 nodes in ERROR state:
> > >
> > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > Stack failed with status: resources.Controller: resources[0]: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
> > >
> > > [stack at undercloud ~]$ nova list
> > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > | ID                                   | Name                    | Status | Task State | Power State | Networks |
> > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
> > > | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
> > > +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> > >
> > > Did the repositories update during the weekend? Should I restart the overall Undercloud and Overcloud installation from the beginning?
> > >
> > > Thanks.
> > >
> > > Esra ÇELİK
> > > Researcher (Uzman Araştırmacı)
> > > Informatics Technologies Institute (Bilişim Teknolojileri Enstitüsü)
> > > TÜBİTAK BİLGEM
> > > 41470 GEBZE - KOCAELİ
> > > T +90 262 675 3140
> > > F +90 262 646 3187
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > > ................................................................
> > > Disclaimer
> > >
> > > ----- Original Message -----
> > > > From: "Sasha Chuzhoy"
> > > > To: "Esra Celik"
> > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > Sent: Friday, October 16, 2015 18:44:49
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > Hi Esra,
> > > > if the undercloud nodes are UP - you can login with: ssh heat-admin@
> > > > You can see the IP of the nodes with: "nova list".
> > > >
> > > > BTW, what do you see if you run "sudo systemctl|grep ironic" on the undercloud?
> > > >
> > > > Best regards,
> > > > Sasha Chuzhoy.
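Once nova list does report an address, the login Sasha describes can be done in one line; a sketch, assuming the Networks column carries a ctlplane=<ip> entry as is usual for these deployments:

  ip=$(nova list | sed -n 's/.*ctlplane=\([0-9.]*\).*/\1/p' | head -1)
  ssh heat-admin@"$ip"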
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Sasha Chuzhoy"
> > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > Sent: Friday, October 16, 2015 1:40:16 AM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Hi Sasha,
> > > > >
> > > > > I have 3 nodes: 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute.
> > > > >
> > > > > This is my undercloud.conf file:
> > > > > image_path = .
> > > > > local_ip = 192.0.2.1/24
> > > > > local_interface = em2
> > > > > masquerade_network = 192.0.2.0/24
> > > > > dhcp_start = 192.0.2.5
> > > > > dhcp_end = 192.0.2.24
> > > > > network_cidr = 192.0.2.0/24
> > > > > network_gateway = 192.0.2.1
> > > > > inspection_interface = br-ctlplane
> > > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > > inspection_runbench = false
> > > > > undercloud_debug = true
> > > > > enable_tuskar = false
> > > > > enable_tempest = false
> > > > >
> > > > > IP configuration for the Undercloud is as follows:
> > > > >
> > > > > [stack at undercloud ~]$ ip addr
> > > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > >     inet 127.0.0.1/8 scope host lo
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 ::1/128 scope host
> > > > >        valid_lft forever preferred_lft forever
> > > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > > >     link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > > >     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > > >     link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > > >     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > > >     link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > > >
> > > > > And I attached two screenshots showing the boot stage for the overcloud nodes.
> > > > >
> > > > > Is there a way to login to the overcloud nodes to see their IP configuration?
> > > > >
> > > > > Thanks
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Sasha Chuzhoy"
> > > > > > To: "Esra Celik"
> > > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > > Sent: Thursday, October 15, 2015 16:58:41
> > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > Just my 2 cents.
> > > > > > Did you make sure that all the registered nodes are configured to boot off the right NIC first?
> > > > > > Can you watch the console and see what happens on the problematic nodes upon boot?
> > > > > >
> > > > > > Best regards,
> > > > > > Sasha Chuzhoy.
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Esra Celik"
> > > > > > > To: "Marius Cornea"
> > > > > > > Cc: rdo-list at redhat.com
> > > > > > > Sent: Thursday, October 15, 2015 4:40:46 AM
> > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > Sorry for the late reply.
> > > > > > >
> > > > > > > The ironic node-show results are below. I have my nodes powered on after
> > > > > > > introspection bulk start, and I get the following warning:
> > > > > > >
> > > > > > > Introspection didn't finish for nodes 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > >
> > > > > > > It doesn't seem to be the same issue as
> > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
> > > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
> > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | Property               | Value                                                                   |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | target_power_state     | None                                                                    |
> > > > > > > | extra                  | {}                                                                      |
> > > > > > > | last_error             | None                                                                    |
> > > > > > > | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance_reason     | None                                                                    |
> > > > > > > | provision_state        | available                                                               |
> > > > > > > | clean_step             | {}                                                                      |
> > > > > > > | uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed                                    |
> > > > > > > | console_enabled        | False                                                                   |
> > > > > > > | target_provision_state | None                                                                    |
> > > > > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance            | False                                                                   |
> > > > > > > | inspection_started_at  | None                                                                    |
> > > > > > > | inspection_finished_at | None                                                                    |
> > > > > > > | power_state            | power on                                                                |
> > > > > > > | driver                 | pxe_ipmitool                                                            |
> > > > > > > | reservation            | None                                                                    |
> > > > > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10',     |
> > > > > > > |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> > > > > > > | instance_uuid          | None                                                                    |
> > > > > > > | name                   | None                                                                    |
> > > > > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18',         |
> > > > > > > |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> > > > > > > |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
> > > > > > > |                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
> > > > > > > | created_at             | 2015-10-15T07:49:08+00:00                                               |
> > > > > > > | driver_internal_info   | {u'clean_steps': None}                                                  |
> > > > > > > | chassis_uuid           |                                                                         |
> > > > > > > | instance_info          | {}                                                                      |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > >
> > > > > > > [stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | Property               | Value                                                                   |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > | target_power_state     | None                                                                    |
> > > > > > > | extra                  | {}                                                                      |
> > > > > > > | last_error             | None                                                                    |
> > > > > > > | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance_reason     | None                                                                    |
> > > > > > > | provision_state        | available                                                               |
> > > > > > > | clean_step             | {}                                                                      |
> > > > > > > | uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218                                    |
> > > > > > > | console_enabled        | False                                                                   |
> > > > > > > | target_provision_state | None                                                                    |
> > > > > > > | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> > > > > > > | maintenance            | False                                                                   |
> > > > > > > | inspection_started_at  | None                                                                    |
> > > > > > > | inspection_finished_at | None                                                                    |
> > > > > > > | power_state            | power on                                                                |
> > > > > > > | driver                 | pxe_ipmitool                                                            |
> > > > > > > | reservation            | None                                                                    |
> > > > > > > | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100',    |
> > > > > > > |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> > > > > > > | instance_uuid          | None                                                                    |
> > > > > > > | name                   | None                                                                    |
> > > > > > > | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19',         |
> > > > > > > |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> > > > > > > |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
> > > > > > > |                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
> > > > > > > | created_at             | 2015-10-15T07:49:08+00:00                                               |
> > > > > > > | driver_internal_info   | {u'clean_steps': None}                                                  |
> > > > > > > | chassis_uuid           |                                                                         |
> > > > > > > | instance_info          | {}                                                                      |
> > > > > > > +------------------------+-------------------------------------------------------------------------+
> > > > > > > [stack at undercloud ~]$
> > > > > > >
> > > > > > > And below I added my history for the stack user. I don't think I am doing
> > > > > > > anything other than the
> > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty doc:
> > > > > > >
> > > > > > > 1  vi instackenv.json
> > > > > > > 2  sudo yum -y install epel-release
> > > > > > > 3  sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> > > > > > > 4  sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> > > > > > > 5  sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
> > > > > > > 6  sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
> > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> > > > > > > EOF"
> > > > > > > 7  sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> > > > > > > 8  sudo yum -y install yum-plugin-priorities
> > > > > > > 9  sudo yum install -y python-tripleoclient
> > > > > > > 10  cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> > > > > > > 11  vi undercloud.conf
> > > > > > > 12  export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 13  openstack undercloud install
> > > > > > > 14  source stackrc
> > > > > > > 15  export NODE_DIST=centos7
> > > > > > > 16  export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
> > > > > > > 17  export DIB_INSTALLTYPE_puppet_modules=source
> > > > > > > 18  openstack overcloud image build --all
> > > > > > > 19  ls
> > > > > > > 20  openstack overcloud image upload
> > > > > > > 21  openstack baremetal import --json instackenv.json
> > > > > > > 22  openstack baremetal configure boot
> > > > > > > 23  ironic node-list
> > > > > > > 24  openstack baremetal introspection bulk start
> > > > > > > 25  ironic node-list
> > > > > > > 26  ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> > > > > > > 27  ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> > > > > > > 28  history
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > Esra ÇELİK
> > > > > > > TÜBİTAK BİLGEM
> > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > celik.esra at tubitak.gov.tr
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > > From: "Marius Cornea"
> > > > > > > > To: "Esra Celik"
> > > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > Sent: Wednesday, October 14, 2015 19:40:07
> > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > >
> > > > > > > > Can you do ironic node-show for your ironic nodes and post the results?
> > > > > > > > Also check the following suggestion if you're experiencing the same issue:
> > > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
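Marius's node-show request can be answered for every registered node in one pass; a minimal sketch, assuming stackrc is sourced, that also narrows the output to the fields the scheduler cares about:

  for uuid in $(ironic node-list | awk '{print $2}' | grep -E '^[0-9a-f]{8}-'); do
      echo "=== $uuid ==="
      ironic node-show "$uuid" | grep -E 'provision_state|power_state|properties'
  done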
> > > > > > > >
> > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Esra Celik"
> > > > > > > > > To: "Marius Cornea"
> > > > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > >
> > > > > > > > > Well, in the early stage of the introspection I can see the client IPs of the
> > > > > > > > > nodes (screenshot attached). But then I see continuous ironic-python-agent
> > > > > > > > > errors (screenshot-2 attached). The errors repeat after a timeout, and the
> > > > > > > > > nodes are not powered off.
> > > > > > > > >
> > > > > > > > > Seems like I am stuck in the introspection stage.
> > > > > > > > >
> > > > > > > > > I can use the ipmitool command to successfully power the nodes on/off:
> > > > > > > > >
> > > > > > > > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P  power status
> > > > > > > > > Chassis Power is on
> > > > > > > > >
> > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > > > > > > > Chassis Power is on
> > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power off
> > > > > > > > > Chassis Power Control: Down/Off
> > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > > > > > > > Chassis Power is off
> > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power on
> > > > > > > > > Chassis Power Control: Up/On
> > > > > > > > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > > > > > > > Chassis Power is on
> > > > > > > > >
> > > > > > > > > Esra ÇELİK
> > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: "Marius Cornea"
> > > > > > > > > > To: "Esra Celik"
> > > > > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > > > Sent: Wednesday, October 14, 2015 14:59:30
> > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > >
> > > > > > > > > > ----- Original Message -----
> > > > > > > > > > > From: "Esra Celik"
> > > > > > > > > > > To: "Marius Cornea"
> > > > > > > > > > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > > > > > > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > Well, today I started by re-installing the OS, and nothing seems wrong with
> > > > > > > > > > > the undercloud installation; then I see an error during the image build:
> > > > > > > > > > >
> > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud image build --all
> > > > > > > > > > > ... a lot of log ...
> > > > > > > > > > > ++ cat /etc/dib_dracut_drivers
> > > > > > > > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net virtio_blk target_core_mod iscsi_target_mod target_core_iblock target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> > > > > > > > > > > cat: write error: Broken pipe
> > > > > > > > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > > > > > > > > > + chmod o+r /tmp/kernel
> > > > > > > > > > > + trap EXIT
> > > > > > > > > > > + target_tag=99-build-dracut-ramdisk
> > > > > > > > > > > + date +%s.%N
> > > > > > > > > > > + output '99-build-dracut-ramdisk completed'
> > > > > > > > > > > ... a lot of log ...
> > > > > > > > > >
> > > > > > > > > > You can ignore that afaik; if you end up having all the required images it should be ok.
> > > > > > > > > >
> > > > > > > > > > > Then, during the introspection stage I see ironic-python-agent errors on the
> > > > > > > > > > > nodes (screenshot attached) and the following warnings:
> > > > > > > > > >
> > > > > > > > > > That looks odd. Is it showing up in the early stage of the introspection? At
> > > > > > > > > > some point it should receive an address by DHCP and the Network is
> > > > > > > > > > unreachable error should disappear. Does the introspection complete and the
> > > > > > > > > > nodes are turned off?
> > > > > > > > > >
> > > > > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from group "pxe" is deprecated. Use option "http_url" from group "deploy".
> > > > > > > > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from group "pxe" is deprecated. Use option "http_root" from group "deploy".
> > > > > > > > > > >
> > > > > > > > > > > Before deployment, ironic node-list:
> > > > > > > > > >
> > > > > > > > > > This is odd too, as I'm expecting the nodes to be powered off before running deployment.
> > > > > > > > > >
> > > > > > > > > > > [stack at undercloud ~]$ ironic node-list
> > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > > > > > > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > > > > > > >
> > > > > > > > > > > During deployment I get the following errors:
> > > > > > > > > > >
> > > > > > > > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while running command.
> > > > > > > > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740
> 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not
> get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of
> 3. Error: IPMI call failed: power status.

This looks like an ipmi error, can you try to manually run commands using
the ipmitool and see if you get any success? It's also worth filing a bug
with details such as the ipmitool version, server model, drac firmware
version.
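A minimal manual check along those lines, reusing the address and username from the failing call above (the password is a placeholder here, not the real value):

    # ask the BMC for its power state directly, bypassing Ironic
    ipmitool -I lanplus -H 192.168.0.19 -U root -P <ipmi_password> power status
    # if that times out, verify basic reachability of the BMC first
    ping -c 3 192.168.0.19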
> Thanks a lot
>
> ----- Original Message -----
> From: "Marius Cornea"
> To: "Esra Celik"
> Cc: "Ignacio Bravo" , rdo-list at redhat.com
> Sent: Tuesday, October 13, 2015 21:16:14
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> > ----- Original Message -----
> > From: "Esra Celik"
> > To: "Marius Cornea"
> > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > > During deployment they are powering on and deploying the images. I see a
> > > lot of connection error messages about ironic-python-agent but ignore
> > > them as mentioned here
> > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> >
> > That was referring to the introspection stage. From what I can tell you are
> > experiencing issues during deployment, as it fails to provision the nova
> > instances; can you check if during that stage the nodes get powered on?
> >
> > Make sure that before overcloud deploy the ironic nodes are available for
> > provisioning (ironic node-list and check the provisioning state column).
> > Also check that you didn't miss any step in the docs in regards to kernel
> > and ramdisk assignment, introspection, and flavor creation (so it matches
> > the nodes' resources):
> > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> >
> > > In the instackenv.json file I do not need to add the undercloud node, or do I?
> >
> > No, the nodes' details should be enough.
> >
> > > And which log files should I watch during deployment?
> >
> > You can check the openstack-ironic-conductor logs (journalctl -fl -u
> > openstack-ironic-conductor.service) and the logs in /var/log/nova.
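Concretely, that could look like the following in separate terminals (the exact file names under /var/log/nova vary by release, so treat them as assumptions):

    journalctl -fl -u openstack-ironic-conductor.service
    sudo tail -f /var/log/nova/nova-scheduler.log
    sudo tail -f /var/log/nova/nova-conductor.log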
> > > Thanks
> > > Esra
> > >
> > > ----- Original Message -----
> > > From: "Marius Cornea"
> > > To: "Esra Celik"
> > > Cc: "Ignacio Bravo" , rdo-list at redhat.com
> > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Ignacio Bravo"
> > > > Cc: rdo-list at redhat.com
> > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > > Actually I re-installed the OS for the Undercloud before deploying.
> > > > > However, I did not re-install the OS on the Compute and Controller
> > > > > nodes. I will reinstall the basic OS for them too, and retry.
> > > >
> > > > You don't need to reinstall the OS on the controller and compute, they
> > > > will get the image served by the undercloud. I'd recommend that during
> > > > deployment you watch the servers' console and make sure they get
> > > > powered on, pxe boot, and actually get the image deployed.
> > > >
> > > > > Thanks
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > > ----- Original Message -----
> > > > > From: "Ignacio Bravo"
> > > > > To: "Esra Celik"
> > > > > Cc: rdo-list at redhat.com
> > > > > Sent: Tuesday, October 13, 2015 16:36:06
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > > Esra,
> > > > > >
> > > > > > I encountered the same problem after deleting the stack and
> > > > > > re-deploying. It turns out that 'heat stack-delete overcloud' does
> > > > > > remove the nodes from 'nova list', and one would assume that the
> > > > > > baremetal servers are now ready to be used for the next stack, but
> > > > > > when redeploying I get the same message of not enough hosts
> > > > > > available. You can look into the nova logs and it mentions
> > > > > > something about 'node xxx is already associated with UUID yyyy'
> > > > > > and 'I tried 3 times and I'm giving up'. The issue is that the
> > > > > > UUID yyyy belonged to a prior unsuccessful deployment.
> > > > > >
> > > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > > >
> > > > > > IB
> > > > > >
> > > > > > __
> > > > > > Ignacio Bravo
> > > > > > LTG Federal, Inc
> > > > > > www.ltgfederal.com
> > > > > > Office: (703) 951-7760
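A quick way to spot that state and return such nodes to the pool, sketched with commands that appear later in this thread (substitute your own UUIDs):

    # a node still tied to a deleted stack shows a non-empty Instance UUID
    ironic node-list
    # put it back into the available pool before redeploying
    ironic node-set-provision-state <node-uuid> provide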
> > > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik <celik.esra at tubitak.gov.tr> wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > > > Deploying templates in the directory
> > > > > > > /usr/share/openstack-tripleo-heat-templates
> > > > > > > Stack failed with status: Resource CREATE failed:
> > > > > > > resources.Compute: ResourceInError:
> > > > > > > resources[0].resources.NovaCompute: Went to status ERROR due to
> > > > > > > "Message: No valid host was found. There are not enough hosts
> > > > > > > available., Code: 500"
> > > > > > > Heat Stack create failed.
> > > > > > >
> > > > > > > Here are some logs:
> > > > > > >
> > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > > > >
> > > > > > > | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name |
> > > > > > > | Compute | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |
> > > > > > > | Controller | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED | 2015-10-13T10:20:36 | overcloud |
> > > > > > > | 0 | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs |
> > > > > > > | 0 | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute | CREATE_FAILED | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r |
> > > > > > > | Controller | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > > > | NovaCompute | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server | CREATE_FAILED | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef |
> > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > > > > | Property | Value |
> > > > > > > | attributes | { "attributes": null, "refs": null } |
> > > > > > > | creation_time | 2015-10-13T10:20:36 |
> > > > > > > | description | |
> > > > > > > | links | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > > > > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > > > > | | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > > > > | logical_resource_id | Compute |
> > > > > > > | physical_resource_id | e33b6b1e-8740-4ded-ad7f-720617a03393 |
> > > > > > > | required_by | ComputeAllNodesDeployment |
> > > > > > > | | ComputeNodesPostDeployment |
> > > > > > > | | ComputeCephDeployment |
> > > > > > > | | ComputeAllNodesValidationDeployment |
> > > > > > > | | AllNodesExtraConfig |
> > > > > > > | | allNodesConfig |
> > > > > > > | resource_name | Compute |
> > > > > > > | resource_status | CREATE_FAILED |
> > > > > > > | resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" |
> > > > > > > | resource_type | OS::Heat::ResourceGroup |
> > > > > > > | updated_time | 2015-10-13T10:20:36 |
> > > > > > >
> > > > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed.
> > > > > > >
> > > > > > > {
> > > > > > >   "nodes": [
> > > > > > >     {
> > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > >       "mac": [ "08:9E:01:58:CC:A1" ],
> > > > > > >       "cpu": "4",
> > > > > > >       "memory": "8192",
> > > > > > >       "disk": "10",
> > > > > > >       "arch": "x86_64",
> > > > > > >       "pm_user": "root",
> > > > > > >       "pm_password": "calvin",
> > > > > > >       "pm_addr": "192.168.0.18"
> > > > > > >     },
> > > > > > >     {
> > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > >       "mac": [ "08:9E:01:58:D0:3D" ],
> > > > > > >       "cpu": "4",
> > > > > > >       "memory": "8192",
> > > > > > >       "disk": "100",
> > > > > > >       "arch": "x86_64",
> > > > > > >       "pm_user": "root",
> > > > > > >       "pm_password": "calvin",
> > > > > > >       "pm_addr": "192.168.0.19"
> > > > > > >     }
> > > > > > >   ]
> > > > > > > }
> > > > > > >
> > > > > > > Any ideas?
> > > > > > > Thanks in advance
> > > > > > >
> > > > > > > Esra ÇELİK
> > > > > > > TÜBİTAK BİLGEM
> > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > celik.esra at tubitak.gov.tr
> > > > > > >
> > > > > > > _______________________________________________
> > > > > > > Rdo-list mailing list
> > > > > > > Rdo-list at redhat.com
> > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > > >
> > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From whayutin at redhat.com  Tue Oct 20 17:36:42 2015
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 20 Oct 2015 13:36:42 -0400
Subject: [Rdo-list] [CI] rdo-manager liberty delete nodes fails
Message-ID:

FYI..
https://bugzilla.redhat.com/show_bug.cgi?id=1273574

StopIteration
clean_up DeleteNode:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 112, in run
    ret_val = super(OpenStackShell, self).run(argv)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 255, in run
    result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 374, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/command.py", line 54, in run
    self.take_action(parsed_args)
  File "/usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_node.py", line 64, in take_action
    scale_manager.scaledown(parsed_args.nodes)
  File "/usr/lib/python2.7/site-packages/tripleo_common/scale.py", line 91, in scaledown
    resources_by_role, resources)
  File "/usr/lib/python2.7/site-packages/tripleo_common/scale.py", line 131, in _get_removal_params_from_heat
    role, role_resources, resources)
  File "/usr/lib/python2.7/site-packages/tripleo_common/scale.py", line 29, in get_group_resources_after_delete
    group = next(res for res in resources if
StopIteration
END return value: 1
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From akrivoka at redhat.com  Tue Oct 20 18:04:37 2015
From: akrivoka at redhat.com (Ana Krivokapic)
Date: Tue, 20 Oct 2015 20:04:37 +0200
Subject: [Rdo-list] Undercloud UI
In-Reply-To: <1522468676.320587.1445348489875.JavaMail.zimbra@tubitak.gov.tr>
References: <312239508.320257.1445347438841.JavaMail.zimbra@tubitak.gov.tr>
 <1522468676.320587.1445348489875.JavaMail.zimbra@tubitak.gov.tr>
Message-ID:

Hi Mustafa,

Yes, we have one! :)

The code is located on GitHub [1] and you can contribute by submitting
patches to GerritHub [2]. The README at [1] contains info on how to get
the installation up and running, as well as the contribution process.
Please note, though, that this is a very new project and still very much a
work in progress.

Let us know if you have any further questions.

[1] https://github.com/rdo-management/rdo-director-ui
[2] https://review.gerrithub.io/#/q/project:rdo-management/rdo-director-ui
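For reference, the usual flow would look roughly like this (a sketch; the git-review step is an assumption based on the GerritHub workflow, and the README in [1] is authoritative):

    git clone https://github.com/rdo-management/rdo-director-ui.git
    cd rdo-director-ui
    # patches go to GerritHub rather than GitHub pull requests
    git review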
Kind Regards,
Ana Krivokapic

On Tue, Oct 20, 2015 at 3:41 PM, Mustafa ÇELİK (BİLGEM-BTE) <
mustafa.celik at tubitak.gov.tr> wrote:

> Hi Everyone,
>
> Is there any UI for undercloud installation? Or any project in progress?
> If we implement one, how can we contribute it?
>
> Thanks,
>
> Mustafa ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> mustafa.celik at tubitak.gov.tr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From apevec at gmail.com  Tue Oct 20 21:59:55 2015
From: apevec at gmail.com (Alan Pevec)
Date: Tue, 20 Oct 2015 23:59:55 +0200
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To:
References: <561DF4B5.7040303@lanabrindley.com>
 <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com>
 <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
 <5620BA64.6090501@berendt.io>
Message-ID:

>> Which packages should be used for testing Liberty? In the guide we wrote
>> that
>> http://rdo.fedorapeople.org/openstack-liberty/rdo-release-liberty.rpm
>> should be used. This repository is not yet available, so we cannot test
>> with this repository.

For backward compatibility and branding reasons, I've uploaded
rdo-release-liberty.rpm at the above URL, so this part of the manual works
now. I would just like it to be updated to use the redirect URL we have on
the main website, so we could change the hosting location:
http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm

Cheers,
Alan

From sasha at redhat.com  Tue Oct 20 22:54:25 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Tue, 20 Oct 2015 18:54:25 -0400 (EDT)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid
 host was found"
In-Reply-To: <159935625.61244072.1445355023202.JavaMail.zimbra@redhat.com>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
 <1327898261.6388383.1445262711501.JavaMail.zimbra@tubitak.gov.tr>
 <1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>
 <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr>
 <1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com>
 <104309364.6608280.1445319101175.JavaMail.zimbra@tubitak.gov.tr>
 <1238139899.44937284.1445335371251.JavaMail.zimbra@redhat.com>
 <159935625.61244072.1445355023202.JavaMail.zimbra@redhat.com>
Message-ID: <713023706.61472910.1445381665550.JavaMail.zimbra@redhat.com>

Esra,
Just for the sake of trying, would it be possible for you to re-deploy the
undercloud with the default IP addresses in undercloud.conf and let us
know the result? I ran into something similar recently.
Thanks.

Best regards,
Sasha Chuzhoy.
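Spelled out, that re-deploy boils down to restoring the sample config and re-running the installer (both commands appear in Esra's history later in this thread; keeping the default IP/network values and setting only local_interface is the assumption):

    cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
    # edit only local_interface, leave the 192.0.2.x defaults in place
    openstack undercloud install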
----- Original Message -----
> From: "Sasha Chuzhoy"
> To: "Esra Celik"
> Cc: rdo-list at redhat.com
> Sent: Tuesday, October 20, 2015 11:30:23 AM
> Subject: Re: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
>
> Hi Esra,
> since the introspection fails continuously in addition to the deployment,
> I start wondering if everything is connected right.
> Could you please describe (and double check) how your nodes are
> interconnected in the setup, i.e. what nics are connected, and whether
> there is any additional configuration on the switch ports.
> Thanks.
>
> Best regards,
> Sasha Chuzhoy.
>
> ----- Original Message -----
> > From: "Marius Cornea"
> > To: "Esra Celik"
> > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> > Sent: Tuesday, October 20, 2015 6:02:51 AM
> > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Hi,
> >
> > From what I can tell from the screenshots, DHCP fails for both of the
> > nics after loading the inspector image, thus the nodes have no ip
> > address and the Network is unreachable message. Can you see any DHCP
> > messages (output of dhclient) on the console? You could try leaving the
> > nodes connected *only* to the provisioning network and rerun
> > introspection.
> >
> > Thanks,
> > Marius
> >
> > ----- Original Message -----
> > > From: "Esra Celik"
> > > To: "Sasha Chuzhoy"
> > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > Sent: Tuesday, October 20, 2015 7:31:41 AM
> > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Ok, I ran ironic node-set-provision-state [UUID] provide for each node
> > > and retried deployment. I attached the screenshots.
> > >
> > > [stack at undercloud ~]$ ironic node-list
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power off   | available          | False       |
> > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power off   | available          | False       |
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > >
> > > [stack at undercloud ~]$ nova flavor-list
> > > | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
> > > | b9428c86-5696-4d68-a0e0-77faf4e7f627 | baremetal | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
> > >
> > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > Stack failed with status: resources.Controller: resources[0]:
> > > ResourceInError: resources.Controller: Went to status ERROR due to
> > > "Message: No valid host was found. There are not enough hosts
> > > available., Code: 500"
> > > Heat Stack update failed.
> > >
> > > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> > >
> > > "journalctl -fl -u openstack-ironic-conductor.service" gives no
> > > warning or error.
> > >
> > > Regards
> > >
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > >
> > > ----- Original Message -----
> > > From: "Sasha Chuzhoy"
> > > To: "Esra Celik"
> > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > Sent: Monday, October 19, 2015 19:13:24
> > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > > Could you please
> > > > 1. run 'ironic node-set-provision-state [UUID] provide' for each
> > > >    node, where UUID is replaced with the actual UUID of the node
> > > >    (ironic node-list).
> > > > 2. retry the deployment
> > > > Thanks.
> > > >
> > > > Best regards,
> > > > Sasha Chuzhoy.
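One more check along the lines of Marius' earlier advice: compare what the flavor demands with what each node advertises (a sketch; the UUIDs come from ironic node-list). A 40 GB flavor disk against a node registered with a 10 GB disk, for example, would show up here:

    # flavor requirements the scheduler will try to match
    nova flavor-list
    # per-node properties registered in ironic (memory_mb, local_gb, cpus)
    ironic node-show <node-uuid> | grep -A 2 properties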
> > > > ----- Original Message -----
> > > > From: "Esra Celik"
> > > > To: "Sasha Chuzhoy"
> > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > Sent: Monday, October 19, 2015 11:39:46 AM
> > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > > Hi Sasha,
> > > > >
> > > > > This is my instackenv.json. The MAC addresses are the em2
> > > > > interfaces' MAC addresses of the nodes:
> > > > >
> > > > > {
> > > > >   "nodes": [
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [ "08:9E:01:58:CC:A1" ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "10",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "",
> > > > >       "pm_addr": "192.168.0.18"
> > > > >     },
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [ "08:9E:01:58:D0:3D" ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "100",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "",
> > > > >       "pm_addr": "192.168.0.19"
> > > > >     }
> > > > >   ]
> > > > > }
> > > > >
> > > > > This is my undercloud.conf file:
> > > > >
> > > > > image_path = .
> > > > > local_ip = 192.0.2.1/24
> > > > > local_interface = em2
> > > > > masquerade_network = 192.0.2.0/24
> > > > > dhcp_start = 192.0.2.5
> > > > > dhcp_end = 192.0.2.24
> > > > > network_cidr = 192.0.2.0/24
> > > > > network_gateway = 192.0.2.1
> > > > > inspection_interface = br-ctlplane
> > > > > inspection_iprange = 192.0.2.100,192.0.2.120
> > > > > inspection_runbench = false
> > > > > undercloud_debug = true
> > > > > enable_tuskar = false
> > > > > enable_tempest = false
> > > > >
> > > > > I have previously sent the screenshots of the consoles during the
> > > > > introspection stage. Now I am attaching them again. I cannot login
> > > > > to the consoles because the introspection stage is not completed
> > > > > successfully and I don't know the IP addresses. (nova list is
> > > > > empty.) (I don't know if I can login with the IP addresses that I
> > > > > previously set myself. I am not able to reach the nodes now, from
> > > > > home.)
> > > > >
> > > > > I ran the flavor-create command after the introspection stage. But
> > > > > introspection was not completed successfully; I just ran the deploy
> > > > > command to see if nova list fills during deployment.
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > ----- Sasha Chuzhoy wrote: -----
> > > > >
> > > > > Esra,
> > > > > Is it possible to check the console of the nodes being introspected
> > > > > and/or deployed? I wonder if the instackenv.json file is accurate.
> > > > > Also, what's the output from 'nova flavor-list'?
> > > > > Thanks.
> > > > > Best regards,
> > > > > Sasha Chuzhoy.
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Marius Cornea"
> > > > > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> > > > > Sent: Monday, October 19, 2015 9:51:51 AM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 nics.
> > > > > The undercloud machine's ip config is as follows:
> > > > >
> > > > > [stack at undercloud ~]$ ip addr
> > > > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> > > > >    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > >    inet 127.0.0.1/8 scope host lo
> > > > >       valid_lft forever preferred_lft forever
> > > > >    inet6 ::1/128 scope host
> > > > >       valid_lft forever preferred_lft forever
> > > > > 2: em1: mtu 1500 qdisc mq state UP qlen 1000
> > > > >    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
> > > > >    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
> > > > >       valid_lft forever preferred_lft forever
> > > > >    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
> > > > >       valid_lft forever preferred_lft forever
> > > > > 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> > > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > > 4: ovs-system: mtu 1500 qdisc noop state DOWN
> > > > >    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> > > > > 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
> > > > >    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> > > > >    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
> > > > >       valid_lft forever preferred_lft forever
> > > > >    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
> > > > >       valid_lft forever preferred_lft forever
> > > > > 6: br-int: mtu 1500 qdisc noop state DOWN
> > > > >    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
> > > > >
> > > > > I am using em2 for pxe boot on the other machines, so I configured
> > > > > instackenv.json to have em2's MAC address. For the overcloud nodes,
> > > > > em1 was configured to have a 10.1.34.x ip, but after the image
> > > > > deploy I am not sure what happened to that nic.
> > > > >
> > > > > Thanks
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Marius Cornea"
> > > > > To: "Esra Celik"
> > > > > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com
> > > > > Sent: Monday, October 19, 2015 15:36:58
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Hi,
> > > > >
> > > > > I believe the nodes were stuck in introspection so they were not
> > > > > ready for deployment, thus the not enough hosts message. Can you
> > > > > describe the networking setup (how many nics the nodes have and to
> > > > > what networks they're connected)?
> > > > >
> > > > > Thanks,
> > > > > Marius
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Sasha Chuzhoy"
> > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > Sent: Monday, October 19, 2015 12:34:32 PM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Hi again,
> > > > >
> > > > > "nova list" was empty after the introspection stage, which was not
> > > > > completed successfully, so I could not ssh to the nodes. Is there
> > > > > another way to obtain the IP addresses?
> > > > > [stack at undercloud ~]$ sudo systemctl|grep ironic
> > > > > openstack-ironic-api.service loaded active running OpenStack Ironic API service
> > > > > openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> > > > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> > > > > openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
> > > > >
> > > > > If I start deployment anyway, I get 2 nodes in ERROR state:
> > > > >
> > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > Stack failed with status: resources.Controller: resources[0]:
> > > > > ResourceInError: resources.Controller: Went to status ERROR due to
> > > > > "Message: No valid host was found. There are not enough hosts
> > > > > available., Code: 500"
> > > > >
> > > > > [stack at undercloud ~]$ nova list
> > > > > | ID                                   | Name                    | Status | Task State | Power State | Networks |
> > > > > | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
> > > > > | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
> > > > >
> > > > > Did the repositories update during the weekend? Would it be better
> > > > > to restart the overall Undercloud and Overcloud installation from
> > > > > the beginning?
> > > > >
> > > > > Thanks.
> > > > >
> > > > > Esra ÇELİK
> > > > > Expert Researcher
> > > > > Bilişim Teknolojileri Enstitüsü
> > > > > TÜBİTAK BİLGEM
> > > > > 41470 GEBZE - KOCAELİ
> > > > > T +90 262 675 3140
> > > > > F +90 262 646 3187
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra at tubitak.gov.tr
> > > > > ................................................................
> > > > > Disclaimer
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Sasha Chuzhoy"
> > > > > To: "Esra Celik"
> > > > > Cc: "Marius Cornea" , rdo-list at redhat.com
> > > > > Sent: Friday, October 16, 2015 18:44:49
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Hi Esra,
> > > > > if the undercloud nodes are UP - you can login with: ssh heat-admin@
> > > > > You can see the IP of the nodes with: "nova list".
> > > > >
> > > > > BTW, what do you see if you run "sudo systemctl|grep ironic" on the
> > > > > undercloud?
> > > > >
> > > > > Best regards,
> > > > > Sasha Chuzhoy.
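Spelled out, assuming the deploy got far enough for the nodes to receive ctlplane addresses (the IP below is illustrative):

    # on the undercloud, as the stack user
    source stackrc
    nova list                  # shows each overcloud node's ctlplane IP
    ssh heat-admin@192.0.2.8   # key-based login, no password prompt expected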
1000 > > > > > link/ether > > > > > 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > > > > > 4: ovs-system: > > > > > mtu 1500 qdisc noop state DOWN > > > > > > > > > > link/ether > > > > > 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff > > > > > 5: br-ctlplane: > > > > > mtu 1500 qdisc > > > > > noqueue > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > state UNKNOWN > > > > > link/ether 08:9e:01:50:8a:22 brd > > > > > ff:ff:ff:ff:ff:ff > > > > > > > > > > > > > > > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > valid_lft forever preferred_lft forever > > > > > inet6 > > > > > fe80::a9e:1ff:fe50:8a22/64 scope link > > > > > valid_lft forever > > > > > preferred_lft forever > > > > > 6: br-int: mtu > > > > > 1500 > > > > > qdisc noop state DOWN > > > > > link/ether fa:85:ac:92:f5:41 brd > > > > > ff:ff:ff:ff:ff:ff > > > > > > > > > > And I attached two screenshots > > > > > showing > > > > > the boot stage for overcloud > > > > > nodes > > > > > > > > > > Is > > > > > there > > > > > a > > > > > way to login the overcloud nodes to see their IP > > > > > > > > > > configuration? > > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > Esra ÇEL?K > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > TÜB?TAK B?LGEM > > > > > www.bilgem.tubitak.gov.tr > > > > > > > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > ----- Orijinal Mesaj > > > > > ----- > > > > > > > > > > > > > > > > > > > > > > > > > > Kimden: "Sasha Chuzhoy" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Kime: "Esra Celik" > > > > > > Kk: > > > > > "Marius > > > > > Cornea" , rdo-list at redhat.com > > > > > > > > > > > Gönderilenler: 15 Ekim Per?embe 2015 16:58:41 > > > > > > Konu: > > > > > Re: > > > > > [Rdo-list] OverCloud deploy fails with error "No valid > > > > > > > > > > > host > > > > > > > > > > > > > > > > > > > > > was > > > > > > found" > > > > > > > > > > > Just my 2 cents. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Did you make sure that all the registered nodes are configured > > > > > > > > to > > > > > > > > > > > > > > > > > > > > > > > > > > > boot > > > > > > off > > > > > > the right NIC first? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Can you watch the console and see what happens on the problematic > > > > > > > > > > > > > > > > > > > > > > > > > > nodes > > > > > > upon > > > > > > boot? > > > > > > > > > > > Best > > > > > regards, > > > > > > Sasha Chuzhoy. > > > > > > > > > > > ----- > > > > > Original > > > > > Message ----- > > > > > > > From: "Esra Celik" > > > > > > > > > > > > > > > > > > > > > > To: "Marius Cornea" > > > > > > > > > > > > > > > > > > Cc: > > > > > rdo-list at redhat.com > > > > > > > Sent: Thursday, October 15, 2015 > > > > > 4:40:46 > > > > > AM > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with > > > > > error > > > > > "No > > > > > > > valid > > > > > > > host > > > > > > > was found" > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Sorry for the late reply > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ironic node-show results are below. 
I have my nodes power > > > > > > > > > > on > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > after > > > > > > > introspection bulk start. And I get the > > > > > following warning > > > > > > > Introspection didn't finish for nodes > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > > > > > > > > > > > > > > > > > > Doesn't seem to be the same issue with > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > > > | UUID | Name | Instance UUID | Power State | > > > > > > > > > > > > | Provisioning > > > > > State > > > > > > > | | > > > > > > > | Maintenance | > > > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | > > > > > > > > > > > > | power > > > > > on | > > > > > > > | available > > > > > > > | | > > > > > > > | > > > > > False > > > > > | > > > > > > > > > > > > > > > > > > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | > > > > > > > > > > | power > > > > > > > > > > | on > > > > > > > > > > | | > > > > > > > > > > > > | available > > > > > > > | | > > > > > > > | False | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > ironic > > > > > node-show > > > > > > > 5b28998f-4dc8-42aa-8a51-521e20b1e5ed > > > > > > > > > > > > > > > > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > | Property | Value | > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > | target_power_state | None | > > > > > > > | extra | > > > > > > > > > > > > | {} > > > > > > > > > > > > | | > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > | last_error | None | > > > > > > > | updated_at | > > > > > 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None > > > > > | > > > > > > > > > > > > > > > > > > > > > > | provision_state | available | > > > > > > > | clean_step > > > > > > > > > > | | > > > > > > > > > > | {} > > > > > > > > > > | | > > > > > > > > > > > > | uuid | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > | console_enabled | False | > > > > > > > | target_provision_state | > > > > > | None > > > > > | | > > > > > > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > 
> > > | > > > > > > > > | maintenance | False | > > > > > > > | inspection_started_at | > > > > > > > | None > > > > > > > | | > > > > > > > | > > > > > > > > > > > > | inspection_finished_at | None | > > > > > > > | > > > > > > > > > > > | power_state > > > > > > > > > > > | | > > > > > power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | > > > > > reservation | None | > > > > > > > | properties | {u'memory_mb': > > > > > u'8192', > > > > > u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | > > > > > u'10', > > > > > | > > > > > > > | | u'cpus': u'4', u'capabilities': > > > > > | > > > > > > > | | u'boot_option:local'} > > > > > | > > > > > > > | | | > > > > > > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > > | > > > > > > > > > | > > > > > > > > > | > > > > > > > > > | > > > > > > > > > | u'192.168.0.18', | > > > > > > > | | u'ipmi_username': > > > > > > > > | u'root', > > > > > u'deploy_kernel': > > > > > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- > > > > > | | | > > > > > > > | | | > > > > > | | | > > > > > > > | | > > > > > > | | | > > > > > > > | | > > > > > > > > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | > > > > > > > > > > | | created_at > > > > > > > > > > | | | > > > > > 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | > > > > > {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > > > > > > > > > > > | > > > > > instance_info | {} | > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > ironic > > > > > node-show > > > > > > > 6f35ac24-135d-4b99-8a24-fa2b731bd218 > > > > > > > > > > > > > > > > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > | Property | Value | > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > | target_power_state | None | > > > > > > > | extra | > > > > > > > > > > > > | {} > > > > > > > > > > > > | | > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > | last_error | None | > > > > > > > | updated_at | > > > > > 2015-10-15T08:26:42+00:00 | > > > > > > > | maintenance_reason | None > > > > > | > > > > > > > > > > > > > > > > > > > > > > | provision_state | available | > > > > > > > | clean_step > > > > > > > > > > | | > > > > > > > > > > | {} > > > > > > > > > > | | > > > > > > > > > > > > | uuid | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > | console_enabled | False | > > > > > > > | target_provision_state | > > > > > | None > > > > > | | > > > > > > > > > > > > | provision_updated_at | 2015-10-15T08:26:42+00:00 | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > | maintenance | False | > > > > > > > | inspection_started_at | > > > > > > > | None > 
> > > > > > | | > > > > > > > | > > > > > > > > > > > > | inspection_finished_at | None | > > > > > > > | > > > > > > > > > > > | power_state > > > > > > > > > > > | | > > > > > power on | > > > > > > > | driver | pxe_ipmitool | > > > > > > > | > > > > > reservation | None | > > > > > > > | properties | {u'memory_mb': > > > > > u'8192', > > > > > u'cpu_arch': u'x86_64', > > > > > > > | u'local_gb': > > > > > > > | > > > > > u'100', > > > > > | > > > > > > > | | u'cpus': u'4', u'capabilities': > > > > > | > > > > > > > | | u'boot_option:local'} > > > > > | > > > > > > > | | | > > > > > > > > > > > > | instance_uuid | None | > > > > > > > | name | None | > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > | driver_info | {u'ipmi_password': u'******', u'ipmi_address': > > > > > > > > | > > > > > > > > > | > > > > > > > > > | > > > > > > > > > | > > > > > > > > > | u'192.168.0.19', | > > > > > > > | | u'ipmi_username': > > > > > > > > | u'root', > > > > > u'deploy_kernel': > > > > > > > | | u'49a2c8d4-a283-4bdf-8d6f- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | > > > > > > > | | e83ae28da047', u'deploy_ramdisk': u'3db3dbed- > > > > > | | | > > > > > > > | | | > > > > > | | | > > > > > > > | | > > > > > > | | | > > > > > > > | | > > > > > > > > > > > | | 0d88-4632-af98-8defb05ca6e2'} | > > > > > > > | > > > > > > > > > > | | created_at > > > > > > > > > > | | | > > > > > 2015-10-15T07:49:08+00:00 | > > > > > > > | driver_internal_info | > > > > > {u'clean_steps': None} | > > > > > > > | chassis_uuid | | > > > > > > > > > > > > > > > > > | > > > > > instance_info | {} | > > > > > > > > > > > > +------------------------+-------------------------------------------------------------------------+ > > > > > > > > > > > > [stack at undercloud ~]$ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > And below I added my history for the stack > > > > > > > > > > > > > > > > > user. 
> I don't think I am doing anything other than what the
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty
> doc describes:
>
>     1  vi instackenv.json
>     2  sudo yum -y install epel-release
>     3  sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
>     4  sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
>     5  sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
>     6  sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
> includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> EOF"
>     7  sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
>     8  sudo yum -y install yum-plugin-priorities
>     9  sudo yum install -y python-tripleoclient
>    10  cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
>    11  vi undercloud.conf
>    12  export DIB_INSTALLTYPE_puppet_modules=source
>    13  openstack undercloud install
>    14  source stackrc
>    15  export NODE_DIST=centos7
>    16  export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
>    17  export DIB_INSTALLTYPE_puppet_modules=source
>    18  openstack overcloud image build --all
>    19  ls
>    20  openstack overcloud image upload
>    21  openstack baremetal import --json instackenv.json
>    22  openstack baremetal configure boot
>    23  ironic node-list
>    24  openstack baremetal introspection bulk start
>    25  ironic node-list
>    26  ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
>    27  ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
>    28  history
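> (For reference, after steps 5 and 6 the pinned repo should end up looking roughly
> like the sketch below. Only the [delorean-current] section name and the includepkgs
> line come from the history above; the baseurl, enabled and gpgcheck keys are
> assumptions based on the usual delorean.repo layout.)
>
>     # /etc/yum.repos.d/delorean-current.repo (sketch)
>     [delorean-current]
>     baseurl=http://trunk.rdoproject.org/centos7-liberty/current/
>     enabled=1
>     gpgcheck=0
>     includepkgs=diskimage-builder,openstack-heat,instack,...,openstack-puppet-modules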
> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> ----- Original Message -----
> > From: "Marius Cornea"
> > To: "Esra Celik"
> > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > Sent: Wednesday, October 14, 2015 19:40:07
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > Can you do ironic node-show for your ironic nodes and post the results?
> > Also check the following suggestion if you're experiencing the same issue:
> > https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
> >
> > ----- Original Message -----
> > > From: "Esra Celik"
> > > To: "Marius Cornea"
> > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Well in the early stage of the introspection I can see the Client IP of
> > > the nodes (screenshot attached). But then I see continuous
> > > ironic-python-agent errors (screenshot-2 attached). The errors repeat
> > > after a timeout, and the nodes are not powered off.
> > >
> > > Seems like I am stuck in the introspection stage..
> > > I can use the ipmitool command to successfully power the nodes on and off:
> > >
> > > [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P  power status
> > > Chassis Power is on
> > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > Chassis Power is on
> > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power off
> > > Chassis Power Control: Down/Off
> > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > Chassis Power is off
> > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power on
> > > Chassis Power Control: Up/On
> > > [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> > > Chassis Power is on
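> > > (A compact way to poll both BMCs in one pass - just a sketch; it assumes both
> > > nodes accept the same root credentials, and $IPMI_PASSWORD stands in for the
> > > password scrubbed from the session above:)
> > >
> > >     for ip in 192.168.0.18 192.168.0.19; do
> > >         # print which BMC answers and its chassis power state
> > >         echo -n "$ip: "
> > >         ipmitool -I lanplus -H "$ip" -U root -P "$IPMI_PASSWORD" chassis power status
> > >     done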
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra at tubitak.gov.tr
> > >
> > > ----- Original Message -----
> > > > From: "Marius Cornea"
> > > > To: "Esra Celik"
> > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > Sent: Wednesday, October 14, 2015 14:59:30
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik"
> > > > > To: "Marius Cornea"
> > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Well today I started with re-installing the OS, and nothing seems wrong
> > > > > with the undercloud installation. Then I see an error during image build:
> > > > >
> > > > > [stack at undercloud ~]$ openstack overcloud image build --all
> > > > > ...
> > > > > a lot of log
> > > > > ...
> > > > > ++ cat /etc/dib_dracut_drivers
> > > > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net virtio_blk target_core_mod iscsi_target_mod target_core_iblock target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> > > > > cat: write error: Broken pipe
> > > > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > > > + chmod o+r /tmp/kernel
> > > > > + trap EXIT
> > > > > + target_tag=99-build-dracut-ramdisk
> > > > > + date +%s.%N
> > > > > + output '99-build-dracut-ramdisk completed'
> > > > > ...
> > > > > a lot of log
> > > > > ...
> > > >
> > > > You can ignore that afaik, if you end up having all the required images it
> > > > should be ok.
> > > >
> > > > > Then, during the introspection stage I see ironic-python-agent errors on
> > > > > the nodes (screenshot attached) and the following warnings:
> > > >
> > > > That looks odd. Is it showing up in the early stage of the introspection?
> > > > At some point it should receive an address by DHCP and the Network is
> > > > unreachable error should disappear. Does the introspection complete and
> > > > the nodes get turned off?
> > > >
> > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from group "pxe" is deprecated. Use option "http_url" from group "deploy".
> > > > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from group "pxe" is deprecated. Use option "http_root" from group "deploy".
> > > > > Before deployment ironic node-list:
> > > >
> > > > This is odd too as I'm expecting the nodes to be powered off before
> > > > running deployment.
> > > >
> > > > > [stack at undercloud ~]$ ironic node-list
> > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > > >
> > > > > During deployment I get the following errors:
> > > > >
> > > > > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while running command.
> > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command.
> > > > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI call failed: power status..
> > > >
> > > > This looks like an ipmi error, can you try to manually run commands using
> > > > the ipmitool and see if you get any success? It's also worth filing a bug
> > > > with details such as the ipmitool version, server model, drac firmware
> > > > version.
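> > > > (Roughly the details worth attaching to such a bug report - a sketch;
> > > > mc info prints the BMC firmware revision, and $IPMI_PASSWORD is again a
> > > > placeholder for the scrubbed password:)
> > > >
> > > >     ipmitool -V                            # ipmitool client version
> > > >     sudo dmidecode -s system-product-name  # server model
> > > >     ipmitool -I lanplus -H 192.168.0.19 -U root -P "$IPMI_PASSWORD" mc info   # BMC firmware revision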
> > > > > Thanks a lot
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Marius Cornea"
> > > > > > To: "Esra Celik"
> > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > Sent: Tuesday, October 13, 2015 21:16:14
> > > > > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Esra Celik"
> > > > > > > To: "Marius Cornea"
> > > > > > > Cc: "Ignacio Bravo", rdo-list at redhat.com
> > > > > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > >
> > > > > > > During deployment they are powering on and deploying the images. I see
> > > > > > > a lot of connection error messages about ironic-python-agent but ignore
> > > > > > > them as mentioned here
> > > > > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > > > > >
> > > > > > That was referring to the introspection stage. From what I can tell you
> > > > > > are experiencing issues during deployment as it fails to provision the
> > > > > > nova instances, can you check if during that stage the nodes get powered
> > > > > > on?
> > > > > >
> > > > > > Make sure that before overcloud deploy the ironic nodes are available for
> > > > > > provisioning (ironic node-list and check the provisioning state column).
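> > > > > > (For instance, a sketch of that check - it prints only the rows that are
> > > > > > not in the "available" state; the table borders and header line will also
> > > > > > show up, which is harmless:)
> > > > > >
> > > > > >     ironic node-list | grep -v available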
> > > > > > Also check that you didn't miss any step in the docs in regards to
> > > > > > kernel and ramdisk assignment, introspection, and flavor creation (so it
> > > > > > matches the nodes' resources):
> > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html
> > > > > >
> > > > > > > In the instackenv.json file I do not need to add the undercloud node,
> > > > > > > or do I?
> > > > > >
> > > > > > No, the nodes' details should be enough.
> > > > > >
> > > > > > > And which log files should I watch during deployment?
> > > > > >
> > > > > > You can check the openstack-ironic-conductor logs (journalctl -fl -u
> > > > > > openstack-ironic-conductor.service) and the logs in /var/log/nova.
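> > > > > > (One way to watch both at the same time - a sketch; the exact nova log
> > > > > > file names are an assumption, pick whichever exist under /var/log/nova:)
> > > > > >
> > > > > >     journalctl -fl -u openstack-ironic-conductor.service &
> > > > > >     sudo tail -f /var/log/nova/nova-scheduler.log /var/log/nova/nova-compute.log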
> > > > > > > Thanks
> > > > > > > Esra
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > > From: Marius Cornea
> > > > > > > > To: Esra Celik
> > > > > > > > Cc: Ignacio Bravo, rdo-list at redhat.com
> > > > > > > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > >
> > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Esra Celik"
> > > > > > > > > To: "Ignacio Bravo"
> > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > >
> > > > > > > > > Actually I re-installed the OS for the Undercloud before deploying.
> > > > > > > > > However I did not re-install the OS on the Compute and Controller
> > > > > > > > > nodes.. I will reinstall a basic OS for them too, and retry..
> > > > > > > >
> > > > > > > > You don't need to reinstall the OS on the controller and compute, they
> > > > > > > > will get the image served by the undercloud. I'd recommend that during
> > > > > > > > deployment you watch the servers' console and make sure they get
> > > > > > > > powered on, PXE boot, and actually get the image deployed.
> > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > Esra ÇELİK
> > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > celik.esra at tubitak.gov.tr
> > > > > > > > >
> > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: "Ignacio Bravo"
> > > > > > > > > > To: "Esra Celik"
> > > > > > > > > > Cc: rdo-list at redhat.com
> > > > > > > > > > Sent: Tuesday, October 13, 2015 16:36:06
> > > > > > > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > Esra,
> > > > > > > > > >
> > > > > > > > > > I encountered the same problem after deleting the stack and
> > > > > > > > > > re-deploying. It turns out that 'heat stack-delete overcloud' does
> > > > > > > > > > remove the nodes from 'nova list', and one would assume that the
> > > > > > > > > > baremetal servers are now ready to be used for the next stack, but
> > > > > > > > > > when redeploying I get the same message of not enough hosts
> > > > > > > > > > available.
> > > > > > > > > >
> > > > > > > > > > You can look into the nova logs and it mentions something about
> > > > > > > > > > 'node xxx is already associated with UUID yyyy' and 'I tried 3
> > > > > > > > > > times and I'm giving up'. The issue is that the UUID yyyy belonged
> > > > > > > > > > to a prior unsuccessful deployment.
> > > > > > > > > >
> > > > > > > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > > > > > > >
> > > > > > > > > > IB
> > > > > > > > > >
> > > > > > > > > > __
> > > > > > > > > > Ignacio Bravo
> > > > > > > > > > LTG Federal, Inc
> > > > > > > > > > www.ltgfederal.com
> > > > > > > > > > Office: (703) 951-7760
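> > > > > > > > > > (A quick way to spot that stale association - a sketch: after the
> > > > > > > > > > stack delete finishes, any node that still shows an Instance UUID
> > > > > > > > > > in ironic while nova list is empty is still claimed by the old
> > > > > > > > > > deployment:)
> > > > > > > > > >
> > > > > > > > > >     nova list
> > > > > > > > > >     ironic node-list   # check the Instance UUID column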
> > > > > > > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi all,
> > > > > > > > > > >
> > > > > > > > > > > OverCloud deploy fails with error "No valid host was found"
> > > > > > > > > > >
> > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates
> > > > > > > > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > > > > > > > Stack failed with status: Resource CREATE failed: resources.Compute: ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
> > > > > > > > > > > Heat Stack create failed.
> > > > > > > > > > >
> > > > > > > > > > > Here are some logs:
> > > > > > > > > > >
> > > > > > > > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > > > > > > > >
> > > > > > > > > > > | resource_name | physical_resource_id                 | resource_type           | resource_status    | updated_time        | stack_name                                       |
> > > > > > > > > > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > > > > > > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > > > > > > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> > > > > > > > > > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute    | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> > > > > > > > > > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server        | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > > > > > > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server        | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
> > > > > > > > > > >
> > > > > > > > > > > [stack at undercloud ~]$ heat resource-show overcloud Compute
> > > > > > > > > > > | Property               | Value                                                                    |
> > > > > > > > > > > | attributes             | { "attributes": null, "refs": null }                                     |
> > > > > > > > > > > | creation_time          | 2015-10-13T10:20:36                                                      |
> > > > > > > > > > > | description            |                                                                          |
> > > > > > > > > > > | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> > > > > > > > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> > > > > > > > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> > > > > > > > > > > | logical_resource_id    | Compute                                                                  |
> > > > > > > > > > > | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393                                     |
> > > > > > > > > > > | required_by            | ComputeAllNodesDeployment, ComputeNodesPostDeployment,                   |
> > > > > > > > > > > |                        | ComputeCephDeployment, ComputeAllNodesValidationDeployment,              |
> > > > > > > > > > > |                        | AllNodesExtraConfig, allNodesConfig                                      |
> > > > > > > > > > > | resource_name          | Compute                                                                  |
> > > > > > > > > > > | resource_status        | CREATE_FAILED                                                            |
> > > > > > > > > > > | resource_status_reason | resources.Compute: ResourceInError: resources[0].resources.NovaCompute:  |
> > > > > > > > > > > |                        | Went to status ERROR due to "Message: No valid host was found. There     |
> > > > > > > > > > > |                        | are not enough hosts available., Code: 500"                              |
> > > > > > > > > > > | resource_type          | OS::Heat::ResourceGroup                                                  |
> > > > > > > > > > > | updated_time           | 2015-10-13T10:20:36                                                      |
> > > > > > > > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed:
> > > > > > > > > > >
> > > > > > > > > > > {
> > > > > > > > > > >   "nodes": [
> > > > > > > > > > >     {
> > > > > > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > > > > > >       "mac": [ "08:9E:01:58:CC:A1" ],
> > > > > > > > > > >       "cpu": "4",
> > > > > > > > > > >       "memory": "8192",
> > > > > > > > > > >       "disk": "10",
> > > > > > > > > > >       "arch": "x86_64",
> > > > > > > > > > >       "pm_user": "root",
> > > > > > > > > > >       "pm_password": "calvin",
> > > > > > > > > > >       "pm_addr": "192.168.0.18"
> > > > > > > > > > >     },
> > > > > > > > > > >     {
> > > > > > > > > > >       "pm_type": "pxe_ipmitool",
> > > > > > > > > > >       "mac": [ "08:9E:01:58:D0:3D" ],
> > > > > > > > > > >       "cpu": "4",
> > > > > > > > > > >       "memory": "8192",
> > > > > > > > > > >       "disk": "100",
> > > > > > > > > > >       "arch": "x86_64",
> > > > > > > > > > >       "pm_user": "root",
> > > > > > > > > > >       "pm_password": "calvin",
> > > > > > > > > > >       "pm_addr": "192.168.0.19"
> > > > > > > > > > >     }
> > > > > > > > > > >   ]
> > > > > > > > > > > }
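> > > > > > > > > > > (Before importing, the file can be syntax-checked - a sketch;
> > > > > > > > > > > json.tool only validates JSON syntax, not the field values:)
> > > > > > > > > > >
> > > > > > > > > > >     python -m json.tool instackenv.json >/dev/null && echo "instackenv.json parses OK"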
> > > > > > > > > > > Any ideas? Thanks in advance
> > > > > > > > > > >
> > > > > > > > > > > Esra ÇELİK
> > > > > > > > > > > TÜBİTAK BİLGEM
> > > > > > > > > > > www.bilgem.tubitak.gov.tr
> > > > > > > > > > > celik.esra at tubitak.gov.tr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From sasha at redhat.com Wed Oct 21 01:44:26 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Tue, 20 Oct 2015 21:44:26 -0400 (EDT)
Subject: [Rdo-list] Failed to deploy overcloud with network isolation on BM.
In-Reply-To: <1421541564.61524022.1445391447568.JavaMail.zimbra@redhat.com>
Message-ID: <1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com>

Hi all,
While I fail to install the overcloud using the latest bits[1], we were able to
deploy the undercloud[2] and build the images[3].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1273635
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1273647

Thanks.

Best regards,
Sasha Chuzhoy.
From celik.esra at tubitak.gov.tr Wed Oct 21 05:59:45 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Wed, 21 Oct 2015 08:59:45 +0300 (EEST)
Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
In-Reply-To: <713023706.61472910.1445381665550.JavaMail.zimbra@redhat.com>
References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr>
	<1632812339.60289332.1445266983089.JavaMail.zimbra@redhat.com>
	<1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr>
	<1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com>
	<104309364.6608280.1445319101175.JavaMail.zimbra@tubitak.gov.tr>
	<1238139899.44937284.1445335371251.JavaMail.zimbra@redhat.com>
	<159935625.61244072.1445355023202.JavaMail.zimbra@redhat.com>
	<713023706.61472910.1445381665550.JavaMail.zimbra@redhat.com>
Message-ID: <1827088230.7058274.1445407185286.JavaMail.zimbra@tubitak.gov.tr>

Hi Sasha,

I finally caught something. As the error messages go by quickly, I was not able
to see this error previously. I have set local_interface = em2 in the
undercloud.conf file, but diskimage-builder's dhcp-all-interfaces.sh script
tries to inspect ethX interfaces. I thought the stable-interface-names patch
for diskimage-builder solved this issue.. I don't know clearly.. Should I try
converting the interface names to ethX, or is this a bug that should be fixed?

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Original Message -----
> From: "Sasha Chuzhoy"
> To: "Esra Celik"
> Cc: rdo-list at redhat.com
> Sent: Wednesday, October 21, 2015 1:54:25
> Subject: Re: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
>
> Esra,
> Just for the sake of trying, would it be possible for you to re-deploy the
> undercloud with the default IP addresses in undercloud.conf and let us know
> the result?
> I ran into something similar recently.
> Thanks.
>
> Best regards,
> Sasha Chuzhoy.
>
> ----- Original Message -----
> > From: "Sasha Chuzhoy"
> > To: "Esra Celik"
> > Cc: rdo-list at redhat.com
> > Sent: Tuesday, October 20, 2015 11:30:23 AM
> > Subject: Re: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found"
> >
> > Hi Esra,
> > since the introspection fails continuously in addition to the deployment,
> > I start wondering if everything is connected right.
> > Could you please describe (and double check) how your nodes are
> > interconnected in the setup, i.e. what nics are connected and is there any
> > additional configuration on the switch ports.
> > Thanks.
> >
> > Best regards,
> > Sasha Chuzhoy.
> >
> > ----- Original Message -----
> > > From: "Marius Cornea"
> > > To: "Esra Celik"
> > > Cc: "Sasha Chuzhoy", rdo-list at redhat.com
> > > Sent: Tuesday, October 20, 2015 6:02:51 AM
> > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Hi,
> > >
> > > From what I can tell from the screenshots DHCP fails for both of the nics
> > > after loading the inspector image, thus the nodes have no ip address and
> > > the Network is unreachable message. Can you see any DHCP messages (output
> > > of dhclient) on the console?
> > > You could try leaving the nodes connected *only* to the provisioning
> > > network and rerun introspection.
> Thanks,
> Marius

----- Original Message -----
> From: "Esra Celik"
> To: "Sasha Chuzhoy"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Tuesday, October 20, 2015 7:31:41 AM
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Ok, I ran ironic node-set-provision-state [UUID] provide for each node and
> retried deployment. I attached the screenshots.
>
> [stack at undercloud ~]$ ironic node-list
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power off   | available          | False       |
> | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power off   | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
> [stack at undercloud ~]$ nova flavor-list
> +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
> +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
> | b9428c86-5696-4d68-a0e0-77faf4e7f627 | baremetal | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
> +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
>
> [stack at undercloud ~]$ openstack overcloud deploy --templates
> Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> Stack failed with status: resources.Controller: resources[0]:
> ResourceInError: resources.Controller: Went to status ERROR due to
> "Message: No valid host was found. There are not enough hosts available.,
> Code: 500"
> Heat Stack update failed.
>
> [stack at undercloud ~]$ sudo systemctl|grep ironic
> openstack-ironic-api.service loaded active running OpenStack Ironic API service
> openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
>
> "journalctl -fl -u openstack-ironic-conductor.service" gives no warning or
> error.
>
> Regards
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
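Sasha's step 1 can also be scripted rather than run UUID by UUID. A sketch
only; parsing CLI tables like this is brittle, so double-check which column
holds the UUID before trusting it:

    # Move every registered node back to the "available" state
    for uuid in $(ironic node-list | grep -E '^\| [0-9a-f]' | awk '{print $2}'); do
        ironic node-set-provision-state "$uuid" provide
    done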
----- Original Message -----
> From: "Sasha Chuzhoy"
> To: "Esra Celik"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Monday, October 19, 2015 19:13:24
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Could you please
> 1. run: 'ironic node-set-provision-state [UUID] provide' for each node,
>    where UUID is replaced with the actual UUID of the node (ironic node-list).
> 2. retry the deployment
> Thanks.
>
> Best regards,
> Sasha Chuzhoy.

----- Original Message -----
> From: "Esra Celik"
> To: "Sasha Chuzhoy"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Monday, October 19, 2015 11:39:46 AM
> Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Hi Sasha,
>
> This is my instackenv.json. The MAC addresses are the em2 interfaces' MAC
> addresses of the nodes:
>
> {
>   "nodes": [
>     {
>       "pm_type":"pxe_ipmitool",
>       "mac":[ "08:9E:01:58:CC:A1" ],
>       "cpu":"4",
>       "memory":"8192",
>       "disk":"10",
>       "arch":"x86_64",
>       "pm_user":"root",
>       "pm_password":"",
>       "pm_addr":"192.168.0.18"
>     },
>     {
>       "pm_type":"pxe_ipmitool",
>       "mac":[ "08:9E:01:58:D0:3D" ],
>       "cpu":"4",
>       "memory":"8192",
>       "disk":"100",
>       "arch":"x86_64",
>       "pm_user":"root",
>       "pm_password":"",
>       "pm_addr":"192.168.0.19"
>     }
>   ]
> }
>
> This is my undercloud.conf file:
>
> image_path = .
> local_ip = 192.0.2.1/24
> local_interface = em2
> masquerade_network = 192.0.2.0/24
> dhcp_start = 192.0.2.5
> dhcp_end = 192.0.2.24
> network_cidr = 192.0.2.0/24
> network_gateway = 192.0.2.1
> inspection_interface = br-ctlplane
> inspection_iprange = 192.0.2.100,192.0.2.120
> inspection_runbench = false
> undercloud_debug = true
> enable_tuskar = false
> enable_tempest = false
>
> I have previously sent the screenshots of the consoles during the
> introspection stage. Now I am attaching them again.
> I cannot login to the consoles because the introspection stage did not
> complete successfully and I don't know the IP addresses (nova list is
> empty). (I don't know if I can login with the IP addresses that I
> previously set myself. I am not able to reach the nodes now, from home.)
>
> I ran the flavor-create command after the introspection stage. But since
> introspection did not complete successfully, I just ran the deploy command
> to see if nova list fills during deployment.
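Hand-edited JSON is easy to break in ways that only show up later, so it can
be worth validating instackenv.json mechanically before registering the
nodes. A minimal check, assuming only the stock Python on the undercloud:

    # Fails loudly on any JSON syntax error; pretty-prints the file on success
    python -m json.tool instackenv.json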
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr

----- Sasha Chuzhoy wrote: -----
> Esra,
> Is it possible to check the console of the nodes being introspected and/or
> deployed? I wonder if the instackenv.json file is accurate.
> Also, what's the output from 'nova flavor-list'?
> Thanks.
>
> Best regards,
> Sasha Chuzhoy.
----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Sasha Chuzhoy", rdo-list at redhat.com
> Sent: Monday, October 19, 2015 9:51:51 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> All 3 baremetal nodes (1 undercloud, 2 overcloud) have 2 nics.
> The undercloud machine's ip config is as follows:
>
> [stack at undercloud ~]$ ip addr
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: em1: mtu 1500 qdisc mq state UP qlen 1000
>     link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
>     inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
>        valid_lft forever preferred_lft forever
>     inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
>        valid_lft forever preferred_lft forever
> 3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
>     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
> 4: ovs-system: mtu 1500 qdisc noop state DOWN
>     link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
> 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
>     link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
>     inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
>        valid_lft forever preferred_lft forever
>     inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
>        valid_lft forever preferred_lft forever
> 6: br-int: mtu 1500 qdisc noop state DOWN
>     link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
>
> I am using em2 for pxe boot on the other machines, so I configured
> instackenv.json to have em2's MAC address. For overcloud nodes, em1 was
> configured to have a 10.1.34.x ip, but after image deploy I am not sure
> what happened to that nic.
>
> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr

----- Original Message -----
> From: "Marius Cornea"
> To: "Esra Celik"
> Cc: "Sasha Chuzhoy", rdo-list at redhat.com
> Sent: Monday, October 19, 2015 15:36:58
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Hi,
>
> I believe the nodes were stuck in introspection so they were not ready for
> deployment, thus the not enough hosts message. Can you describe the
> networking setup (how many nics the nodes have and to what networks
> they're connected)?
>
> Thanks,
> Marius
----- Original Message -----
> From: "Esra Celik"
> To: "Sasha Chuzhoy"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Monday, October 19, 2015 12:34:32 PM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Hi again,
>
> "nova list" was empty after the introspection stage, which was not
> completed successfully. So I could not ssh the nodes.. Is there another
> way to obtain the IP addresses?
>
> [stack at undercloud ~]$ sudo systemctl|grep ironic
> openstack-ironic-api.service loaded active running OpenStack Ironic API service
> openstack-ironic-conductor.service loaded active running OpenStack Ironic Conductor service
> openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
> openstack-ironic-inspector.service loaded active running Hardware introspection service for OpenStack Ironic
>
> If I start deployment anyway I get 2 nodes in ERROR state:
>
> [stack at undercloud ~]$ openstack overcloud deploy --templates
> Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> Stack failed with status: resources.Controller: resources[0]:
> ResourceInError: resources.Controller: Went to status ERROR due to
> "Message: No valid host was found. There are not enough hosts available.,
> Code: 500"
>
> [stack at undercloud ~]$ nova list
> +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> | ID                                   | Name                    | Status | Task State | Power State | Networks |
> +--------------------------------------+-------------------------+--------+------------+-------------+----------+
> | 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
> | 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
> +--------------------------------------+-------------------------+--------+------------+-------------+----------+
>
> Did the repositories update during the weekend? Should I better restart
> the overall Undercloud and Overcloud installation from the beginning?
>
> Thanks.
>
> Esra ÇELİK
> Uzman Araştırmacı
> Bilişim Teknolojileri Enstitüsü
> TÜBİTAK BİLGEM
> 41470 GEBZE - KOCAELİ
> T +90 262 675 3140
> F +90 262 646 3187
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
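When overcloud instances land in ERROR like this, nova records the
scheduler's reason in the instance's fault field. A sketch, using the
controller's ID from the nova list output above:

    # Show the fault message recorded for the failed instance
    nova show 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | grep -A 2 fault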
----- Original Message -----
> From: "Sasha Chuzhoy"
> To: "Esra Celik"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Friday, October 16, 2015 18:44:49
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Hi Esra,
> if the undercloud nodes are UP - you can login with: ssh heat-admin@
> You can see the IP of the nodes with: "nova list".
> BTW, what do you see if you run "sudo systemctl|grep ironic" on the
> undercloud?
>
> Best regards,
> Sasha Chuzhoy.
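In practice the two commands combine like this (a sketch; the address is
hypothetical, picked from the 192.0.2.0/24 ctlplane range used in this
thread):

    # Find the ctlplane addresses of the deployed nodes, then log in
    nova list
    ssh heat-admin@192.0.2.8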
----- Original Message -----
> From: "Esra Celik"
> To: "Sasha Chuzhoy"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Friday, October 16, 2015 1:40:16 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Hi Sasha,
>
> I have 3 nodes: 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute.
>
> This is my undercloud.conf file:
> [same undercloud.conf contents as quoted earlier in this thread]
>
> IP configuration for the Undercloud is as follows:
>
> [stack at undercloud ~]$ ip addr
> [same ip addr output as quoted earlier in this thread]
>
> And I attached two screenshots showing the boot stage for the overcloud
> nodes. Is there a way to login to the overcloud nodes to see their IP
> configuration?
>
> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr

----- Original Message -----
> From: "Sasha Chuzhoy"
> To: "Esra Celik"
> Cc: "Marius Cornea", rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 16:58:41
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Just my 2 cents.
> Did you make sure that all the registered nodes are configured to boot off
> the right NIC first?
> Can you watch the console and see what happens on the problematic nodes
> upon boot?
>
> Best regards,
> Sasha Chuzhoy.
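Where the BMCs support it, the first boot device can also be forced to PXE
over IPMI instead of through the BIOS menu. A sketch, reusing the
credentials from elsewhere in the thread (password omitted here, as in the
original mails):

    # Ask the BMC to PXE-boot on the next power cycle
    ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis bootdev pxe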
----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 4:40:46 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Sorry for the late reply.
>
> ironic node-show results are below. I have my nodes powered on after
> introspection bulk start, and I get the following warning:
>
> Introspection didn't finish for nodes
> 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
>
> Doesn't seem to be the same issue as
> https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
>
> [stack at undercloud ~]$ ironic node-list
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
> | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
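If the installed python-ironic-inspector-client provides it (an assumption;
the subcommand set varied across releases), introspection can also be
queried per node rather than waiting on the bulk command's summary:

    # Check whether introspection finished, and with what error, for one node
    openstack baremetal introspection status 5b28998f-4dc8-42aa-8a51-521e20b1e5ed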
> [stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> +------------------------+-------------------------------------------------------------------------+
> | Property               | Value                                                                   |
> +------------------------+-------------------------------------------------------------------------+
> | target_power_state     | None                                                                    |
> | extra                  | {}                                                                      |
> | last_error             | None                                                                    |
> | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> | maintenance_reason     | None                                                                    |
> | provision_state        | available                                                               |
> | clean_step             | {}                                                                      |
> | uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed                                    |
> | console_enabled        | False                                                                   |
> | target_provision_state | None                                                                    |
> | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> | maintenance            | False                                                                   |
> | inspection_started_at  | None                                                                    |
> | inspection_finished_at | None                                                                    |
> | power_state            | power on                                                                |
> | driver                 | pxe_ipmitool                                                            |
> | reservation            | None                                                                    |
> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10',     |
> |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> | instance_uuid          | None                                                                    |
> | name                   | None                                                                    |
> | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18',         |
> |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
> |                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
> | created_at             | 2015-10-15T07:49:08+00:00                                               |
> | driver_internal_info   | {u'clean_steps': None}                                                  |
> | chassis_uuid           |                                                                         |
> | instance_info          | {}                                                                      |
> +------------------------+-------------------------------------------------------------------------+
>
> [stack at undercloud ~]$ ironic node-show
> 6f35ac24-135d-4b99-8a24-fa2b731bd218
> +------------------------+-------------------------------------------------------------------------+
> | Property               | Value                                                                   |
> +------------------------+-------------------------------------------------------------------------+
> | target_power_state     | None                                                                    |
> | extra                  | {}                                                                      |
> | last_error             | None                                                                    |
> | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> | maintenance_reason     | None                                                                    |
> | provision_state        | available                                                               |
> | clean_step             | {}                                                                      |
> | uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218                                    |
> | console_enabled        | False                                                                   |
> | target_provision_state | None                                                                    |
> | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> | maintenance            | False                                                                   |
> | inspection_started_at  | None                                                                    |
> | inspection_finished_at | None                                                                    |
> | power_state            | power on                                                                |
> | driver                 | pxe_ipmitool                                                            |
> | reservation            | None                                                                    |
> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100',    |
> |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> | instance_uuid          | None                                                                    |
> | name                   | None                                                                    |
> | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19',         |
> |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
> |                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
> | created_at             | 2015-10-15T07:49:08+00:00                                               |
> | driver_internal_info   | {u'clean_steps': None}                                                  |
> | chassis_uuid           |                                                                         |
> | instance_info          | {}                                                                      |
> +------------------------+-------------------------------------------------------------------------+
> [stack at undercloud ~]$
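One detail worth cross-checking whenever "No valid host was found" appears:
the baremetal flavor quoted earlier in this thread asks for 40 GB of disk,
while the first node above advertises local_gb 10. A sketch for comparing
the two sides:

    # Compare what the flavor demands with what each node advertises
    nova flavor-show baremetal | grep -E '(disk|ram|vcpus)'
    ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | grep -A 1 properties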
> > > > > > > > > > > > > > > > > > I > > > > > > don't think I > > > > > > > am > > > > > > > doing > > > > > > > > > > > > > something > > > > > > other than > > > > > > > > > > > > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty > > > > > > > > > > > > > > > > > > > doc > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 1 vi > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > instackenv.json > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 2 sudo yum -y install epel-release > > > > > > > 3 sudo > > > > > > > > > > > curl > > > > > > > > > > > -o > > > > > > /etc/yum.repos.d/delorean.repo > > > > > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo > > > > > > > > > > > > > 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > /etc/yum.repos.d/delorean-current.repo > > > > > > > 6 sudo > > > > > > /bin/bash > > > > > > -c > > > > > > "cat > > > > > > > <>/etc/yum.repos.d/delorean-current.repo > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules > > > > > > > > > > > > > EOF" > > > > > > > 7 sudo curl -o > > > > > > /etc/yum.repos.d/delorean-deps.repo > > > > > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 8 sudo yum -y install yum-plugin-priorities > > > > > > > 9 sudo > > > > > > yum > > > > > > install > > > > > > -y python-tripleoclient > > > > > > > 10 cp > > > > > > /usr/share/instack-undercloud/undercloud.conf.sample > > > > > > > > > > > > > ~/undercloud.conf > > > > > > > 11 vi undercloud.conf > > > > > > > > > > > > > 12 > > > > > > export DIB_INSTALLTYPE_puppet_modules=source > > > > > > > 13 > > > > > > openstack > > > > > > undercloud install > > > > > > > 14 source stackrc > > > > > > > 15 > > > > > > export > > > > > > NODE_DIST=centos7 > > > > > > > 16 export > > > > > > DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo > > > > > > > > > > > > > /etc/yum.repos.d/delorean-deps.repo" > > > > > > > 17 export > > > > > > DIB_INSTALLTYPE_puppet_modules=source > > > > > > > 18 openstack > > > > > > overcloud > > > > > > image build --all > > > > > > > 19 ls > > > > > > > 20 openstack > > > > > > overcloud > > > > > > image upload > > > > > > > 21 openstack baremetal import --json > > > > > > instackenv.json > > > > > > > 22 openstack baremetal configure boot > > > > > > > > > > > > > > > > > > 
>    23  ironic node-list
>    24  openstack baremetal introspection bulk start
>    25  ironic node-list
>    26  ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
>    27  ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
>    28  history
>
> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr

----- Original Message -----
> From: "Marius Cornea"
> To: "Esra Celik"
> Cc: "Ignacio Bravo", rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 19:40:07
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Can you do ironic node-show for your ironic nodes and post the results?
> Also check the following suggestion if you're experiencing the same issue:
> https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html

----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo", rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 3:22:20 PM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Well in the early stage of the introspection I can see the Client IP of
> the nodes (screenshot attached).
> But then I see continuous ironic-python-agent errors (screenshot-2
> attached). The errors repeat after a time out.. And the nodes are not
> powered off.
>
> Seems like I am stuck in the introspection stage..
>
> I can use the ipmitool command to successfully power the nodes on/off:
>
> [stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P  power status
> Chassis Power is on
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> Chassis Power is on
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power off
> Chassis Power Control: Down/Off
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> Chassis Power is off
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power on
> Chassis Power Control: Up/On
> [stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P  chassis power status
> Chassis Power is on
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
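The same check can be run against every BMC registered in instackenv.json in
one go. A sketch; it assumes jq is installed and that IPMI_PASSWORD is
exported with the real password, since the passwords are blanked in the file
quoted above:

    # Probe each BMC listed in instackenv.json
    jq -r '.nodes[] | "\(.pm_addr) \(.pm_user)"' instackenv.json |
    while read addr user; do
        echo "== $addr =="
        ipmitool -I lanplus -H "$addr" -U "$user" -P "$IPMI_PASSWORD" chassis power status
    done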
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr

----- Original Message -----
> From: "Marius Cornea"
> To: "Esra Celik"
> Cc: "Ignacio Bravo", rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 14:59:30
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> > Well today I started with re-installing the OS and nothing seems wrong
> > with the undercloud installation, then;
> >
> > I see an error during image build:
> >
> > [stack at undercloud ~]$ openstack overcloud image build --all
> > ...
> > a lot of log
> > ...
> > ++ cat /etc/dib_dracut_drivers
> > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk
> > ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline
> > 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include
> > /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64
> > --add-drivers ' virtio virtio_net virtio_blk target_core_mod
> > iscsi_target_mod target_core_iblock target_core_file target_core_pscsi
> > configfs' -o 'dash plymouth' /tmp/ramdisk
> > cat: write error: Broken pipe
> > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > + chmod o+r /tmp/kernel
> > + trap EXIT
> > + target_tag=99-build-dracut-ramdisk
> > + date +%s.%N
> > + output '99-build-dracut-ramdisk completed'
> > ...
> > a lot of log
> > ...
>
> You can ignore that afaik, if you end up having all the required images it
> should be ok.
>
> > Then, during the introspection stage I see ironic-python-agent errors on
> > the nodes (screenshot attached) and the following warnings
> That looks odd. Is it showing up in the early stage of the introspection?
> At some point it should receive an address by DHCP and the Network is
> unreachable error should disappear. Does the introspection complete and
> the nodes are turned off?
>
> > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > Option "http_url" from group "pxe" is deprecated. Use option "http_url"
> > from group "deploy".
> > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > Option "http_root" from group "pxe" is deprecated. Use option "http_root"
> > from group "deploy".
>
> > Before deployment, ironic node-list:
>
> This is odd too as I'm expecting the nodes to be powered off before
> running deployment.
> > [stack at undercloud ~]$ ironic node-list
> > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
> > During deployment I get the following errors:
>
> > [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting
> > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f
> > /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553.
> > Error: Unexpected error while running command.
> > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed
> > for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected
> > error while running command.
> > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740
> > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could
> > not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553,
> > attempt 1 of 3. Error: IPMI call failed: power status..
>
> This looks like an ipmi error, can you try to manually run commands using
> the ipmitool and see if you get any success? It's also worth filing a bug
> with details such as the ipmitool version, server model, drac firmware
> version.
>
> > Thanks a lot
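For the bug report Marius suggests, the version details are quick to collect
(a sketch; -v just makes ipmitool print the protocol exchange, and the
password placeholder is left out as in the original mails):

    # Gather the details worth attaching to a bug report
    rpm -q ipmitool
    ipmitool -V
    ipmitool -I lanplus -H 192.168.0.19 -U root -P <password> -v chassis power status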
> Thanks a lot

----- Original Message -----
From: "Marius Cornea"
To: "Esra Celik"
Cc: "Ignacio Bravo", rdo-list at redhat.com
Sent: Tuesday, October 13, 2015 21:16:14
Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo", rdo-list at redhat.com
> Sent: Tuesday, October 13, 2015 5:02:09 PM
> Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> During deployment they are powering on and deploying the images. I see a
> lot of connection error messages about ironic-python-agent but ignore them
> as mentioned here
> (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)

That was referring to the introspection stage. From what I can tell you are
experiencing issues during deployment, as it fails to provision the nova
instances. Can you check whether the nodes get powered on during that stage?

Make sure that before overcloud deploy the ironic nodes are available for
provisioning (run ironic node-list and check the Provisioning State column).
Also check that you didn't miss any step in the docs in regard to kernel and
ramdisk assignment, introspection, and flavor creation (so it matches the
nodes' resources; see the sketch after this exchange):
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html

> In the instackenv.json file I do not need to add the undercloud node, or do I?

No, the nodes' details should be enough.

> And which log files should I watch during deployment?

You can check the openstack-ironic-conductor logs (journalctl -fl -u
openstack-ironic-conductor.service) and the logs in /var/log/nova.
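On the flavor point above, a minimal sketch of a flavor matching the smaller of the two nodes registered later in this thread (4 CPUs, 8192 MB RAM, 10 GB disk; the flavor name and the 1 GB of headroom left for the deploy image are assumptions, not requirements):

    openstack flavor create --id auto --ram 8192 --disk 9 --vcpus 4 baremetal
    openstack flavor set --property "cpu_arch"="x86_64" baremetal

If the flavor asks for more RAM, disk, or VCPUs than any available ironic node advertises, nova-scheduler answers with exactly the "No valid host was found" error this thread is about.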
> Thanks
> Esra

----- Original Message -----
From: Marius Cornea
To: Esra Celik
Cc: Ignacio Bravo, rdo-list at redhat.com
Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

----- Original Message -----
> From: "Esra Celik"
> To: "Ignacio Bravo"
> Cc: rdo-list at redhat.com
> Sent: Tuesday, October 13, 2015 3:47:57 PM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Actually I re-installed the OS for the undercloud before deploying.
> However, I did not re-install the OS on the Compute and Controller nodes.
> I will reinstall the basic OS for them too, and retry.

You don't need to reinstall the OS on the controller and compute, they will
get the image served by the undercloud.
I'd recommend that during deployment you watch the servers' consoles and
make sure they get powered on, PXE boot, and actually get the image deployed.

> Thanks
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr

----- Original Message -----
From: "Ignacio Bravo"
To: "Esra Celik"
Cc: rdo-list at redhat.com
Sent: Tuesday, October 13, 2015 16:36:06
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Esra,

I encountered the same problem after deleting the stack and re-deploying.

It turns out that 'heat stack-delete overcloud' does remove the nodes from
'nova list', and one would assume that the baremetal servers are now ready
to be used for the next stack, but when redeploying, I get the same message
of not enough hosts available.

You can look into the nova logs and it mentions something about 'node xxx is
already associated with UUID yyyy' and 'I tried 3 times and I'm giving up'.
The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.

I'm now redeploying the basic OS to start from scratch again.

IB

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com
Office: (703) 951-7760
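For the stuck state described here, a sketch of the cleanup that is usually suggested (assuming the node is otherwise healthy; re-registering the nodes remains the safe fallback) is to clear the stale association by hand:

    ironic node-list                          # a free node should show an empty Instance UUID
    ironic node-update <node-uuid> remove instance_uuid

If ironic refuses because of the node's current state, putting the node into maintenance first (ironic node-set-maintenance <node-uuid> on, then off again afterwards) may let the update through.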
On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra at tubitak.gov.tr > wrote:

> Hi all,
>
> OverCloud deploy fails with error "No valid host was found"
>
> [stack at undercloud ~]$ openstack overcloud deploy --templates
> Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> Stack failed with status: Resource CREATE failed: resources.Compute:
> ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR
> due to "Message: No valid host was found. There are not enough hosts
> available., Code: 500"
> Heat Stack create failed.
>
> Here are some logs:
>
> Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
>
> | resource_name | physical_resource_id                 | resource_type           | resource_status    | updated_time        | stack_name                                       |
> |---------------+--------------------------------------+-------------------------+--------------------+---------------------+--------------------------------------------------|
> | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute    | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server        | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server        | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
>
> [stack at undercloud ~]$ heat resource-show overcloud Compute
>
> | Property               | Value                                |
> |------------------------+--------------------------------------|
> | attributes             | { "attributes": null, "refs": null } |
> | creation_time          | 2015-10-13T10:20:36                  |
> | description            |                                      |
> | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> | logical_resource_id    | Compute                              |
> | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393 |
> | required_by            | ComputeAllNodesDeployment            |
> |                        | ComputeNodesPostDeployment           |
> |                        | ComputeCephDeployment                |
> |                        | ComputeAllNodesValidationDeployment  |
> |                        | AllNodesExtraConfig                  |
> |                        | allNodesConfig                       |
> | resource_name          | Compute                              |
> | resource_status        | CREATE_FAILED                        |
> | resource_status_reason | resources.Compute: ResourceInError:  |
> |                        | resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" |
> | resource_type          | OS::Heat::ResourceGroup              |
> | updated_time           | 2015-10-13T10:20:36                  |
>
> This is my instackenv.json for 1 compute and 1 control node to be deployed.
>
> {
>   "nodes": [
>     {
>       "pm_type": "pxe_ipmitool",
>       "mac": [ "08:9E:01:58:CC:A1" ],
>       "cpu": "4",
>       "memory": "8192",
>       "disk": "10",
>       "arch": "x86_64",
>       "pm_user": "root",
>       "pm_password": "calvin",
>       "pm_addr": "192.168.0.18"
>     },
>     {
>       "pm_type": "pxe_ipmitool",
>       "mac": [ "08:9E:01:58:D0:3D" ],
>       "cpu": "4",
>       "memory": "8192",
>       "disk": "100",
>       "arch": "x86_64",
>       "pm_user": "root",
>       "pm_password": "calvin",
>       "pm_addr": "192.168.0.19"
>     }
>   ]
> }
>
> Any ideas?
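One quick way to see what the scheduler had to work with in a "No valid host" failure like the one above (a sketch; "baremetal" stands in for whatever flavor the deploy requests) is to compare nova's aggregate view against that flavor:

    nova hypervisor-stats
    nova flavor-show baremetal

If hypervisor-stats reports zero vcpus/memory/disk, the ironic nodes never made it into nova's resource pool (registration or introspection went wrong); if the totals are non-zero but smaller than the flavor, the flavor itself is the blocker.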
> Thanks in advance
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mustafa.celik at tubitak.gov.tr  Wed Oct 21 07:24:43 2015
From: mustafa.celik at tubitak.gov.tr (Mustafa ÇELİK (BİLGEM-BTE))
Date: Wed, 21 Oct 2015 10:24:43 +0300 (EEST)
Subject: [Rdo-list] Undercloud UI
In-Reply-To: 
References: <312239508.320257.1445347438841.JavaMail.zimbra@tubitak.gov.tr>
	<1522468676.320587.1445348489875.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <1648064419.323036.1445412283114.JavaMail.zimbra@tubitak.gov.tr>

Ana,

We want to contribute to your rdo-director-ui project.

* Is the Git/GerritHub documentation enough for the installation and contribution steps, or is there any other document you can share with us?
* Which IDE do you use for development?
* Is there anything else that we should know - any suggestion, document, tutorial, whatever?

Thanks...
Mustafa

----- Original Message -----
From: "Ana Krivokapic"
To: "Mustafa ÇELİK (BİLGEM-BTE)"
Cc: rdo-list at redhat.com
Sent: Tuesday, October 20, 2015 21:04:37
Subject: Re: [Rdo-list] Undercloud UI

Hi Mustafa,

Yes we have one! :)

The code is located on GitHub [1] and you can contribute by submitting patches to GerritHub [2]. The README at [1] contains info on how to get the installation up and running, as well as the contribution process. Please note, though, that this is a very new project and is still very much a work in progress. Let us know if you have any further questions.

[1] https://github.com/rdo-management/rdo-director-ui
[2] https://review.gerrithub.io/#/q/project:rdo-management/rdo-director-ui

Kind Regards,
Ana Krivokapic

On Tue, Oct 20, 2015 at 3:41 PM, Mustafa ÇELİK (BİLGEM-BTE) < mustafa.celik at tubitak.gov.tr > wrote:

Hi Everyone,

Is there any UI for undercloud installation, or any project in progress? If we implement one, how do we contribute it?

Thanks,

Mustafa ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
mustafa.celik at tubitak.gov.tr

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From apevec at gmail.com  Wed Oct 21 07:57:27 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 21 Oct 2015 09:57:27 +0200
Subject: [Rdo-list] Failed to deploy overcloud with network isolation on BM.
In-Reply-To: <1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com>
References: <1421541564.61524022.1445391447568.JavaMail.zimbra@redhat.com>
	<1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com>
Message-ID: 

2015-10-21 3:44 GMT+02:00 Sasha Chuzhoy :
> While I fail to install the overcloud using the latest bits[1].
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680

Does non-HA work?
I see there's an RPC timeout; is rabbitmq up and running?

Cheers,
Alan
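For anyone chasing the same RPC timeouts, a minimal sanity check on a controller (a sketch, assuming an HA overcloud where rabbitmq is managed by pacemaker; the address is illustrative):

    ssh heat-admin@<controller-ip>
    sudo rabbitmqctl cluster_status
    sudo pcs status

A stopped or partitioned rabbit cluster at this point is the usual source of the timeouts being asked about here.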
From ihrachys at redhat.com  Wed Oct 21 08:48:14 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 21 Oct 2015 10:48:14 +0200
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: 
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
	<561FA38F.6050508@redhat.com>
	<1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
Message-ID: 

> On 15 Oct 2015, at 20:31, Matt Kassawara wrote:
>
> 4) Packages only reference upstream configuration files in standard
> locations (e.g., /etc/keystone).

Not sure what exactly that means. RDO packages have been using a
neutron-dist.conf that contains RDO-specific default configuration, located
under /usr/share/, for quite a long time.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From ihrachys at redhat.com  Wed Oct 21 08:50:59 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 21 Oct 2015 10:50:59 +0200
Subject: [Rdo-list] Failed to deploy overcloud with network isolation on BM.
In-Reply-To: 
References: <1421541564.61524022.1445391447568.JavaMail.zimbra@redhat.com>
	<1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com>
Message-ID: 

May I ask all bug reporters to attach logs and config files to bugs? It's so
often the case that the logs cited are not enough to understand what's going
on, and there is no indication of which configuration the components were
using.

Can we please take this additional step to make bug fixing more effective?

Ihar

> On 21 Oct 2015, at 09:57, Alan Pevec wrote:
>
> 2015-10-21 3:44 GMT+02:00 Sasha Chuzhoy :
>> While I fail to install the overcloud using the latest bits[1].
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680
>
> Does non-HA work?
> I see there's an RPC timeout; is rabbitmq up and running?
>
> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From mcornea at redhat.com  Wed Oct 21 09:39:18 2015
From: mcornea at redhat.com (Marius Cornea)
Date: Wed, 21 Oct 2015 05:39:18 -0400 (EDT)
Subject: [Rdo-list] Failed to deploy overcloud with network isolation on BM.
In-Reply-To: 
References: <1421541564.61524022.1445391447568.JavaMail.zimbra@redhat.com>
	<1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com>
Message-ID: <1395841671.45650012.1445420358417.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Ihar Hrachyshka"
> To: "Alan Pevec"
> Cc: "Rdo-list at redhat.com"
> Sent: Wednesday, October 21, 2015 10:50:59 AM
> Subject: Re: [Rdo-list] Failed to deploy overcloud with network isolation on BM.
>
> May I ask all bug reporters to attach logs and config files to bugs? It's so
> often the case that the logs cited are not enough to understand what's going
> on, and there is no indication of which configuration the components were
> using.

Just to bring some context on this: with RDO Manager deployments, pulling
the logs is not an easy task.

1. We cannot straightforwardly scp the log files. You can SSH to the
overcloud nodes using the heat-admin user and the private key of the
undercloud stack user, but the heat-admin user doesn't have permission to
access the log/config files, so you need to use sudo to make them accessible
or enable root login, then scp them to the undercloud node and use that to
scp them to your machine and upload to BZ (see the sketch after this list).
If there's an easier way to pull the logs from the overcloud nodes I'd be
more than happy to use it.

2. A failed deployment doesn't point to the failed component, so when
something goes wrong you need to identify it, which is not always trivial.
The deploy command (with its arguments and environment files) should best
describe which components are being used and configured.
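A sketch of the dance described in point 1 (the controller address and the chosen paths are examples; run it from the undercloud as the stack user, whose key is already authorized for heat-admin):

    ssh -t heat-admin@192.0.2.8 'sudo tar czf /tmp/logs.tgz /var/log /etc/neutron'
    scp heat-admin@192.0.2.8:/tmp/logs.tgz .

The -t is only needed if sudo on the image insists on a tty.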
> Can we please take this additional step to make bug fixing more effective?
>
> Ihar
>
>> On 21 Oct 2015, at 09:57, Alan Pevec wrote:
>>
>> 2015-10-21 3:44 GMT+02:00 Sasha Chuzhoy :
>>> While I fail to install the overcloud using the latest bits[1].
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680
>>
>> Does non-HA work?
>> I see there's an RPC timeout; is rabbitmq up and running?
>>
>> Cheers,
>> Alan

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From apevec at gmail.com  Wed Oct 21 10:02:29 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 21 Oct 2015 12:02:29 +0200
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: 
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
	<561FA38F.6050508@redhat.com>
	<1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
Message-ID: 

2015-10-21 10:48 GMT+02:00 Ihar Hrachyshka :
>> On 15 Oct 2015, at 20:31, Matt Kassawara wrote:
>>
>> 4) Packages only reference upstream configuration files in standard
>> locations (e.g., /etc/keystone).
>
> Not sure what exactly that means. RDO packages have been using a
> neutron-dist.conf that contains RDO-specific default configuration, located
> under /usr/share/, for quite a long time.

Yes, it's about the dist.conf files, which are our unique solution for
providing distro-specific default values. I'm not sure how other distros
solve this, if at all; they probably either rely on upstream defaults or on
their configuration management tools. The thing is that upstream defaults
cannot fit all distributions, so I would expect all distros to pick up our
dist.conf solution, but we probably have not been explaining and advertising
it enough, hence the confusion in the upstream docs.

Cheers,
Alan

From ihrachys at redhat.com  Wed Oct 21 10:19:51 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 21 Oct 2015 12:19:51 +0200
Subject: [Rdo-list] Failed to deploy overcloud with network isolation on BM.
In-Reply-To: <1395841671.45650012.1445420358417.JavaMail.zimbra@redhat.com>
References: <1421541564.61524022.1445391447568.JavaMail.zimbra@redhat.com>
	<1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com>
	<1395841671.45650012.1445420358417.JavaMail.zimbra@redhat.com>
Message-ID: 

> On 21 Oct 2015, at 11:39, Marius Cornea wrote:
>
> Just to bring some context on this: with RDO Manager deployments, pulling
> the logs is not an easy task.
>
> 1. We cannot straightforwardly scp the log files. You can SSH to the
> overcloud nodes using the heat-admin user and the private key of the
> undercloud stack user, but the heat-admin user doesn't have permission to
> access the log/config files, so you need to use sudo to make them
> accessible or enable root login, then scp them to the undercloud node and
> use that to scp them to your machine and upload to BZ.
> If there's an easier way to pull the logs from the overcloud nodes I'd be
> more than happy to use it.

It sounds like a huge usability issue. How are deployers supposed to debug
issues with their cloud if they can't easily access logs?

> 2.
> A failed deployment doesn't point to the failed component, so when
> something goes wrong you need to identify it, which is not always trivial.
> The deploy command (with its arguments and environment files) should best
> describe which components are being used and configured.

Sure. But I see neutron failures mentioned in the bug, so I would assume
that neutron logs can reveal the issue.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From ihrachys at redhat.com  Wed Oct 21 10:22:21 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 21 Oct 2015 12:22:21 +0200
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: 
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
	<561FA38F.6050508@redhat.com>
	<1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
Message-ID: <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com>

> On 21 Oct 2015, at 12:02, Alan Pevec wrote:
>
> Yes, it's about the dist.conf files, which are our unique solution for
> providing distro-specific default values. I'm not sure how other distros
> solve this, if at all; they probably either rely on upstream defaults or
> on their configuration management tools.

I suspect other distros may just modify /etc/neutron/neutron.conf as they
see fit. It's obviously not the cleanest solution. I believe enforcing a
specific way to configure services upon distributions is not the job of
upstream, as long as the default upstream way (modifying the upstream
configuration files located in /etc/<project>/*.conf) works.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 
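To make the dist.conf mechanism discussed above concrete: the RDO service units pass the distro defaults file before the operator-owned files, and oslo.config gives the later files precedence. An illustrative excerpt (the exact option list varies between releases):

    # /usr/lib/systemd/system/neutron-server.service (excerpt)
    ExecStart=/usr/bin/neutron-server \
        --config-file /usr/share/neutron/neutron-dist.conf \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugin.ini \
        --log-file /var/log/neutron/server.log

Anything set in /etc/neutron/neutron.conf therefore still overrides the dist defaults, which is why docs that only touch /etc/neutron keep working on RDO.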
From pgsousa at gmail.com  Wed Oct 21 10:39:47 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Wed, 21 Oct 2015 11:39:47 +0100
Subject: [Rdo-list] HA with network isolation on virt howto
In-Reply-To: 
References: 
Message-ID: 

Hi Marius,

I've followed your howto and managed to get the overcloud deployed in HA,
thanks. However, I cannot log in to it (via CLI or Horizon):

ERROR (Unauthorized): The request you have made requires authentication.
(HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1)

So I rebooted the controllers and now I cannot log in through the
Provisioning network either; it seems to be some openvswitch bridge conf
problem. Here's my conf:

# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: mtu 1500 qdisc mq state UP qlen 1000
    link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
       valid_lft 84562sec preferred_lft 84562sec
    inet6 fe80::7ea2:3eff:fefb:2555/64 scope link
       valid_lft forever preferred_lft forever
3: enp1s0f1: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: mtu 1500 qdisc noop state DOWN
    link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff
5: br-tun: mtu 1500 qdisc noop state DOWN
    link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff
6: vlan20: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::e479:56ff:fe5d:7f2/64 scope link
       valid_lft forever preferred_lft forever
7: vlan40: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40
       valid_lft forever preferred_lft forever
    inet6 fe80::e843:69ff:fec3:bfa2/64 scope link
       valid_lft forever preferred_lft forever
8: vlan174: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174
       valid_lft forever preferred_lft forever
    inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174
       valid_lft forever preferred_lft forever
    inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link
       valid_lft forever preferred_lft forever
9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
       valid_lft forever preferred_lft forever
10: vlan50: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff
    inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50
       valid_lft forever preferred_lft forever
    inet6 fe80::d815:7fff:feb9:724b/64 scope link
       valid_lft forever preferred_lft forever
11: vlan30: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30
       valid_lft forever preferred_lft forever
    inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30
       valid_lft forever preferred_lft forever
    inet6 fe80::78b3:4dff:fead:f172/64 scope link
       valid_lft forever preferred_lft forever
12: br-int: mtu 1500 qdisc noop state DOWN
    link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff

# ovs-vsctl show
3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp1s0f1"
            Interface "enp1s0f1"
        Port "vlan40"
            tag: 40
            Interface "vlan40"
                type: internal
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "vlan50"
            tag: 50
            Interface "vlan50"
                type: internal
        Port "vlan30"
            tag: 30
            Interface "vlan30"
                type: internal
        Port "vlan174"
            tag: 174
            Interface "vlan174"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-tun
        fail_mode: secure
        Port "gre-0a00140b"
            Interface "gre-0a00140b"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.11"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a00140d"
            Interface "gre-0a00140d"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.13"}
        Port "gre-0a00140c"
            Interface "gre-0a00140c"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.12"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"

Regards,
Pedro Sousa
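Two checks that may narrow down a 401 like this one (a sketch; the rc file is the one the deploy writes to the stack user's home, and the controller address is a placeholder):

    source ~/overcloudrc
    env | grep OS_
    openstack token issue

    ssh heat-admin@<controller-ip> 'sudo pcs status'

If the token request still fails with valid-looking settings, compare OS_PASSWORD against the keystone admin password actually deployed. Separately, it may be worth noting that in the ip a output above both enp1s0f0 and br-ex hold 192.168.21.60/24; two interfaces answering for the same address is a plausible reason the provisioning network stopped responding after the reboot.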
Then, due largely to the fact that there was nobody being paid to work full time on RDO Manager, and the people who were contributing in more or less "extra" time were getting swamped with releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief 24 hour periods where someone would spend a weekend fixing things only to have it break again early the following week. There have been many improvements in the recent weeks to this sad state of affairs. Firstly, we have upstreamed almost everything from the rdo-management github org directly into openstack projects. Secondly, there is a single source for delorean packages for both core openstack packages and the tripleo and ironic packages that make up RDO Manager. These two things may seem a bit trivial to a newcomer to the project, but they are actually fixes for the biggest cause of the RDO Manager Kilo CI breaking. I think with those two fixes (plus some work on upstream tripleo CI) we have set ourselves up to make steady forward progress rather than spending all our time troubleshooting complete breakages. (Although this is still openstack so complete breakages will still happen from time to time :p) Another very easy to overlook improvement over where we were at Kilo GA, is that we actually have all RDO Manager packages (minus a couple EPEL dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we did not even have everything officially packaged, rather only in our special delorean instance. All this leads to my opinion that RDO Manager should participate in the RDO GA. I am unconvinced that bare metal installs can not be made to work with some extra documentation or configuration changes. However, even if that is not the case, we are in a drastically better place than we were at the beginning of the Kilo cycle. That said, this is a community, and I would like to hear how other community participants both from RDO in general and RDO Manager specifically feel about this. Ideally, if someone thinks the RDO Manager release should be blocked, there should be a BZ with the blocker flag proposed so that there is actionable criteria to unblock the release. Thanks for all your hard work to get to this point, and lets keep it rolling. -trown [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 From akrivoka at redhat.com Wed Oct 21 10:55:12 2015 From: akrivoka at redhat.com (Ana Krivokapic) Date: Wed, 21 Oct 2015 12:55:12 +0200 Subject: [Rdo-list] Undercloud UI In-Reply-To: <1648064419.323036.1445412283114.JavaMail.zimbra@tubitak.gov.tr> References: <312239508.320257.1445347438841.JavaMail.zimbra@tubitak.gov.tr> <1522468676.320587.1445348489875.JavaMail.zimbra@tubitak.gov.tr> <1648064419.323036.1445412283114.JavaMail.zimbra@tubitak.gov.tr> Message-ID: Hi Mustafa, Thanks for your interested - we'll looking forward to your contributions! :) Please see responses inline. On Wed, Oct 21, 2015 at 9:24 AM, Mustafa ?EL?K (B?LGEM-BTE) < mustafa.celik at tubitak.gov.tr> wrote: > Ana, > We want to contribute your rdo-director-ui project. > > - Is Git/Gerrit Hub document enough for installation and contribution > steps? or is there any other document you can share with us? > > The README doc on GitHub[1] should be enough to get you started on the installation and contribution. If you encounter any problems, do let us know, either on this list or on IRC. You can find the developers on the #rdo and #tripleo channels on Freenode. > > - Which IDE do you use for development? > > The UI is written in ReactJS so any JS capable IDE will do. 
Most of us use Sublime Text.

> * Is there anything else that we should know - any suggestion, document,
> tutorial, whatever?

As I said before, the project is in its early stages, so we have no
extensive documentation at the moment. Having said that, the doc mentioned
before should definitely contain enough to get you started.

> Thanks...
> Mustafa
>
> [1] https://github.com/rdo-management/rdo-director-ui/blob/master/README.md
>
> ----- Original Message -----
> From: "Ana Krivokapic"
> To: "Mustafa ÇELİK (BİLGEM-BTE)"
> Cc: rdo-list at redhat.com
> Sent: Tuesday, October 20, 2015 21:04:37
> Subject: Re: [Rdo-list] Undercloud UI
>
> Hi Mustafa,
>
> Yes we have one! :)
>
> The code is located on GitHub [1] and you can contribute by submitting
> patches to GerritHub [2]. The README at [1] contains info on how to get
> the installation up and running, as well as the contribution process.
> Please note, though, that this is a very new project and is still very
> much a work in progress. Let us know if you have any further questions.
>
> [1] https://github.com/rdo-management/rdo-director-ui
> [2] https://review.gerrithub.io/#/q/project:rdo-management/rdo-director-ui
>
> Kind Regards,
> Ana Krivokapic

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yeylon at redhat.com  Wed Oct 21 11:00:43 2015
From: yeylon at redhat.com (Yaniv Eylon)
Date: Wed, 21 Oct 2015 14:00:43 +0300
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: 
References: <56276EE2.6010109@redhat.com>
Message-ID: 

On Wed, Oct 21, 2015 at 1:54 PM, John Trowbridge wrote:
> Hola rdoers,
>
> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a
> status update for the RDO Manager installer. I would also like to gather
> feedback on how other community participants feel about that status as
> it relates to RDO Manager participating in the GA. That feedback can
> come as replies to this thread, or even better, there is a packaging
> meeting on #rdo at 1500 UTC today and we can discuss it further then.
>
> tldr;
> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on
> virtual hardware have been verified to work with GA bits, however bare
> metal installs have not yet been verified.

We know that trying to install on BM with network isolation fails to
install the overcloud using the latest bits[1]. We were able to deploy the
undercloud[2] and build the images[3].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1273635
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1273647

>
> I would like to start with some historical context here, as it seems we
> have picked up quite a few new active community members recently (again
> woot!).
When RDO Kilo GA'd, RDO Manager was barely capable of a > successful end to end demo with a single controller and single compute > node, and only by using a special delorean server pulling bits from a > special github organization (rdo-management). We were able to get it > consistently deploying **virtual** HA w/ ceph in CI by the middle of the > Liberty upstream cycle. Then, due largely to the fact that there was > nobody being paid to work full time on RDO Manager, and the people who > were contributing in more or less "extra" time were getting swamped with > releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief > 24 hour periods where someone would spend a weekend fixing things only > to have it break again early the following week. > > There have been many improvements in the recent weeks to this sad state > of affairs. Firstly, we have upstreamed almost everything from the > rdo-management github org directly into openstack projects. Secondly, > there is a single source for delorean packages for both core openstack > packages and the tripleo and ironic packages that make up RDO Manager. > These two things may seem a bit trivial to a newcomer to the project, > but they are actually fixes for the biggest cause of the RDO Manager > Kilo CI breaking. I think with those two fixes (plus some work on > upstream tripleo CI) we have set ourselves up to make steady forward > progress rather than spending all our time troubleshooting complete > breakages. (Although this is still openstack so complete breakages will > still happen from time to time :p) > > Another very easy to overlook improvement over where we were at Kilo GA, > is that we actually have all RDO Manager packages (minus a couple EPEL > dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we > did not even have everything officially packaged, rather only in our > special delorean instance. > > All this leads to my opinion that RDO Manager should participate in the > RDO GA. I am unconvinced that bare metal installs can not be made to > work with some extra documentation or configuration changes. However, > even if that is not the case, we are in a drastically better place than > we were at the beginning of the Kilo cycle. > > That said, this is a community, and I would like to hear how other > community participants both from RDO in general and RDO Manager > specifically feel about this. Ideally, if someone thinks the RDO Manager > release should be blocked, there should be a BZ with the blocker flag > proposed so that there is actionable criteria to unblock the release. > > Thanks for all your hard work to get to this point, and lets keep it > rolling. > > -trown > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Yaniv. From trown at redhat.com Wed Oct 21 11:05:15 2015 From: trown at redhat.com (John Trowbridge) Date: Wed, 21 Oct 2015 07:05:15 -0400 Subject: [Rdo-list] RDO Manager status for Liberty GA In-Reply-To: References: <56276EE2.6010109@redhat.com> Message-ID: <5627716B.9000706@redhat.com> On 10/21/2015 07:00 AM, Yaniv Eylon wrote: > On Wed, Oct 21, 2015 at 1:54 PM, John Trowbridge wrote: >> Hola rdoers, >> >> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a >> status update for the RDO Manager installer. 
I would also like to gather >> feedback on how other community participants feel about that status as >> it relates to RDO Manager participating in the GA. That feedback can >> come as replies to this thread, or even better there is a packaging >> meeting on #rdo at 1500 UTC today and we can discuss it further then. >> >> tldr; >> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on >> virtual hardware have been verified to work with GA bits, however bare >> metal installs have not yet been verified. > > > we know that trying to install on BM with network isolation fail to > install the overcloud using the latest bits[1]. > We were able to deploy the undercloud[2] and build the images[3]. > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680 Looks to be a documentation issue due to the change that fixed 1273647. > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1273635 Fixed in the released package. > [3] https://bugzilla.redhat.com/show_bug.cgi?id=1273647 Fixed in the released package. > >> >> I would like to start with some historical context here, as it seems we >> have picked up quite a few new active community members recently (again >> woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a >> successful end to end demo with a single controller and single compute >> node, and only by using a special delorean server pulling bits from a >> special github organization (rdo-management). We were able to get it >> consistently deploying **virtual** HA w/ ceph in CI by the middle of the >> Liberty upstream cycle. Then, due largely to the fact that there was >> nobody being paid to work full time on RDO Manager, and the people who >> were contributing in more or less "extra" time were getting swamped with >> releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief >> 24 hour periods where someone would spend a weekend fixing things only >> to have it break again early the following week. >> >> There have been many improvements in the recent weeks to this sad state >> of affairs. Firstly, we have upstreamed almost everything from the >> rdo-management github org directly into openstack projects. Secondly, >> there is a single source for delorean packages for both core openstack >> packages and the tripleo and ironic packages that make up RDO Manager. >> These two things may seem a bit trivial to a newcomer to the project, >> but they are actually fixes for the biggest cause of the RDO Manager >> Kilo CI breaking. I think with those two fixes (plus some work on >> upstream tripleo CI) we have set ourselves up to make steady forward >> progress rather than spending all our time troubleshooting complete >> breakages. (Although this is still openstack so complete breakages will >> still happen from time to time :p) >> >> Another very easy to overlook improvement over where we were at Kilo GA, >> is that we actually have all RDO Manager packages (minus a couple EPEL >> dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we >> did not even have everything officially packaged, rather only in our >> special delorean instance. >> >> All this leads to my opinion that RDO Manager should participate in the >> RDO GA. I am unconvinced that bare metal installs can not be made to >> work with some extra documentation or configuration changes. However, >> even if that is not the case, we are in a drastically better place than >> we were at the beginning of the Kilo cycle. 
>> >> That said, this is a community, and I would like to hear how other >> community participants both from RDO in general and RDO Manager >> specifically feel about this. Ideally, if someone thinks the RDO Manager >> release should be blocked, there should be a BZ with the blocker flag >> proposed so that there is actionable criteria to unblock the release. >> >> Thanks for all your hard work to get to this point, and lets keep it >> rolling. >> >> -trown >> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From marius at remote-lab.net Wed Oct 21 11:05:53 2015 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 21 Oct 2015 13:05:53 +0200 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: Hi Pedro, One issue I can quickly see is that br-ex has assigned the same IP address as enp1s0f0. Can you post the nic templates you used for deployment? 2: enp1s0f0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex Thanks, Marius On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa wrote: > Hi Marius, > > I've followed your howto and managed to get overcloud deployed in HA, > thanks. However I cannot login to it (via CLI or Horizon) : > > ERROR (Unauthorized): The request you have made requires authentication. > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > > So I rebooted the controllers and now I cannot login through Provisioning > network, seems some openvswitch bridge conf problem, heres my conf: > > # ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: enp1s0f0: mtu 1500 qdisc mq state UP > qlen 1000 > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0 > valid_lft 84562sec preferred_lft 84562sec > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link > valid_lft forever preferred_lft forever > 3: enp1s0f1: mtu 1500 qdisc mq master > ovs-system state UP qlen 1000 > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > valid_lft forever preferred_lft forever > 4: ovs-system: mtu 1500 qdisc noop state DOWN > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff > 5: br-tun: mtu 1500 qdisc noop state DOWN > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff > 6: vlan20: mtu 1500 qdisc noqueue state > UNKNOWN > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 > valid_lft forever preferred_lft forever > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 > valid_lft forever preferred_lft forever > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link > valid_lft forever preferred_lft forever > 7: vlan40: mtu 1500 qdisc noqueue state > UNKNOWN > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 > valid_lft 
forever preferred_lft forever > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link > valid_lft forever preferred_lft forever > 8: vlan174: mtu 1500 qdisc noqueue state > UNKNOWN > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174 > valid_lft forever preferred_lft forever > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174 > valid_lft forever preferred_lft forever > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link > valid_lft forever preferred_lft forever > 9: br-ex: mtu 1500 qdisc noqueue state > UNKNOWN > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > valid_lft forever preferred_lft forever > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > valid_lft forever preferred_lft forever > 10: vlan50: mtu 1500 qdisc noqueue state > UNKNOWN > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 > valid_lft forever preferred_lft forever > inet6 fe80::d815:7fff:feb9:724b/64 scope link > valid_lft forever preferred_lft forever > 11: vlan30: mtu 1500 qdisc noqueue state > UNKNOWN > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 > valid_lft forever preferred_lft forever > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 > valid_lft forever preferred_lft forever > inet6 fe80::78b3:4dff:fead:f172/64 scope link > valid_lft forever preferred_lft forever > 12: br-int: mtu 1500 qdisc noop state DOWN > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff > > > # ovs-vsctl show > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Port "enp1s0f1" > Interface "enp1s0f1" > Port "vlan40" > tag: 40 > Interface "vlan40" > type: internal > Port "vlan20" > tag: 20 > Interface "vlan20" > type: internal > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port "vlan50" > tag: 50 > Interface "vlan50" > type: internal > Port "vlan30" > tag: 30 > Interface "vlan30" > type: internal > Port "vlan174" > tag: 174 > Interface "vlan174" > type: internal > Bridge br-int > fail_mode: secure > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Port int-br-ex > Interface int-br-ex > type: patch > options: {peer=phy-br-ex} > Bridge br-tun > fail_mode: secure > Port "gre-0a00140b" > Interface "gre-0a00140b" > type: gre > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > out_key=flow, remote_ip="10.0.20.11"} > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Port "gre-0a00140d" > Interface "gre-0a00140d" > type: gre > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > out_key=flow, remote_ip="10.0.20.13"} > Port "gre-0a00140c" > Interface "gre-0a00140c" > type: gre > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > out_key=flow, remote_ip="10.0.20.12"} > Port br-tun > Interface br-tun > type: internal > ovs_version: "2.4.0" > > Regards, > Pedro Sousa > > > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > wrote: >> >> Hi everyone, >> >> I wrote a blog post about how to deploy a HA with network isolation >> overcloud on top of the virtual environment. I tried to provide some >> insights into what instack-virt-setup creates and how to use the >> network isolation templates in the virtual environment. 
I hope you >> find it useful. >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ >> >> Thanks, >> Marius >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > From pgsousa at gmail.com Wed Oct 21 11:16:36 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Wed, 21 Oct 2015 12:16:36 +0100 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: Hi, here you go. Regards, Pedro Sousa On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea wrote: > Hi Pedro, > > One issue I can quickly see is that br-ex has assigned the same IP > address as enp1s0f0. Can you post the nic templates you used for > deployment? > > 2: enp1s0f0: mtu 1500 qdisc mq state > UP qlen 1000 > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0 > 9: br-ex: mtu 1500 qdisc noqueue state > UNKNOWN > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > > Thanks, > Marius > > On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa wrote: > > Hi Marius, > > > > I've followed your howto and managed to get overcloud deployed in HA, > > thanks. However I cannot login to it (via CLI or Horizon) : > > > > ERROR (Unauthorized): The request you have made requires authentication. > > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > > > > So I rebooted the controllers and now I cannot login through Provisioning > > network, seems some openvswitch bridge conf problem, heres my conf: > > > > # ip a > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > inet 127.0.0.1/8 scope host lo > > valid_lft forever preferred_lft forever > > inet6 ::1/128 scope host > > valid_lft forever preferred_lft forever > > 2: enp1s0f0: mtu 1500 qdisc mq state UP > > qlen 1000 > > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > enp1s0f0 > > valid_lft 84562sec preferred_lft 84562sec > > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link > > valid_lft forever preferred_lft forever > > 3: enp1s0f1: mtu 1500 qdisc mq master > > ovs-system state UP qlen 1000 > > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > > valid_lft forever preferred_lft forever > > 4: ovs-system: mtu 1500 qdisc noop state DOWN > > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff > > 5: br-tun: mtu 1500 qdisc noop state DOWN > > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff > > 6: vlan20: mtu 1500 qdisc noqueue state > > UNKNOWN > > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff > > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 > > valid_lft forever preferred_lft forever > > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 > > valid_lft forever preferred_lft forever > > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link > > valid_lft forever preferred_lft forever > > 7: vlan40: mtu 1500 qdisc noqueue state > > UNKNOWN > > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff > > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 > > valid_lft forever preferred_lft forever > > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link > > valid_lft forever preferred_lft forever > > 8: vlan174: mtu 1500 qdisc noqueue > state > > UNKNOWN > > 
link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff > > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174 > > valid_lft forever preferred_lft forever > > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174 > > valid_lft forever preferred_lft forever > > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link > > valid_lft forever preferred_lft forever > > 9: br-ex: mtu 1500 qdisc noqueue state > > UNKNOWN > > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > > valid_lft forever preferred_lft forever > > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > > valid_lft forever preferred_lft forever > > 10: vlan50: mtu 1500 qdisc noqueue > state > > UNKNOWN > > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff > > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 > > valid_lft forever preferred_lft forever > > inet6 fe80::d815:7fff:feb9:724b/64 scope link > > valid_lft forever preferred_lft forever > > 11: vlan30: mtu 1500 qdisc noqueue > state > > UNKNOWN > > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff > > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 > > valid_lft forever preferred_lft forever > > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 > > valid_lft forever preferred_lft forever > > inet6 fe80::78b3:4dff:fead:f172/64 scope link > > valid_lft forever preferred_lft forever > > 12: br-int: mtu 1500 qdisc noop state DOWN > > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff > > > > > > # ovs-vsctl show > > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 > > Bridge br-ex > > Port br-ex > > Interface br-ex > > type: internal > > Port "enp1s0f1" > > Interface "enp1s0f1" > > Port "vlan40" > > tag: 40 > > Interface "vlan40" > > type: internal > > Port "vlan20" > > tag: 20 > > Interface "vlan20" > > type: internal > > Port phy-br-ex > > Interface phy-br-ex > > type: patch > > options: {peer=int-br-ex} > > Port "vlan50" > > tag: 50 > > Interface "vlan50" > > type: internal > > Port "vlan30" > > tag: 30 > > Interface "vlan30" > > type: internal > > Port "vlan174" > > tag: 174 > > Interface "vlan174" > > type: internal > > Bridge br-int > > fail_mode: secure > > Port br-int > > Interface br-int > > type: internal > > Port patch-tun > > Interface patch-tun > > type: patch > > options: {peer=patch-int} > > Port int-br-ex > > Interface int-br-ex > > type: patch > > options: {peer=phy-br-ex} > > Bridge br-tun > > fail_mode: secure > > Port "gre-0a00140b" > > Interface "gre-0a00140b" > > type: gre > > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > > out_key=flow, remote_ip="10.0.20.11"} > > Port patch-int > > Interface patch-int > > type: patch > > options: {peer=patch-tun} > > Port "gre-0a00140d" > > Interface "gre-0a00140d" > > type: gre > > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > > out_key=flow, remote_ip="10.0.20.13"} > > Port "gre-0a00140c" > > Interface "gre-0a00140c" > > type: gre > > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > > out_key=flow, remote_ip="10.0.20.12"} > > Port br-tun > > Interface br-tun > > type: internal > > ovs_version: "2.4.0" > > > > Regards, > > Pedro Sousa > > > > > > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > > wrote: > >> > >> Hi everyone, > >> > >> I wrote a blog post about how to deploy a HA with network isolation > >> overcloud on top of the virtual environment. 
I tried to provide some
>> insights into what instack-virt-setup creates and how to use the
>> network isolation templates in the virtual environment. I hope you
>> find it useful.
>>
>> https://remote-lab.net/rdo-manager-ha-openstack-deployment/
>>
>> Thanks,
>> Marius
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: compute.yaml
Type: application/x-yaml
Size: 3682 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: controller.yaml
Type: application/x-yaml
Size: 4409 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: network-environment.yaml
Type: application/x-yaml
Size: 798 bytes
Desc: not available
URL:

From celik.esra at tubitak.gov.tr  Wed Oct 21 11:22:16 2015
From: celik.esra at tubitak.gov.tr (Esra Celik)
Date: Wed, 21 Oct 2015 14:22:16 +0300 (EEST)
Subject: [Rdo-list] Undercloud UI
In-Reply-To:
References: <312239508.320257.1445347438841.JavaMail.zimbra@tubitak.gov.tr>
	<1522468676.320587.1445348489875.JavaMail.zimbra@tubitak.gov.tr>
	<1648064419.323036.1445412283114.JavaMail.zimbra@tubitak.gov.tr>
Message-ID: <718260268.7224258.1445426536949.JavaMail.zimbra@tubitak.gov.tr>

Hi Ana,

The documentation of rdo-director-ui on github says a TripleO installation
is a prerequisite. However, we want to implement a web UI for the
installation stage of the undercloud itself.
Could you please briefly tell us what rdo-director-ui is for?

Best regards,
esra

----- Ana Krivokapic wrote:

> Hi Mustafa,
>
> Thanks for your interest - we're looking forward to your contributions! :)
>
> Please see responses inline.
>
> On Wed, Oct 21, 2015 at 9:24 AM, Mustafa ÇELİK (BİLGEM-BTE) <
> mustafa.celik at tubitak.gov.tr> wrote:
>
>> Ana,
>> We want to contribute to your rdo-director-ui project.
>>
>>    - Is the Git/GerritHub documentation enough for the installation and
>>    contribution steps, or is there any other document you can share with
>>    us?
>>
> The README doc on GitHub[1] should be enough to get you started on the
> installation and contribution. If you encounter any problems, do let us
> know, either on this list or on IRC. You can find the developers on the
> #rdo and #tripleo channels on Freenode.
>
>>    - Which IDE do you use for development?
>>
> The UI is written in ReactJS so any JS capable IDE will do. Most of us use
> Sublime Text.
>
>>    - Is there anything else that we should know, any suggestion,
>>    document, tutorial, whatever?
>>
> As I said before, the project is in its early stages, so we have no
> extensive documentation at the moment. Having said that, the doc mentioned
> before should definitely contain enough to get you started.
>
>> Thanks...
>>
>> Mustafa
>
> [1] https://github.com/rdo-management/rdo-director-ui/blob/master/README.md
>
> ------------------------------
> *From: *"Ana Krivokapic"
> *To: *"Mustafa ÇELİK (BİLGEM-BTE)"
> *Cc: *rdo-list at redhat.com
> *Sent: *Tuesday, 20 October 2015 21:04:37
> *Subject: *Re: [Rdo-list] Undercloud UI
>
> Hi Mustafa,
>
> Yes we have one! :)
>
> The code is located on GitHub [1] and you can contribute by submitting
> patches to GerritHub [2]. The README at [1] contains info on how to get the
> installation up and running as well as the contribution process. Please
> note, though, that this is a very new project and is still very much a
> work-in-progress. Let us know if you have any further questions.
>
> [1] https://github.com/rdo-management/rdo-director-ui
> [2] https://review.gerrithub.io/#/q/project:rdo-management/rdo-director-ui
>
> Kind Regards,
> Ana Krivokapic
>
> On Tue, Oct 20, 2015 at 3:41 PM, Mustafa ÇELİK (BİLGEM-BTE) <
> mustafa.celik at tubitak.gov.tr> wrote:
>
>> Hi Everyone,
>>
>> Is there any UI for undercloud installation, or any project in progress?
>> If we implement one, how can we contribute it?
>>
>> Thanks,
>>
>> *Mustafa ÇELİK*
>>
>> TÜBİTAK BİLGEM
>>
>> www.bilgem.tubitak.gov.tr
>>
>> mustafa.celik at tubitak.gov.tr
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com

From marius at remote-lab.net  Wed Oct 21 11:32:28 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Wed, 21 Oct 2015 13:32:28 +0200
Subject: [Rdo-list] HA with network isolation on virt howto
In-Reply-To:
References:
Message-ID:

Here's an adjusted controller.yaml which disables DHCP on the first nic
(enp1s0f0) so it doesn't get an IP address:
http://paste.openstack.org/show/476981/

Please note that this assumes that your overcloud nodes are PXE booting on
the 2nd NIC (basically disabling the 1st NIC).

Given your setup (I'm making some assumptions here, so I might be wrong) I
would use the 1st NIC for PXE booting and the provisioning network, and the
2nd NIC for running the isolated networks, with this kind of template:
http://paste.openstack.org/show/476986/
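For the archives, the gist of the adjustment lives in the os-net-config
section of the nic template. A rough sketch of what the first paste does -
the nic numbering, the VLAN shown, and the parameter names are assumptions
on my side; the pastes above are the complete versions:

          network_config:
            -
              # first NIC (enp1s0f0): no DHCP, so it no longer grabs an address
              type: interface
              name: nic1
              use_dhcp: false
            -
              # second NIC carries the bridge with the isolated networks' VLANs
              type: ovs_bridge
              name: {get_input: bridge_name}
              use_dhcp: false
              members:
                -
                  type: interface
                  name: nic2
                  primary: true
                -
                  type: vlan
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}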
Let me know if it works for you.

Thanks,
Marius

On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa wrote:
> Hi,
>
> here you go.
>
> Regards,
> Pedro Sousa
>
> On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea
> wrote:
>>
>> Hi Pedro,
>>
>> One issue I can quickly see is that br-ex has assigned the same IP
>> address as enp1s0f0. Can you post the nic templates you used for
>> deployment?
>>
>> 2: enp1s0f0: mtu 1500 qdisc mq state UP qlen 1000
>> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
>> 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN
>> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
>>
>> Thanks,
>> Marius
>>
>> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa wrote:
>> > Hi Marius,
>> >
>> > I've followed your howto and managed to get overcloud deployed in HA,
>> > thanks. However I cannot login to it (via CLI or Horizon):
>> >
>> > ERROR (Unauthorized): The request you have made requires authentication.
>> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1)
>> >
>> > So I rebooted the controllers and now I cannot login through the
>> > Provisioning network; it seems like some openvswitch bridge conf
>> > problem. Here's my conf:
>> >
>> > # ip a
>> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
>> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> > inet 127.0.0.1/8 scope host lo
>> > valid_lft forever preferred_lft forever
>> > inet6 ::1/128 scope host
>> > valid_lft forever preferred_lft forever
>> > 2: enp1s0f0: mtu 1500 qdisc mq state UP qlen 1000
>> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
>> > valid_lft 84562sec preferred_lft 84562sec
>> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link
>> > valid_lft forever preferred_lft forever
>> > 3: enp1s0f1: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
>> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
>> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
>> > valid_lft forever preferred_lft forever
>> > 4: ovs-system: mtu 1500 qdisc noop state DOWN
>> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff
>> > 5: br-tun: mtu 1500 qdisc noop state DOWN
>> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff
>> > 6: vlan20: mtu 1500 qdisc noqueue state UNKNOWN
>> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20
>> > valid_lft forever preferred_lft forever
>> > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link
>> > valid_lft forever preferred_lft forever
>> > 7: vlan40: mtu 1500 qdisc noqueue state UNKNOWN
>> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link
>> > valid_lft forever preferred_lft forever
>> > 8: vlan174: mtu 1500 qdisc noqueue state UNKNOWN
>> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174
>> > valid_lft forever preferred_lft forever
>> > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link
>> > valid_lft forever preferred_lft forever
>> > 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN
>> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
>> > valid_lft forever preferred_lft forever
>> > 10: vlan50: mtu 1500 qdisc noqueue state UNKNOWN
>> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff
>> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::d815:7fff:feb9:724b/64 scope link
>> > valid_lft forever preferred_lft forever
>> > 11: vlan30: mtu 1500 qdisc noqueue state UNKNOWN
>> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30
>> >
valid_lft forever preferred_lft forever >> > 12: br-int: mtu 1500 qdisc noop state DOWN >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff >> > >> > >> > # ovs-vsctl show >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 >> > Bridge br-ex >> > Port br-ex >> > Interface br-ex >> > type: internal >> > Port "enp1s0f1" >> > Interface "enp1s0f1" >> > Port "vlan40" >> > tag: 40 >> > Interface "vlan40" >> > type: internal >> > Port "vlan20" >> > tag: 20 >> > Interface "vlan20" >> > type: internal >> > Port phy-br-ex >> > Interface phy-br-ex >> > type: patch >> > options: {peer=int-br-ex} >> > Port "vlan50" >> > tag: 50 >> > Interface "vlan50" >> > type: internal >> > Port "vlan30" >> > tag: 30 >> > Interface "vlan30" >> > type: internal >> > Port "vlan174" >> > tag: 174 >> > Interface "vlan174" >> > type: internal >> > Bridge br-int >> > fail_mode: secure >> > Port br-int >> > Interface br-int >> > type: internal >> > Port patch-tun >> > Interface patch-tun >> > type: patch >> > options: {peer=patch-int} >> > Port int-br-ex >> > Interface int-br-ex >> > type: patch >> > options: {peer=phy-br-ex} >> > Bridge br-tun >> > fail_mode: secure >> > Port "gre-0a00140b" >> > Interface "gre-0a00140b" >> > type: gre >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> > out_key=flow, remote_ip="10.0.20.11"} >> > Port patch-int >> > Interface patch-int >> > type: patch >> > options: {peer=patch-tun} >> > Port "gre-0a00140d" >> > Interface "gre-0a00140d" >> > type: gre >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> > out_key=flow, remote_ip="10.0.20.13"} >> > Port "gre-0a00140c" >> > Interface "gre-0a00140c" >> > type: gre >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> > out_key=flow, remote_ip="10.0.20.12"} >> > Port br-tun >> > Interface br-tun >> > type: internal >> > ovs_version: "2.4.0" >> > >> > Regards, >> > Pedro Sousa >> > >> > >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea >> > wrote: >> >> >> >> Hi everyone, >> >> >> >> I wrote a blog post about how to deploy a HA with network isolation >> >> overcloud on top of the virtual environment. I tried to provide some >> >> insights into what instack-virt-setup creates and how to use the >> >> network isolation templates in the virtual environment. I hope you >> >> find it useful. >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ >> >> >> >> Thanks, >> >> Marius >> >> >> >> _______________________________________________ >> >> Rdo-list mailing list >> >> Rdo-list at redhat.com >> >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> > > > From morazi at redhat.com Wed Oct 21 12:19:05 2015 From: morazi at redhat.com (Mike Orazi) Date: Wed, 21 Oct 2015 08:19:05 -0400 Subject: [Rdo-list] RDO Manager status for Liberty GA In-Reply-To: <5627716B.9000706@redhat.com> References: <56276EE2.6010109@redhat.com> <5627716B.9000706@redhat.com> Message-ID: <562782B9.4020104@redhat.com> On 10/21/2015 07:05 AM, John Trowbridge wrote: > > > On 10/21/2015 07:00 AM, Yaniv Eylon wrote: >> On Wed, Oct 21, 2015 at 1:54 PM, John Trowbridge wrote: >>> Hola rdoers, >>> >>> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a >>> status update for the RDO Manager installer. I would also like to gather >>> feedback on how other community participants feel about that status as >>> it relates to RDO Manager participating in the GA. 
That feedback can >>> come as replies to this thread, or even better there is a packaging >>> meeting on #rdo at 1500 UTC today and we can discuss it further then. >>> >>> tldr; >>> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on >>> virtual hardware have been verified to work with GA bits, however bare >>> metal installs have not yet been verified. >> >> >> we know that trying to install on BM with network isolation fail to >> install the overcloud using the latest bits[1]. >> We were able to deploy the undercloud[2] and build the images[3]. >> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680 > Looks to be a documentation issue due to the change that fixed 1273647. Can we get that at least into assigned and some indication on the bug regarding the docs update that is needed? >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1273635 > Fixed in the released package. >> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1273647 > Fixed in the released package. >> >>> >>> I would like to start with some historical context here, as it seems we >>> have picked up quite a few new active community members recently (again >>> woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a >>> successful end to end demo with a single controller and single compute >>> node, and only by using a special delorean server pulling bits from a >>> special github organization (rdo-management). We were able to get it >>> consistently deploying **virtual** HA w/ ceph in CI by the middle of the >>> Liberty upstream cycle. Then, due largely to the fact that there was >>> nobody being paid to work full time on RDO Manager, and the people who >>> were contributing in more or less "extra" time were getting swamped with >>> releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief >>> 24 hour periods where someone would spend a weekend fixing things only >>> to have it break again early the following week. >>> >>> There have been many improvements in the recent weeks to this sad state >>> of affairs. Firstly, we have upstreamed almost everything from the >>> rdo-management github org directly into openstack projects. Secondly, >>> there is a single source for delorean packages for both core openstack >>> packages and the tripleo and ironic packages that make up RDO Manager. >>> These two things may seem a bit trivial to a newcomer to the project, >>> but they are actually fixes for the biggest cause of the RDO Manager >>> Kilo CI breaking. I think with those two fixes (plus some work on >>> upstream tripleo CI) we have set ourselves up to make steady forward >>> progress rather than spending all our time troubleshooting complete >>> breakages. (Although this is still openstack so complete breakages will >>> still happen from time to time :p) >>> >>> Another very easy to overlook improvement over where we were at Kilo GA, >>> is that we actually have all RDO Manager packages (minus a couple EPEL >>> dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we >>> did not even have everything officially packaged, rather only in our >>> special delorean instance. >>> >>> All this leads to my opinion that RDO Manager should participate in the >>> RDO GA. I am unconvinced that bare metal installs can not be made to >>> work with some extra documentation or configuration changes. However, >>> even if that is not the case, we are in a drastically better place than >>> we were at the beginning of the Kilo cycle. 
>>> >>> That said, this is a community, and I would like to hear how other >>> community participants both from RDO in general and RDO Manager >>> specifically feel about this. Ideally, if someone thinks the RDO Manager >>> release should be blocked, there should be a BZ with the blocker flag >>> proposed so that there is actionable criteria to unblock the release. >>> >>> Thanks for all your hard work to get to this point, and lets keep it >>> rolling. >>> >>> -trown >>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From sasha at redhat.com Wed Oct 21 13:17:33 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Wed, 21 Oct 2015 09:17:33 -0400 (EDT) Subject: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found" In-Reply-To: <1827088230.7058274.1445407185286.JavaMail.zimbra@tubitak.gov.tr> References: <799030604.4626522.1444898446264.JavaMail.zimbra@tubitak.gov.tr> <1114892065.6426670.1445269186395.JavaMail.zimbra@tubitak.gov.tr> <1139664202.60372126.1445271204093.JavaMail.zimbra@redhat.com> <104309364.6608280.1445319101175.JavaMail.zimbra@tubitak.gov.tr> <1238139899.44937284.1445335371251.JavaMail.zimbra@redhat.com> <159935625.61244072.1445355023202.JavaMail.zimbra@redhat.com> <713023706.61472910.1445381665550.JavaMail.zimbra@redhat.com> <1827088230.7058274.1445407185286.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <1768368877.61913548.1445433453630.JavaMail.zimbra@redhat.com> Hi Esra, local_interface is a "Network interface on the Undercloud that will be handling the PXE boots and DHCP for Overcloud instances." So on the undercloud host you identify the NIC attached to the provisioning network with all the overcloud nodes and set the local_interface to whatever the name of the NIC is. Thanks. Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Esra Celik" > To: "Sasha Chuzhoy" > Cc: rdo-list at redhat.com > Sent: Wednesday, October 21, 2015 1:59:45 AM > Subject: Re: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid host was found" > > Hi Shasha > > I finally catched something. As the error messages go quickly I was not able > to see this error previously. > I have set local_interface = em2 in undercloud. conf file, but > diskimage-builder's dhcp-all-interfaces.sh script tries to inspect ethX > interfaces. > I thought stable-interface-names patch for diskimage-builder was solving this > issue.. I don't know clearly.. > > Should I try converting the interface names to ethX, or is this a bug that > should be fixed? > > Esra ?EL?K > T?B?TAK B?LGEM > www.bilgem.tubitak.gov.tr > celik.esra at tubitak.gov.tr > > ----- Orijinal Mesaj ----- > > > Kimden: "Sasha Chuzhoy" > > Kime: "Esra Celik" > > Kk: rdo-list at redhat.com > > G?nderilenler: 21 Ekim ?ar?amba 2015 1:54:25 > > Konu: Re: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No valid > > host was found" > > > Esra, > > Just for a sake of trying,would it be possible for you to re-deploy the > > undercloud with the defaut IP addresses in undercloud.conf and let us know > > the result. > > I ran into something similar recently. 
> > > Thanks. > > > Best regards, > > Sasha Chuzhoy. > > > ----- Original Message ----- > > > From: "Sasha Chuzhoy" > > > To: "Esra Celik" > > > Cc: rdo-list at redhat.com > > > Sent: Tuesday, October 20, 2015 11:30:23 AM > > > Subject: Re: [Rdo-list] Yan: Re: OverCloud deploy fails with error "No > > > valid host was found" > > > > > > Hi Esra, > > > since the introspection fails continuously in addition to the deployment, > > > I > > > start wondering if everything is connected right. > > > Could you please describe (and double check) how your nodes are > > > interconnected in the setup, i.e. what nics and are connected and is > > > there > > > any additional configuration on the switch ports. > > > Thanks. > > > > > > Best regards, > > > Sasha Chuzhoy. > > > > > > ----- Original Message ----- > > > > From: "Marius Cornea" > > > > To: "Esra Celik" > > > > Cc: "Sasha Chuzhoy" , rdo-list at redhat.com > > > > Sent: Tuesday, October 20, 2015 6:02:51 AM > > > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > valid host was found" > > > > > > > > Hi, > > > > > > > > From what I can tell from the screenshots DHCP fails for both of the > > > > nics > > > > after loading the inspector image, thus the nodes have no ip address > > > > and > > > > the > > > > Network is unreachable message. Can you see any DHCP messages(output of > > > > dhclient) on the console? > > > > You could try leaving the nodes connected *only* to the provisioning > > > > network > > > > and rerun introspection. > > > > > > > > Thanks, > > > > Marius > > > > > > > > ----- Original Message ----- > > > > > From: "Esra Celik" > > > > > To: "Sasha Chuzhoy" > > > > > Cc: "Marius Cornea" , rdo-list at redhat.com > > > > > Sent: Tuesday, October 20, 2015 7:31:41 AM > > > > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error > > > > > "No > > > > > valid host was found" > > > > > > > > > > Ok, I ran ironic node-set-provision-state [UUID] provide for each > > > > > node > > > > > and > > > > > retried deployment. 
I attached the screenshots > > > > > > > > > > [stack at undercloud ~]$ ironic node-list > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > | UUID | Name | Instance UUID | Power State | Provisioning State | > > > > > | Maintenance | > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None | power off | > > > > > | available > > > > > | | False | > > > > > | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None | power off | > > > > > | available > > > > > | | False | > > > > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > > > > > > > > [stack at undercloud ~]$ nova flavor-list > > > > > +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+ > > > > > | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | > > > > > | RXTX_Factor > > > > > | | > > > > > | Is_Public | > > > > > +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+ > > > > > | b9428c86-5696-4d68-a0e0-77faf4e7f627 | baremetal | 4096 | 40 | 0 | > > > > > | | > > > > > | 1 > > > > > | | > > > > > | 1.0 | True | > > > > > +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+ > > > > > > > > > > [stack at undercloud ~]$ openstack overcloud deploy --templates > > > > > Deploying templates in the directory > > > > > /usr/share/openstack-tripleo-heat-templates > > > > > Stack failed with status: resources.Controller: resources[0]: > > > > > ResourceInError: resources.Controller: Went to status ERROR due to > > > > > "Message: > > > > > No valid host was found. There are not enough hosts available., Code: > > > > > 500" > > > > > Heat Stack update failed. > > > > > > > > > > [stack at undercloud ~]$ sudo systemctl|grep ironic > > > > > openstack-ironic-api.service loaded active running OpenStack Ironic > > > > > API > > > > > service > > > > > openstack-ironic-conductor.service loaded active running OpenStack > > > > > Ironic > > > > > Conductor service > > > > > openstack-ironic-inspector-dnsmasq.service loaded active running PXE > > > > > boot > > > > > dnsmasq service for Ironic Inspector > > > > > openstack-ironic-inspector.service loaded active running Hardware > > > > > introspection service for OpenStack Ironic > > > > > > > > > > "journalctl -fl -u openstack-ironic-conductor.service" gives no > > > > > warning > > > > > or > > > > > error. > > > > > > > > > > Regards > > > > > > > > > > Esra ?EL?K > > > > > T?B?TAK B?LGEM > > > > > www.bilgem.tubitak.gov.tr > > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > ----- Orijinal Mesaj ----- > > > > > > > > > > > Kimden: "Sasha Chuzhoy" > > > > > > Kime: "Esra Celik" > > > > > > Kk: "Marius Cornea" , rdo-list at redhat.com > > > > > > G?nderilenler: 19 Ekim Pazartesi 2015 19:13:24 > > > > > > Konu: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No > > > > > > valid > > > > > > host was found" > > > > > > > > > > > Could you please > > > > > > 1.run: > > > > > > 'ironic node-set-provision-state [UUID] provide' for each node > > > > > > where > > > > > > UUID > > > > > > is > > > > > > replaced with the actual UUID of the node (ironic node-list). 
> > > > > > > > > > > 2.retry the deployment > > > > > > Thanks. > > > > > > > > > > > Best regards, > > > > > > Sasha Chuzhoy. > > > > > > > > > > > ----- Original Message ----- > > > > > > > From: "Esra Celik" > > > > > > > To: "Sasha Chuzhoy" > > > > > > > Cc: "Marius Cornea" , rdo-list at redhat.com > > > > > > > Sent: Monday, October 19, 2015 11:39:46 AM > > > > > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error > > > > > > > "No > > > > > > > valid > > > > > > > host was found" > > > > > > > > > > > > > > Hi Sasha > > > > > > > > > > > > > > > > > > > > > > > > > > > > This is my instackenv.json. MAC addresses are, em2 > > > > > > > interface’s > > > > > > > MAC > > > > > > > address of the nodes > > > > > > > > > > > > > > { > > > > > > > "nodes": [ > > > > > > > { > > > > > > > "pm_type":"pxe_ipmitool", > > > > > > > "mac":[ > > > > > > > "08:9E:01:58:CC:A1" > > > > > > > ], > > > > > > > "cpu":"4", > > > > > > > "memory":"8192", > > > > > > > "disk":"10", > > > > > > > "arch":"x86_64", > > > > > > > "pm_user":"root", > > > > > > > "pm_password”:””, > > > > > > > "pm_addr":"192.168.0.18" > > > > > > > }, > > > > > > > { > > > > > > > "pm_type":"pxe_ipmitool", > > > > > > > "mac":[ > > > > > > > "08:9E:01:58:D0:3D" > > > > > > > ], > > > > > > > "cpu":"4", > > > > > > > "memory":"8192", > > > > > > > "disk":"100", > > > > > > > "arch":"x86_64", > > > > > > > "pm_user":"root", > > > > > > > "pm_password”:””, > > > > > > > "pm_addr":"192.168.0.19" > > > > > > > } > > > > > > > ] > > > > > > > } > > > > > > > > > > > > > > This is my undercloud.conf file: > > > > > > > image_path = . > > > > > > > local_ip = 192.0.2.1/24 > > > > > > > local_interface = em2 > > > > > > > masquerade_network = 192.0.2.0/24 > > > > > > > dhcp_start = 192.0.2.5 > > > > > > > dhcp_end = 192.0.2.24 > > > > > > > network_cidr = 192.0.2.0/24 > > > > > > > network_gateway = 192.0.2.1 > > > > > > > inspection_interface = br-ctlplane > > > > > > > inspection_iprange = 192.0.2.100,192.0.2.120 > > > > > > > inspection_runbench = false > > > > > > > undercloud_debug = true > > > > > > > enable_tuskar = false > > > > > > > enable_tempest = false > > > > > > > > > > > > > > > > > > > > > > > > > > > > I have previously sent the screenshot of the consoles during > > > > > > > introspection > > > > > > > stage. Now I am attaching them again. > > > > > > > I cannot login to consoles because introspection stage is not > > > > > > > completed > > > > > > > successfully and I don't know the IP addresses. (nova list is > > > > > > > empty) > > > > > > > (I don't know if I can login with the IP addresses that I was > > > > > > > previously > > > > > > > set > > > > > > > by myself. I am not able to reach the nodes now, from home.) > > > > > > > > > > > > > > I ran the flavor-create command after the introspection stage. > > > > > > > But > > > > > > > introspection was not completed successfully, > > > > > > > I just ran deploy command to see if nova list fills during > > > > > > > deployment. > > > > > > > > > > > > > > > > > > > > > Esra ÇEL?K > > > > > > > TÜB?TAK B?LGEM > > > > > > > www.bilgem.tubitak.gov.tr > > > > > > > celik.esra at tubitak.gov.tr > > > > > > > > > > > > > > > > > > > > > > > > > > > > ----- Sasha Chuzhoy ?öyle yaz?yor:> Esra, > > > > > > > Is > > > > > > > it > > > > > > > possible to check the console of the nodes being introspected > > > > > > > and/or > > > > > > > deployed? I wonder if the instackenv.json file is accurate. 
Also, > > > > > > > what's > > > > > > > the > > > > > > > output from 'nova flavor-list'? Thanks. Best regards, Sasha > > > > > > > Chuzhoy. > > > > > > > ----- > > > > > > > Original Message ----- > From: "Esra Celik" > > > > > > > > > > > > > > > > > > > > > > To: "Marius Cornea" > Cc: "Sasha Chuzhoy" > > > > > > > , rdo-list at redhat.com > Sent: Monday, October > > > > > > > 19, > > > > > > > 2015 > > > > > > > 9:51:51 AM > Subject: Re: [Rdo-list] OverCloud deploy fails with > > > > > > > error > > > > > > > "No > > > > > > > valid host was found" > > All 3 baremetal nodes (1 undercloud, 2 > > > > > > > overcloud) > > > > > > > have 2 nics. > > the undercloud machine's ip config is as > > > > > > > follows: > > > > > > > > > > > > > > > > > > > > > > > [stack at undercloud ~]$ ip addr > 1: lo: mtu > > > > > > > 65536 > > > > > > > qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd > > > > > > > 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft > > > > > > > forever > > > > > > > preferred_lft forever > inet6 ::1/128 scope host > valid_lft > > > > > > > forever > > > > > > > preferred_lft forever > 2: em1: > > > > > > > mtu > > > > > > > 1500 > > > > > > > qdisc mq state UP qlen > 1000 > link/ether 08:9e:01:50:8a:21 brd > > > > > > > ff:ff:ff:ff:ff:ff > inet 10.1.34.81/24 brd 10.1.34.255 scope > > > > > > > global > > > > > > > em1 > > > > > > > > > > > > > > > valid_lft forever preferred_lft forever > inet6 > > > > > > > fe80::a9e:1ff:fe50:8a21/64 > > > > > > > scope link > valid_lft forever preferred_lft forever > 3: em2: > > > > > > > mtu 1500 qdisc mq master > > > > > > > ovs-system > > > > > > > > > > > > > > > state UP qlen 1000 > link/ether 08:9e:01:50:8a:22 brd > > > > > > > ff:ff:ff:ff:ff:ff > > > > > > > > > > > > > > > 4: > > > > > > > ovs-system: mtu 1500 qdisc noop state DOWN > > > > > > > > > > > > > > > link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff > 5: > > > > > > > br-ctlplane: > > > > > > > mtu 1500 qdisc noqueue > state > > > > > > > UNKNOWN > > > > > > > > > > > > > > > link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff > inet > > > > > > > 192.0.2.1/24 > > > > > > > brd > > > > > > > 192.0.2.255 scope global br-ctlplane > valid_lft forever > > > > > > > preferred_lft > > > > > > > forever > inet6 fe80::a9e:1ff:fe50:8a22/64 scope link > valid_lft > > > > > > > forever > > > > > > > preferred_lft forever > 6: br-int: mtu 1500 > > > > > > > qdisc > > > > > > > noop > > > > > > > state DOWN > link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff > > > > > > > > > > > > > > > > I > > > > > > > am > > > > > > > using em2 for pxe boot on the other machines.. So I configured > > > > > > > > instackenv.json to have em2's MAC address > For overcloud nodes, > > > > > > > em1 > > > > > > > was > > > > > > > configured to have 10.1.34.x ip, but after image > deploy I am > > > > > > > not > > > > > > > sure > > > > > > > what > > > > > > > happened for that nic. 
Thanks

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Original Message -----
From: "Marius Cornea"
To: "Esra Celik"
Cc: "Sasha Chuzhoy", rdo-list at redhat.com
Sent: Monday, October 19, 2015 15:36:58
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Hi,

I believe the nodes were stuck in introspection, so they were not ready for deployment, thus the "not enough hosts" message. Can you describe the networking setup (how many NICs the nodes have and to what networks they're connected)?

Thanks,
Marius

----- Original Message -----
From: "Esra Celik"
To: "Sasha Chuzhoy"
Cc: "Marius Cornea", rdo-list at redhat.com
Sent: Monday, October 19, 2015 12:34:32 PM
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Hi again,

"nova list" was empty after the introspection stage, which had not completed successfully, so I could not ssh to the nodes. Is there another way to obtain the IP addresses?

[stack at undercloud ~]$ sudo systemctl | grep ironic
openstack-ironic-api.service               loaded active running OpenStack Ironic API service
openstack-ironic-conductor.service         loaded active running OpenStack Ironic Conductor service
openstack-ironic-inspector-dnsmasq.service loaded active running PXE boot dnsmasq service for Ironic Inspector
openstack-ironic-inspector.service         loaded active running Hardware introspection service for OpenStack Ironic

If I start deployment anyway, I get 2 nodes in ERROR state:

[stack at undercloud ~]$ openstack overcloud deploy --templates
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Stack failed with status: resources.Controller: resources[0]: ResourceInError:
resources.Controller: Went to status ERROR due to "Message: No valid host was
found. There are not enough hosts available., Code: 500"

[stack at undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| ID                                   | Name                    | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+
| 3a8e1fe4-d189-4fce-9912-dcf49fefb000 | overcloud-controller-0  | ERROR  | -          | NOSTATE     |          |
| 616b45c6-2749-418f-8aa4-fe2bfe164782 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+-------------------------+--------+------------+-------------+----------+

Did the repositories update during the weekend? Should I rather restart the whole Undercloud and Overcloud installation from the beginning?

Thanks.

Esra ÇELİK
Expert Researcher (Uzman Araştırmacı)
Information Technologies Institute (Bilişim Teknolojileri Enstitüsü)
TÜBİTAK BİLGEM
41470 GEBZE - KOCAELİ
T +90 262 675 3140
F +90 262 646 3187
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

................................................................
Disclaimer

----- Original Message -----
From: "Sasha Chuzhoy"
To: "Esra Celik"
Cc: "Marius Cornea", rdo-list at redhat.com
Sent: Friday, October 16, 2015 18:44:49
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Hi Esra,
if the overcloud nodes are UP, you can log in with: ssh heat-admin@<node IP>
You can see the IPs of the nodes with "nova list".
BTW, what do you see if you run "sudo systemctl | grep ironic" on the undercloud?

Best regards,
Sasha Chuzhoy.
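For example (a sketch; it assumes the nodes reached ACTIVE and the Networks column shows ctlplane addresses from the 192.0.2.0/24 range configured above; 192.0.2.8 is a made-up example):

    source ~/stackrc
    nova list
    # log in to a single node using an IP from the Networks column
    ssh heat-admin@192.0.2.8
    # or loop over every ctlplane address nova reports
    for ip in $(nova list | grep -o '192\.0\.2\.[0-9]*'); do
        ssh heat-admin@"$ip" hostname
    done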
----- Original Message -----
From: "Esra Celik"
To: "Sasha Chuzhoy"
Cc: "Marius Cornea", rdo-list at redhat.com
Sent: Friday, October 16, 2015 1:40:16 AM
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Hi Sasha,

I have 3 nodes: 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute.

This is my undercloud.conf file:

image_path = .
local_ip = 192.0.2.1/24
local_interface = em2
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
network_cidr = 192.0.2.0/24
network_gateway = 192.0.2.1
inspection_interface = br-ctlplane
inspection_iprange = 192.0.2.100,192.0.2.120
inspection_runbench = false
undercloud_debug = true
enable_tuskar = false
enable_tempest = false

IP configuration for the Undercloud is as follows:

[stack at undercloud ~]$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP qlen 1000
    link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
    inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
       valid_lft forever preferred_lft forever
3: em2: mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
4: ovs-system: mtu 1500 qdisc noop state DOWN
    link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
       valid_lft forever preferred_lft forever
    inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
       valid_lft forever preferred_lft forever
6: br-int: mtu 1500 qdisc noop state DOWN
    link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff

And I attached two screenshots showing the boot stage for the overcloud nodes.

Is there a way to log in to the overcloud nodes to see their IP configuration?

Thanks

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Original Message -----
From: "Sasha Chuzhoy"
To: "Esra Celik"
Cc: "Marius Cornea", rdo-list at redhat.com
Sent: Thursday, October 15, 2015 16:58:41
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Just my 2 cents.
Did you make sure that all the registered nodes are configured to boot off the right NIC first?
Can you watch the console and see what happens on the problematic nodes upon boot?

Best regards,
Sasha Chuzhoy.
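If a node is set to boot off the wrong NIC, the BMC behind the pxe_ipmitool driver can usually be told to PXE boot remotely, e.g. (a sketch, reusing the IPMI addresses that appear elsewhere in this thread):

    # request PXE as the boot device for the next boot, then power cycle
    ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis bootdev pxe
    ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis power cycle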
----- Original Message -----
From: "Esra Celik"
To: "Marius Cornea"
Cc: rdo-list at redhat.com
Sent: Thursday, October 15, 2015 4:40:46 AM
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Sorry for the late reply.

The ironic node-show results are below. My nodes are powered on after "introspection bulk start", and I get the following warning:

Introspection didn't finish for nodes 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218

It doesn't seem to be the same issue as https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html

[stack at undercloud ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
| 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+

[stack at undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| target_power_state     | None                                                                    |
| extra                  | {}                                                                      |
| last_error             | None                                                                    |
| updated_at             | 2015-10-15T08:26:42+00:00                                               |
| maintenance_reason     | None                                                                    |
| provision_state        | available                                                               |
| clean_step             | {}                                                                      |
| uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed                                    |
| console_enabled        | False                                                                   |
| target_provision_state | None                                                                    |
| provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
| maintenance            | False                                                                   |
| inspection_started_at  | None                                                                    |
| inspection_finished_at | None                                                                    |
| power_state            | power on                                                                |
| driver                 | pxe_ipmitool                                                            |
| reservation            | None                                                                    |
| properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10',     |
|                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
| instance_uuid          | None                                                                    |
| name                   | None                                                                    |
| driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18',         |
|                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
|                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
|                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
| created_at             | 2015-10-15T07:49:08+00:00                                               |
| driver_internal_info   | {u'clean_steps': None}                                                  |
| chassis_uuid           |                                                                         |
| instance_info          | {}                                                                      |
+------------------------+-------------------------------------------------------------------------+

[stack at undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| target_power_state     | None                                                                    |
| extra                  | {}                                                                      |
| last_error             | None                                                                    |
| updated_at             | 2015-10-15T08:26:42+00:00                                               |
| maintenance_reason     | None                                                                    |
| provision_state        | available                                                               |
| clean_step             | {}                                                                      |
| uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218                                    |
| console_enabled        | False                                                                   |
| target_provision_state | None                                                                    |
| provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
| maintenance            | False                                                                   |
| inspection_started_at  | None                                                                    |
| inspection_finished_at | None                                                                    |
| power_state            | power on                                                                |
| driver                 | pxe_ipmitool                                                            |
| reservation            | None                                                                    |
| properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100',    |
|                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
| instance_uuid          | None                                                                    |
| name                   | None                                                                    |
| driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19',         |
|                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
|                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-                           |
|                        | 0d88-4632-af98-8defb05ca6e2'}                                           |
| created_at             | 2015-10-15T07:49:08+00:00                                               |
| driver_internal_info   | {u'clean_steps': None}                                                  |
| chassis_uuid           |                                                                         |
| instance_info          | {}                                                                      |
+------------------------+-------------------------------------------------------------------------+

And below I added my history for the stack user.
I don't think I am doing anything other than what is in the https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty doc:

  1  vi instackenv.json
  2  sudo yum -y install epel-release
  3  sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
  4  sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
  5  sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
  6  sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
     includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
     EOF"
  7  sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
  8  sudo yum -y install yum-plugin-priorities
  9  sudo yum install -y python-tripleoclient
 10  cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
 11  vi undercloud.conf
 12  export DIB_INSTALLTYPE_puppet_modules=source
 13  openstack undercloud install
 14  source stackrc
 15  export NODE_DIST=centos7
 16  export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
 17  export DIB_INSTALLTYPE_puppet_modules=source
 18  openstack overcloud image build --all
 19  ls
 20  openstack overcloud image upload
 21  openstack baremetal import --json instackenv.json
 22  openstack baremetal configure boot
 23  ironic node-list
 24  openstack baremetal introspection bulk start
 25  ironic node-list
 26  ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
 27  ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
 28  history

Thanks

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr

----- Original Message -----
From: "Marius Cornea"
To: "Esra Celik"
Cc: "Ignacio Bravo", rdo-list at redhat.com
Sent: Wednesday, October 14, 2015 19:40:07
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Can you do ironic node-show for your ironic nodes and post the results?

Also check the following suggestion if you're experiencing the same issue:
https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html

----- Original Message -----
From: "Esra Celik"
To: "Marius Cornea"
Cc: "Ignacio Bravo", rdo-list at redhat.com
Sent: Wednesday, October 14, 2015 3:22:20 PM
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

Well, in the early stage of the introspection I can see the client IPs of the nodes (screenshot attached). But then I see continuous ironic-python-agent errors (screenshot-2 attached). The errors repeat after a timeout, and the nodes are not powered off. It seems like I am stuck in the introspection stage.

I can use the ipmitool command to successfully power the nodes on and off:

[stack at undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P <password> power status
Chassis Power is on
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P <password> chassis power status
Chassis Power is on
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P <password> chassis power off
Chassis Power Control: Down/Off
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P <password> chassis power status
Chassis Power is off
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P <password> chassis power on
Chassis Power Control: Up/On
[stack at undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P <password> chassis power status
Chassis Power is on

Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra at tubitak.gov.tr
----- Original Message -----
From: "Marius Cornea"
To: "Esra Celik"
Cc: "Ignacio Bravo", rdo-list at redhat.com
Sent: Wednesday, October 14, 2015 14:59:30
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

> ----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo", rdo-list at redhat.com
> Sent: Wednesday, October 14, 2015 10:49:01 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> Well, today I started by re-installing the OS, and nothing seems wrong with the undercloud installation; then I see an error during the image build:
>
> [stack at undercloud ~]$ openstack overcloud image build --all
> ... a lot of log ...
> ++ cat /etc/dib_dracut_drivers
> + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ / --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net virtio_blk target_core_mod iscsi_target_mod target_core_iblock target_core_file target_core_pscsi configfs' -o 'dash plymouth' /tmp/ramdisk
> cat: write error: Broken pipe
> + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> + chmod o+r /tmp/kernel
> + trap EXIT
> + target_tag=99-build-dracut-ramdisk
> + date +%s.%N
> + output '99-build-dracut-ramdisk completed'
> ... a lot of log ...

You can ignore that AFAIK; if you end up having all the required images, it should be OK.
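For example, a quick sanity check is that the build left the expected image files in the stack user's home directory (a sketch; the exact file names depend on the release):

    [stack at undercloud ~]$ ls -lh ~/*.qcow2 ~/*.vmlinuz ~/*.initrd ~/*.kernel ~/*.initramfs

`openstack overcloud image upload` will complain about any file it cannot find.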
> Then, during the introspection stage, I see ironic-python-agent errors on the nodes (screenshot attached) and the following warnings.

That looks odd. Is it showing up in the early stage of the introspection? At some point the node should receive an address by DHCP and the "Network is unreachable" error should disappear. Does the introspection complete, and are the nodes turned off?

> [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url" from group "pxe" is deprecated. Use option "http_url" from group "deploy".
> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root" from group "pxe" is deprecated. Use option "http_root" from group "deploy".
>
> Before deployment, ironic node-list:

This is odd too, as I'm expecting the nodes to be powered off before running deployment.

> [stack at undercloud ~]$ ironic node-list
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
> During deployment I get the following errors:
>
> [root at localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while running command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI call failed: power status.

This looks like an IPMI error; can you try to manually run commands using ipmitool and see if you get any success?
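Note that ironic passes the password via a temporary file (the -f flag in the log above), so a faithful reproduction of the failing call would be something like this (a sketch; /tmp/ipmi.pass is just an example path):

    # put the IPMI password in a file, then run what ironic-conductor runs
    echo -n 'YOUR_IPMI_PASSWORD' > /tmp/ipmi.pass
    ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/ipmi.pass power status

If that reproduces the "Unexpected error", the problem sits between ipmitool and the BMC rather than in ironic itself.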
It's also worth filing a bug with details such as the ipmitool version, server model, and DRAC firmware version.

Thanks a lot.

----- Original Message -----
From: "Marius Cornea"
To: "Esra Celik"
Cc: "Ignacio Bravo", rdo-list at redhat.com
Sent: Tuesday, October 13, 2015 21:16:14
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"

> ----- Original Message -----
> From: "Esra Celik"
> To: "Marius Cornea"
> Cc: "Ignacio Bravo", rdo-list at redhat.com
> Sent: Tuesday, October 13, 2015 5:02:09 PM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
>
> During deployment they are powering on and deploying the images. I see a lot of connection error messages about ironic-python-agent but ignore them, as mentioned here:
> https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html

That was referring to the introspection stage. From what I can tell, you are experiencing issues during deployment, as it fails to provision the nova instances. Can you check whether the nodes get powered on during that stage?

Make sure that before overcloud deploy the ironic nodes are available for provisioning (run ironic node-list and check the provisioning state column).
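Something like this (a sketch):

    source ~/stackrc
    # every node you want deployed should show Provisioning State 'available'
    # and Maintenance 'False'
    ironic node-list
    # quick count of nodes in the 'available' state
    ironic node-list | grep -c ' available '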
Also check that you didn't miss any step in the docs with regard to kernel and ramdisk assignment, introspection, and flavor creation (so it matches the nodes' resources):
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/basic_deployment/basic_deployment_cli.html

> In the instackenv.json file I do not need to add the undercloud node, or do I?

No, the nodes' details should be enough.

> And which log files should I watch during deployment?

You can check the openstack-ironic-conductor logs (journalctl -fl -u openstack-ironic-conductor.service) and the logs in /var/log/nova.
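For the flavor part in particular, it boils down to something like this (a sketch; the flavor must not ask for more than introspection recorded on the nodes, i.e. cpus=4, memory_mb=8192, and local_gb of 10 and 100 above):

    openstack flavor create --id auto --ram 8192 --disk 9 --vcpus 4 baremetal
    openstack flavor set --property "cpu_arch"="x86_64" \
        --property "capabilities:boot_option"="local" baremetal

The --disk value is kept below the smallest local_gb (10 GB here) on purpose: if the flavor asks for more resources than a node reports, the scheduler filters that node out, and that is exactly how you end up with "No valid host was found".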
> Thanks
> Esra

> ----- Orijinal Mesaj -----
> Kimden: Marius Cornea
> Kime: Esra Celik
> Kk: Ignacio Bravo, rdo-list at redhat.com
> Gönderilenler: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> was found"
>
> ----- Original Message -----
> From: "Esra Celik"
> To: "Ignacio Bravo"
> Cc: rdo-list at redhat.com
> Sent: Tuesday, October 13, 2015 3:47:57 PM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> host was found"
>
> Actually I re-installed the OS for Undercloud before deploying. However
> I did not re-install the OS in Compute and Controller nodes.. I will
> reinstall basic OS for them too, and retry..
>
> You don't need to reinstall the OS on the controller and compute, they
> will get the image served by the undercloud. I'd recommend that during
> deployment you watch the servers console and make sure they get powered
> on, pxe boot, and actually get the image deployed.
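A simple way to follow that along from the undercloud while the deploy
runs is something like this (illustrative):

  $ watch -n 10 "ironic node-list"

During a healthy deploy the power state goes to "power on" and the
provisioning state typically moves through "deploying" and
"wait call-back" before ending up "active".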
> Thanks

> Thanks
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr

> Kimden: "Ignacio Bravo"
> Kime: "Esra Celik"
> Kk: rdo-list at redhat.com
> Gönderilenler: 13 Ekim Salı 2015 16:36:06
> Konu: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> was found"
>
> Esra,
>
> I encountered the same problem after deleting the stack and
> re-deploying.
>
> It turns out that 'heat stack-delete overcloud' does remove the nodes
> from 'nova list' and one would assume that the baremetal servers are
> now ready to be used for the next stack, but when redeploying, I get
> the same message of not enough hosts available.
>
> You can look into the nova logs and it mentions something about 'node
> xxx is already associated with UUID yyyy' and 'I tried 3 times and I'm
> giving up'. The issue is that the UUID yyyy belonged to a prior
> unsuccessful deployment.
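For what it's worth, that stuck association is visible from the
undercloud; something along these lines (node UUID made up) shows
whether ironic still thinks a node belongs to an old instance:

  $ ironic node-show 11111111-2222-3333-4444-555555555555 | grep instance_uuid

If instance_uuid still points at a server that no longer exists in
'nova list', the node will keep being skipped by the scheduler.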
> I'm now redeploying the basic OS to start from scratch again.
>
> IB
>
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
> Office: (703) 951-7760
>
> On Oct 13, 2015, at 9:25 AM, Esra Celik <celik.esra at tubitak.gov.tr> wrote:
>
> Hi all,
>
> OverCloud deploy fails with error "No valid host was found"
>
> [stack at undercloud ~]$ openstack overcloud deploy --templates
> Deploying templates in the directory
> /usr/share/openstack-tripleo-heat-templates
> Stack failed with status: Resource CREATE failed: resources.Compute:
> ResourceInError: resources[0].resources.NovaCompute: Went to status
> ERROR due to "Message: No valid host was found. There are not enough
> hosts available., Code: 500"
> Heat Stack create failed.
>
> Here are some logs:
> Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
>
> +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> | resource_name | physical_resource_id                 | resource_type            | resource_status    | updated_time        | stack_name                                       |
> +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller  | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute     | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server         | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server         | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
> +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> [stack at undercloud ~]$ heat resource-show overcloud Compute
> +------------------------+------------------------------------------------------------------------------------------------+
> | Property               | Value                                                                                          |
> +------------------------+------------------------------------------------------------------------------------------------+
> | attributes             | {                                                                                              |
> |                        |   "attributes": null,                                                                          |
> |                        |   "refs": null                                                                                 |
> |                        | }                                                                                              |
> | creation_time          | 2015-10-13T10:20:36                                                                            |
> | description            |                                                                                                |
> | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70/resources/Compute (self) |
> |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud/620ada83-fb7d-4769-b1e8-431cfbb37d70 (stack) |
> |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overcloud-Compute-vqk632ysg64r/e33b6b1e-8740-4ded-ad7f-720617a03393 (nested) |
> | logical_resource_id    | Compute                                                                                        |
> | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393                                                           |
> | required_by            | ComputeAllNodesDeployment                                                                      |
> |                        | ComputeNodesPostDeployment                                                                     |
> |                        | ComputeCephDeployment                                                                          |
> |                        | ComputeAllNodesValidationDeployment                                                            |
> |                        | AllNodesExtraConfig                                                                            |
> |                        | allNodesConfig                                                                                 |
> | resource_name          | Compute                                                                                        |
> | resource_status        | CREATE_FAILED                                                                                  |
> | resource_status_reason | resources.Compute: ResourceInError:                                                            |
> |                        | resources[0].resources.NovaCompute: Went to status ERROR due to "Message:                     |
> |                        | No valid host was found. There are not enough hosts available., Code: 500"                     |
> | resource_type          | OS::Heat::ResourceGroup                                                                        |
> | updated_time           | 2015-10-13T10:20:36                                                                            |
> +------------------------+------------------------------------------------------------------------------------------------+
>
> This is my instackenv.json for 1 compute and 1 control node to be
> deployed.
>
> {
>   "nodes": [
>     {
>       "pm_type":"pxe_ipmitool",
>       "mac":[
>         "08:9E:01:58:CC:A1"
>       ],
>       "cpu":"4",
>       "memory":"8192",
>       "disk":"10",
>       "arch":"x86_64",
>       "pm_user":"root",
>       "pm_password":"calvin",
>       "pm_addr":"192.168.0.18"
>     },
>     {
>       "pm_type":"pxe_ipmitool",
>       "mac":[
>         "08:9E:01:58:D0:3D"
>       ],
>       "cpu":"4",
>       "memory":"8192",
>       "disk":"100",
>       "arch":"x86_64",
>       "pm_user":"root",
>       "pm_password":"calvin",
>       "pm_addr":"192.168.0.19"
>     }
>   ]
> }
>
> Any ideas?
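One thing worth checking against this file, given the flavor-matching
advice earlier in the thread, is whether the deploy flavor fits the
smaller node (illustrative; the flavor name may differ in your setup):

  $ openstack flavor show baremetal -c vcpus -c ram -c disk

If the flavor requests more disk than a node reports (the first node
above advertises only 10 GB), the scheduler filters that node out, and
the result is exactly "No valid host was found".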
> Thanks in advance
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
> www.bilgem.tubitak.gov.tr
> celik.esra at tubitak.gov.tr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From mkassawara at gmail.com  Wed Oct 21 13:32:23 2015
From: mkassawara at gmail.com (Matt Kassawara)
Date: Wed, 21 Oct 2015 07:32:23 -0600
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To: <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com>
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com>
	<561FA38F.6050508@redhat.com>
	<1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com>
	<256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com>
Message-ID:

I think packages available for standalone installation (i.e., without a
deployment tool) should include complete upstream configuration files in
standard locations without modification.

In the case of *-dist.conf files with RDO packages, they seldom receive
updates, leading to deprecation warnings, and they sometimes override
useful upstream default values. For example, most if not all services
default to keystone for authentication (auth_strategy), yet the RDO
neutron packages revert authentication to "noauth" in the *-dist.conf
file. In another example, the RDO keystone package only includes the
keystone-paste.ini file as /usr/share/keystone/keystone-dist-paste.ini
rather than using the standard location and name, which leads to
confusion, particularly for new users.
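To make the mechanics concrete, the pattern looks roughly like this (a
sketch, not the literal RDO unit file): the dist file is passed first,
so values repeated in /etc win, and anything set only in the dist file
silently becomes the effective default.

  # /usr/share/neutron/neutron-dist.conf (illustrative excerpt)
  [DEFAULT]
  auth_strategy = noauth

  # service startup: dist file first, operator file second
  neutron-server --config-file /usr/share/neutron/neutron-dist.conf \
                 --config-file /etc/neutron/neutron.conf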
The installation guide contains quite a few extra steps and option-value
pairs that work around the existence and contents of *-dist.conf
files... additions that unnecessarily increase complexity for our
audience of new users.

On Wed, Oct 21, 2015 at 4:22 AM, Ihar Hrachyshka wrote:

> > On 21 Oct 2015, at 12:02, Alan Pevec wrote:
> >
> > 2015-10-21 10:48 GMT+02:00 Ihar Hrachyshka:
> >>> On 15 Oct 2015, at 20:31, Matt Kassawara wrote:
> >>>
> >>> 4) Packages only reference upstream configuration files in standard
> >>> locations (e.g., /etc/keystone).
> >>
> >> Not sure what exactly it means. RDO packages are using
> >> neutron-dist.conf, which contains RDO-specific default configuration
> >> located under /usr/share/, for quite a long time.
> >
> > Yes, it's about dist.conf files that are a unique solution to provide
> > distro-specific default values.
> > I'm not sure how other distros are solving this, if at all; they
> > probably either rely on upstream defaults or their configuration
> > management tools?
> > Thing is that upstream defaults cannot fit all distributions, so I
> > would expect all distros to pick up our dist.conf solution, but we
> > probably have not been explaining and advertising it enough, hence
> > the confusion in the upstream docs.
>
> I suspect other distros may just modify /etc/neutron/neutron.conf as
> they see fit. It's obviously not the cleanest solution.
>
> I believe enforcing a specific way to configure services upon
> distributions is not the job of upstream, as long as the default
> upstream way (modifying upstream configuration files located in
> /etc/<project>/*.conf) works.
>
> Ihar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jrichar1 at ball.com  Wed Oct 21 13:34:38 2015
From: jrichar1 at ball.com (Richards, Jeff)
Date: Wed, 21 Oct 2015 13:34:38 +0000
Subject: [Rdo-list] Overcloud controller UI
Message-ID: <6D1DB475E9650E4EADE65C051EFBB98B468B103E@EX2010-DTN-03.AERO.BALL.com>

A few days ago I was finally able to set up a 1+1 basic deploy, all
virtual. Now I am trying to get into the overcloud controller dashboard.

I fixed some issues with the Apache config on the overcloud controller
with server name aliases causing problems, then I fixed a bug with the
Django CACHES setting (passing a list for LOCATION, which crashes
LocMemCache). Then I figured out that Keystone handles the
authentication and where the username/pass were. I can now log in
successfully with the admin account, but the dashboard just redirects
back to the login screen immediately (logs show successful login).

Any tips on where to start learning how to enable the dashboard? I'm not
sure where to go from here other than diving into the Horizon python
code.
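For reference, the stock local-memory backend expects a plain string for
LOCATION, so a working setting looks something like this (illustrative;
on RDO this lives in /etc/openstack-dashboard/local_settings):

  CACHES = {
      'default': {
          'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
          'LOCATION': 'horizon-cache',  # a single string, not a list
      }
  }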
Jeff Richards

This message and any enclosures are intended only for the addressee.
Please notify the sender by email if you are not the intended recipient.
If you are not the intended recipient, you may not use, copy, disclose,
or distribute this message or its contents or enclosures to any other
person and any such actions may be unlawful. Ball reserves the right to
monitor and review all messages and enclosures sent to or from this
email address.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pgsousa at gmail.com  Wed Oct 21 11:10:38 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Wed, 21 Oct 2015 12:10:38 +0100
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: <56276EE2.6010109@redhat.com>
References: <56276EE2.6010109@redhat.com>
Message-ID:

Hi John,

I've managed to install on baremetal following this howto:
https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on
liberty)

I have 3 Controllers + 1 Compute (HA and Network Isolation). However I'm
having some issues logging on (maybe some keystone issue) and some issue
with openvswitch that I'm trying to address with Marius Cornea's help.

Regards,
Pedro Sousa


On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge wrote:

> Hola rdoers,
>
> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a
> status update for the RDO Manager installer. I would also like to gather
> feedback on how other community participants feel about that status as
> it relates to RDO Manager participating in the GA. That feedback can
> come as replies to this thread, or even better there is a packaging
> meeting on #rdo at 1500 UTC today and we can discuss it further then.
>
> tldr;
> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on
> virtual hardware have been verified to work with GA bits, however bare
> metal installs have not yet been verified.
>
> I would like to start with some historical context here, as it seems we
> have picked up quite a few new active community members recently (again
> woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a
> successful end to end demo with a single controller and single compute
> node, and only by using a special delorean server pulling bits from a
> special github organization (rdo-management). We were able to get it
> consistently deploying **virtual** HA w/ ceph in CI by the middle of the
> Liberty upstream cycle. Then, due largely to the fact that there was
> nobody being paid to work full time on RDO Manager, and the people who
> were contributing in more or less "extra" time were getting swamped with
> releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief
> 24 hour periods where someone would spend a weekend fixing things only
> to have it break again early the following week.
>
> There have been many improvements in the recent weeks to this sad state
> of affairs. Firstly, we have upstreamed almost everything from the
> rdo-management github org directly into openstack projects. Secondly,
> there is a single source for delorean packages for both core openstack
> packages and the tripleo and ironic packages that make up RDO Manager.
> These two things may seem a bit trivial to a newcomer to the project,
> but they are actually fixes for the biggest cause of the RDO Manager
> Kilo CI breaking. I think with those two fixes (plus some work on
> upstream tripleo CI) we have set ourselves up to make steady forward
> progress rather than spending all our time troubleshooting complete
> breakages.
(Although this is still openstack so complete breakages will > still happen from time to time :p) > > Another very easy to overlook improvement over where we were at Kilo GA, > is that we actually have all RDO Manager packages (minus a couple EPEL > dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we > did not even have everything officially packaged, rather only in our > special delorean instance. > > All this leads to my opinion that RDO Manager should participate in the > RDO GA. I am unconvinced that bare metal installs can not be made to > work with some extra documentation or configuration changes. However, > even if that is not the case, we are in a drastically better place than > we were at the beginning of the Kilo cycle. > > That said, this is a community, and I would like to hear how other > community participants both from RDO in general and RDO Manager > specifically feel about this. Ideally, if someone thinks the RDO Manager > release should be blocked, there should be a BZ with the blocker flag > proposed so that there is actionable criteria to unblock the release. > > Thanks for all your hard work to get to this point, and lets keep it > rolling. > > -trown > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Wed Oct 21 15:10:19 2015 From: dms at redhat.com (David Moreau Simard) Date: Wed, 21 Oct 2015 11:10:19 -0400 Subject: [Rdo-list] RDO Manager status for Liberty GA In-Reply-To: <56276EE2.6010109@redhat.com> References: <56276EE2.6010109@redhat.com> Message-ID: ++ for all the work involved in getting RDO Manager back in shape. GA or not, I'm very happy to aim for it gating for package releases along with the rest of the CI (packstack et al). It will greatly improve the quality and stability of what we ship in the bigger RDO project. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Oct 21, 2015 at 6:54 AM, John Trowbridge wrote: > Hola rdoers, > > The plan is to GA RDO Liberty today (woot!), so I wanted to send out a > status update for the RDO Manager installer. I would also like to gather > feedback on how other community participants feel about that status as > it relates to RDO Manager participating in the GA. That feedback can > come as replies to this thread, or even better there is a packaging > meeting on #rdo at 1500 UTC today and we can discuss it further then. > > tldr; > RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on > virtual hardware have been verified to work with GA bits, however bare > metal installs have not yet been verified. > > I would like to start with some historical context here, as it seems we > have picked up quite a few new active community members recently (again > woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a > successful end to end demo with a single controller and single compute > node, and only by using a special delorean server pulling bits from a > special github organization (rdo-management). We were able to get it > consistently deploying **virtual** HA w/ ceph in CI by the middle of the > Liberty upstream cycle. 
Then, due largely to the fact that there was > nobody being paid to work full time on RDO Manager, and the people who > were contributing in more or less "extra" time were getting swamped with > releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief > 24 hour periods where someone would spend a weekend fixing things only > to have it break again early the following week. > > There have been many improvements in the recent weeks to this sad state > of affairs. Firstly, we have upstreamed almost everything from the > rdo-management github org directly into openstack projects. Secondly, > there is a single source for delorean packages for both core openstack > packages and the tripleo and ironic packages that make up RDO Manager. > These two things may seem a bit trivial to a newcomer to the project, > but they are actually fixes for the biggest cause of the RDO Manager > Kilo CI breaking. I think with those two fixes (plus some work on > upstream tripleo CI) we have set ourselves up to make steady forward > progress rather than spending all our time troubleshooting complete > breakages. (Although this is still openstack so complete breakages will > still happen from time to time :p) > > Another very easy to overlook improvement over where we were at Kilo GA, > is that we actually have all RDO Manager packages (minus a couple EPEL > dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we > did not even have everything officially packaged, rather only in our > special delorean instance. > > All this leads to my opinion that RDO Manager should participate in the > RDO GA. I am unconvinced that bare metal installs can not be made to > work with some extra documentation or configuration changes. However, > even if that is not the case, we are in a drastically better place than > we were at the beginning of the Kilo cycle. > > That said, this is a community, and I would like to hear how other > community participants both from RDO in general and RDO Manager > specifically feel about this. Ideally, if someone thinks the RDO Manager > release should be blocked, there should be a BZ with the blocker flag > proposed so that there is actionable criteria to unblock the release. > > Thanks for all your hard work to get to this point, and lets keep it > rolling. > > -trown > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ohochman at redhat.com Wed Oct 21 15:16:03 2015 From: ohochman at redhat.com (Omri Hochman) Date: Wed, 21 Oct 2015 11:16:03 -0400 (EDT) Subject: [Rdo-list] RDO Manager status for Liberty GA In-Reply-To: References: <56276EE2.6010109@redhat.com> Message-ID: <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Pedro Sousa" > To: "John Trowbridge" > Cc: "rdo-list" > Sent: Wednesday, October 21, 2015 7:10:38 AM > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > > Hi John, > > I've managed to install on baremetal following this howto: > https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on > liberty) Hey Pedro, Are you using: yum install -y http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm to get the latest RDO GA bits ? We're failing in overcloud deployment on BM with several issues. Thanks, Omri. 
> > I have 3 Controllers + 1 Compute (HA and Network Isolation). However I'm > having some issues logging on (maybe some keystone issue) and some issue > with openvswitch that I'm trying to address with Marius Cornea help. > > Regards, > Pedro Sousa > > > On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com > wrote: > > > Hola rdoers, > > The plan is to GA RDO Liberty today (woot!), so I wanted to send out a > status update for the RDO Manager installer. I would also like to gather > feedback on how other community participants feel about that status as > it relates to RDO Manager participating in the GA. That feedback can > come as replies to this thread, or even better there is a packaging > meeting on #rdo at 1500 UTC today and we can discuss it further then. > > tldr; > RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on > virtual hardware have been verified to work with GA bits, however bare > metal installs have not yet been verified. > > I would like to start with some historical context here, as it seems we > have picked up quite a few new active community members recently (again > woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a > successful end to end demo with a single controller and single compute > node, and only by using a special delorean server pulling bits from a > special github organization (rdo-management). We were able to get it > consistently deploying **virtual** HA w/ ceph in CI by the middle of the > Liberty upstream cycle. Then, due largely to the fact that there was > nobody being paid to work full time on RDO Manager, and the people who > were contributing in more or less "extra" time were getting swamped with > releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief > 24 hour periods where someone would spend a weekend fixing things only > to have it break again early the following week. > > There have been many improvements in the recent weeks to this sad state > of affairs. Firstly, we have upstreamed almost everything from the > rdo-management github org directly into openstack projects. Secondly, > there is a single source for delorean packages for both core openstack > packages and the tripleo and ironic packages that make up RDO Manager. > These two things may seem a bit trivial to a newcomer to the project, > but they are actually fixes for the biggest cause of the RDO Manager > Kilo CI breaking. I think with those two fixes (plus some work on > upstream tripleo CI) we have set ourselves up to make steady forward > progress rather than spending all our time troubleshooting complete > breakages. (Although this is still openstack so complete breakages will > still happen from time to time :p) > > Another very easy to overlook improvement over where we were at Kilo GA, > is that we actually have all RDO Manager packages (minus a couple EPEL > dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we > did not even have everything officially packaged, rather only in our > special delorean instance. > > All this leads to my opinion that RDO Manager should participate in the > RDO GA. I am unconvinced that bare metal installs can not be made to > work with some extra documentation or configuration changes. However, > even if that is not the case, we are in a drastically better place than > we were at the beginning of the Kilo cycle. 
> That said, this is a community, and I would like to hear how other
> community participants both from RDO in general and RDO Manager
> specifically feel about this. Ideally, if someone thinks the RDO Manager
> release should be blocked, there should be a BZ with the blocker flag
> proposed so that there is actionable criteria to unblock the release.
>
> Thanks for all your hard work to get to this point, and lets keep it
> rolling.
>
> -trown
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From marius at remote-lab.net  Wed Oct 21 15:56:17 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Wed, 21 Oct 2015 17:56:17 +0200
Subject: [Rdo-list] HA with network isolation on virt howto
In-Reply-To:
References:
Message-ID:

I believe the keystone init failed. It is done in a postconfig step via
ssh on the public VIP (see lines 3-13 in
https://gist.github.com/remoteur/920109a31083942ba5e1). Did you get that
kind of output for the deploy command?

Try also journalctl -l -u os-collect-config | grep -i error on the
controller nodes, it should indicate if something went wrong during
deployment.

On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa wrote:
> Hi Marius,
>
> your tip worked fine thanks, bridges seem to be correctly created,
> however I still cannot login, seems some keystone problem:
>
> #keystone --debug tenant-list
>
> DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
> http://192.168.174.35:5000/v2.0/tokens
> INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection
> (1): 192.168.174.35
> DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1"
> 401 114
> DEBUG:keystoneclient.session:Request returned failure status: 401
> DEBUG:keystoneclient.v2_0.client:Authorization Failed.
> The request you have made requires authentication. (HTTP 401) (Request-ID:
> req-accee3b3-b552-4c6b-ac39-d0791b5c1390)
>
> Did you have this issue when deployed on virtual?
>
> Regards
>
>
>
> On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea
> wrote:
>>
>> Here's an adjusted controller.yaml which disables DHCP on the first
>> nic: enp1s0f0 so it doesn't get an IP address
>> http://paste.openstack.org/show/476981/
>>
>> Please note that this assumes that your overcloud nodes are PXE
>> booting on the 2nd NIC (basically disabling the 1st nic)
>>
>> Given your setup (I'm doing some assumptions here so I might be wrong)
>> I would use the 1st nic for PXE booting and provisioning network and
>> 2nd nic for running the isolated networks with this kind of template:
>> http://paste.openstack.org/show/476986/
>>
>> Let me know if it works for you.
>>
>> Thanks,
>> Marius
>>
>> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa wrote:
>> > Hi,
>> >
>> > here you go.
>> >
>> > Regards,
>> > Pedro Sousa
>> >
>> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea
>> > wrote:
>> >>
>> >> Hi Pedro,
>> >>
>> >> One issue I can quickly see is that br-ex has assigned the same IP
>> >> address as enp1s0f0. Can you post the nic templates you used for
>> >> deployment?
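For context, the kind of os-net-config fragment that keeps the first nic
unconfigured looks roughly like this (a sketch of the liberty-era
template syntax; the authoritative versions are in the
paste.openstack.org links above):

  network_config:
    -
      type: interface
      name: nic1
      use_dhcp: false
      defroute: false
    -
      type: ovs_bridge
      name: br-ex
      use_dhcp: true
      members:
        -
          type: interface
          name: nic2
          primary: true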
>> >> >> >> 2: enp1s0f0: mtu 1500 qdisc mq state >> >> UP qlen 1000 >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic >> >> enp1s0f0 >> >> 9: br-ex: mtu 1500 qdisc noqueue >> >> state >> >> UNKNOWN >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex >> >> >> >> Thanks, >> >> Marius >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa >> >> wrote: >> >> > Hi Marius, >> >> > >> >> > I've followed your howto and managed to get overcloud deployed in HA, >> >> > thanks. However I cannot login to it (via CLI or Horizon) : >> >> > >> >> > ERROR (Unauthorized): The request you have made requires >> >> > authentication. >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) >> >> > >> >> > So I rebooted the controllers and now I cannot login through >> >> > Provisioning >> >> > network, seems some openvswitch bridge conf problem, heres my conf: >> >> > >> >> > # ip a >> >> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> >> > inet 127.0.0.1/8 scope host lo >> >> > valid_lft forever preferred_lft forever >> >> > inet6 ::1/128 scope host >> >> > valid_lft forever preferred_lft forever >> >> > 2: enp1s0f0: mtu 1500 qdisc mq >> >> > state >> >> > UP >> >> > qlen 1000 >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic >> >> > enp1s0f0 >> >> > valid_lft 84562sec preferred_lft 84562sec >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 3: enp1s0f1: mtu 1500 qdisc mq >> >> > master >> >> > ovs-system state UP qlen 1000 >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 4: ovs-system: mtu 1500 qdisc noop state DOWN >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff >> >> > 5: br-tun: mtu 1500 qdisc noop state DOWN >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff >> >> > 6: vlan20: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 >> >> > valid_lft forever preferred_lft forever >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 7: vlan40: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 8: vlan174: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174 >> >> > valid_lft forever preferred_lft forever >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 9: br-ex: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > 
link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 10: vlan50: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 11: vlan30: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 >> >> > valid_lft forever preferred_lft forever >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 12: br-int: mtu 1500 qdisc noop state DOWN >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff >> >> > >> >> > >> >> > # ovs-vsctl show >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 >> >> > Bridge br-ex >> >> > Port br-ex >> >> > Interface br-ex >> >> > type: internal >> >> > Port "enp1s0f1" >> >> > Interface "enp1s0f1" >> >> > Port "vlan40" >> >> > tag: 40 >> >> > Interface "vlan40" >> >> > type: internal >> >> > Port "vlan20" >> >> > tag: 20 >> >> > Interface "vlan20" >> >> > type: internal >> >> > Port phy-br-ex >> >> > Interface phy-br-ex >> >> > type: patch >> >> > options: {peer=int-br-ex} >> >> > Port "vlan50" >> >> > tag: 50 >> >> > Interface "vlan50" >> >> > type: internal >> >> > Port "vlan30" >> >> > tag: 30 >> >> > Interface "vlan30" >> >> > type: internal >> >> > Port "vlan174" >> >> > tag: 174 >> >> > Interface "vlan174" >> >> > type: internal >> >> > Bridge br-int >> >> > fail_mode: secure >> >> > Port br-int >> >> > Interface br-int >> >> > type: internal >> >> > Port patch-tun >> >> > Interface patch-tun >> >> > type: patch >> >> > options: {peer=patch-int} >> >> > Port int-br-ex >> >> > Interface int-br-ex >> >> > type: patch >> >> > options: {peer=phy-br-ex} >> >> > Bridge br-tun >> >> > fail_mode: secure >> >> > Port "gre-0a00140b" >> >> > Interface "gre-0a00140b" >> >> > type: gre >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> >> > out_key=flow, remote_ip="10.0.20.11"} >> >> > Port patch-int >> >> > Interface patch-int >> >> > type: patch >> >> > options: {peer=patch-tun} >> >> > Port "gre-0a00140d" >> >> > Interface "gre-0a00140d" >> >> > type: gre >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> >> > out_key=flow, remote_ip="10.0.20.13"} >> >> > Port "gre-0a00140c" >> >> > Interface "gre-0a00140c" >> >> > type: gre >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> >> > out_key=flow, remote_ip="10.0.20.12"} >> >> > Port br-tun >> >> > Interface br-tun >> >> > type: internal >> >> > ovs_version: "2.4.0" >> >> > >> >> > Regards, >> >> > Pedro Sousa >> >> > >> >> > >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea >> >> > >> >> > wrote: >> >> >> >> >> >> Hi everyone, >> >> >> >> >> >> I wrote a blog post about how to deploy a HA with network isolation >> >> >> overcloud on top of the virtual environment. 
I tried to provide some >> >> >> insights into what instack-virt-setup creates and how to use the >> >> >> network isolation templates in the virtual environment. I hope you >> >> >> find it useful. >> >> >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ >> >> >> >> >> >> Thanks, >> >> >> Marius >> >> >> >> >> >> _______________________________________________ >> >> >> Rdo-list mailing list >> >> >> Rdo-list at redhat.com >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > >> >> > >> > >> > > > From dsneddon at redhat.com Wed Oct 21 15:57:56 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 21 Oct 2015 08:57:56 -0700 Subject: [Rdo-list] Failed to deploy overcloud with network isolation on BM. In-Reply-To: References: <1421541564.61524022.1445391447568.JavaMail.zimbra@redhat.com> <1122769870.61527644.1445391866459.JavaMail.zimbra@redhat.com> Message-ID: <5627B604.4010706@redhat.com> On 10/21/2015 01:50 AM, Ihar Hrachyshka wrote: > May I ask all bug reporters to attach logs and config files to bugs? It's so often the case that the logs cited are not enough to understand what's going on, and there is no indication of which configuration the components were using. > > Can we please take this additional step to make bug fixing more effective? > > Ihar > >> On 21 Oct 2015, at 09:57, Alan Pevec wrote: >> >> 2015-10-21 3:44 GMT+02:00 Sasha Chuzhoy : >>> While I fail to install the overcloud using the latest bits[1]. >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273680 >> >> Does non-HA work? >> I see there's RPC timeout, is rabbitmq up and running? >> >> Cheers, >> Alan >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > What is a good general set of logs for a failed deployment where the failure cause isn't clear? -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From trown at redhat.com Wed Oct 21 16:00:56 2015 From: trown at redhat.com (John Trowbridge) Date: Wed, 21 Oct 2015 12:00:56 -0400 Subject: [Rdo-list] [Meeting] RDO meeting (2015-10-21) Message-ID: <5627B6B8.4010800@redhat.com> ============================== #rdo: RDO meeting (2015-10-21) ============================== Meeting started by trown at 15:05:50 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-10-21/rdo.2015-10-21-15.05.log.html . Meeting summary --------------- * HELP: ? (apevec, 15:06:15) * AI reports (trown, 15:07:51) * LINK: https://etherpad.openstack.org/p/RDO-Packaging (trown, 15:09:06) * Any RDO Liberty GA blockers?
(trown, 15:09:26) * ACTION: apevec explain "AI reports" in agenda (apevec, 15:09:38) * LINK: http://cbs.centos.org/koji/buildinfo?buildID=7094 openstack-ironic-inspector-2.2.2-1.el7 (apevec, 15:10:34) * LINK: https://github.com/openstack-packages/aodh/commit/b25f8dc1fb480fa51281a48b1e0d3704b85399b2 (number80, 15:11:47) * LINK: http://cbs.centos.org/koji/buildinfo?buildID=7093 (number80, 15:12:00) * LINK: https://github.com/redhat-openstack/rdo-release (apevec, 15:16:12) * LINK: https://github.com/redhat-openstack/rdo-release/issues (apevec, 15:16:25) * ACTION: apevec fix rdo-release-liberty.rpm for rhel (apevec, 15:16:55) * ACTION: apevec to draft release announcement and post back for quick reviews (trown, 15:27:00) * ACTION: apevec ping KB after the meeting to start sync to mirrors (apevec, 15:32:24) * Package Reviews (trown, 15:32:57) * ACTION: trown update rdo manager docs for rdo-release-liberty.rpm (trown, 15:34:08) * Delorean server rebuild (after Summit) (trown, 15:42:08) * ACTION: jpena to cherry pick https://github.com/openstack-packages/delorean/commit/dbd4271b23cf50d0f68768af5d33905be55aec38 into production delorean (jpena, 15:44:43) * open floor (trown, 15:45:47) * LINK: http://openstacksummitoctober2015tokyo.sched.org/event/3f4fc45fd4e9ee0a0a02749f97a67499 (dmsimard, 15:47:46) * LINK: http://openstacksummitoctober2015tokyo.sched.org/event/3f4fc45fd4e9ee0a0a02749f97a67499 (dmsimard, 15:47:52) * chair for next meeting? (trown, 15:49:03) * LINK: https://www.google.ca/?gfe_rd=cr&ei=7LMnVuz9MoSJMcasiqAB&gws_rd=ssl#safe=off&q=3pm+utc+in+tokyo (dmsimard, 15:49:20) * ACTION: apevec to chair next meeting (trown, 15:49:34) Meeting ended at 15:50:37 UTC. Action Items ------------ * apevec explain "AI reports" in agenda * apevec fix rdo-release-liberty.rpm for rhel * apevec to draft release announcement and post back for quick reviews * apevec ping KB after the meeting to start sync to mirrors * trown update rdo manager docs for rdo-release-liberty.rpm * jpena to cherry pick https://github.com/openstack-packages/delorean/commit/dbd4271b23cf50d0f68768af5d33905be55aec38 into production delorean * apevec to chair next meeting Action Items, by person ----------------------- * apevec * apevec explain "AI reports" in agenda * apevec fix rdo-release-liberty.rpm for rhel * apevec to draft release announcement and post back for quick reviews * apevec ping KB after the meeting to start sync to mirrors * apevec to chair next meeting * jpena * jpena to cherry pick https://github.com/openstack-packages/delorean/commit/dbd4271b23cf50d0f68768af5d33905be55aec38 into production delorean * trown * trown update rdo manager docs for rdo-release-liberty.rpm * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * apevec (104) * trown (52) * number80 (21) * dmsimard (20) * rook (14) * rbowen (9) * dtantsur (8) * zodbot (8) * jpena (8) * sasha21 (8) * Goneri (2) * jschlueter (2) * chandankumar_awa (1) * jruzicka (1) * social (1) * EmilienM (1) * csim (1) * chandankumar (0) Generated by `MeetBot`_ 0.1.4 ..
_`MeetBot`: http://wiki.debian.org/MeetBot From pgsousa at gmail.com Wed Oct 21 16:29:30 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Wed, 21 Oct 2015 17:29:30 +0100 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: You're right, I didn't get that output, keystone init didn't run: $ openstack overcloud deploy --control-scale 3 --compute-scale 1 --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e ~/the-cloud/environments/puppet-pacemaker.yaml -e ~/the-cloud/environments/network-isolation.yaml -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e ~/the-cloud/environments/network-environment.yaml --control-flavor controller --compute-flavor compute Deploying templates in the directory /home/stack/the-cloud Overcloud Endpoint: http://192.168.174.35:5000/v2.0/ Overcloud Deployed In fact I have some mysql errors in my controllers, see below. Is there a way to redeploy? Because I've run "openstack overcloud deploy" and nothing happens. Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: [2015-10-21 14:21:50,903] (heat-config) [INFO] Error: Could not prefetch mysql_user provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: Error: Could not prefetch mysql_database provider 'mysql': Execution of '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) Thanks On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea wrote: > I believe the keystone init failed. It is done in a postconfig step > via ssh on the public VIP(see lines 3-13 in > https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get > that kind of output for the deploy command? > > Try also journalctl -l -u os-collect-config | grep -i error on the > controller nodes, it should indicate if something went wrong during > deployment. > > On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa wrote: > > Hi Marius, > > > > your tip worked fine thanks, bridges seems to be correctly created, however > > I still cannot login, seems some keystone problem: > > > > #keystone --debug tenant-list > > > > DEBUG:keystoneclient.auth.identity.v2:Making authentication request to > > http://192.168.174.35:5000/v2.0/tokens > > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection > > (1): 192.168.174.35 > > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" > > 401 114 > > DEBUG:keystoneclient.session:Request returned failure status: 401 > > DEBUG:keystoneclient.v2_0.client:Authorization Failed. > > The request you have made requires authentication. (HTTP 401) (Request-ID: > > req-accee3b3-b552-4c6b-ac39-d0791b5c1390) > > > > Did you had this issue when deployed on virtual?
> > > > Regards > > > > > > > > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea > > wrote: > >> > >> Here's an adjusted controller.yaml which disables DHCP on the first > >> nic: enp1s0f0 so it doesn't get an IP address > >> http://paste.openstack.org/show/476981/ > >> > >> Please note that this assumes that your overcloud nodes are PXE > >> booting on the 2nd NIC(basically disabling the 1st nic) > >> > >> Given your setup(I'm doing some assumptions here so I might be wrong) > >> I would use the 1st nic for PXE booting and provisioning network and > >> 2nd nic for running the isolated networks with this kind of template: > >> http://paste.openstack.org/show/476986/ > >> > >> Let me know if it works for you. > >> > >> Thanks, > >> Marius > >> > >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa wrote: > >> > Hi, > >> > > >> > here you go. > >> > > >> > Regards, > >> > Pedro Sousa > >> > > >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea < > marius at remote-lab.net> > >> > wrote: > >> >> > >> >> Hi Pedro, > >> >> > >> >> One issue I can quickly see is that br-ex has assigned the same IP > >> >> address as enp1s0f0. Can you post the nic templates you used for > >> >> deployment? > >> >> > >> >> 2: enp1s0f0: mtu 1500 qdisc mq > state > >> >> UP qlen 1000 > >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > >> >> enp1s0f0 > >> >> 9: br-ex: mtu 1500 qdisc noqueue > >> >> state > >> >> UNKNOWN > >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > >> >> > >> >> Thanks, > >> >> Marius > >> >> > >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa > >> >> wrote: > >> >> > Hi Marius, > >> >> > > >> >> > I've followed your howto and managed to get overcloud deployed in > HA, > >> >> > thanks. However I cannot login to it (via CLI or Horizon) : > >> >> > > >> >> > ERROR (Unauthorized): The request you have made requires > >> >> > authentication. 
> >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > >> >> > > >> >> > So I rebooted the controllers and now I cannot login through > >> >> > Provisioning network, seems some openvswitch bridge conf problem, heres my conf: > >> >> > > >> >> > [snip: 'ip a' and 'ovs-vsctl show' output identical to the listing earlier in the thread] > >> >> > > >> >> > Regards, > >> >> > Pedro Sousa > >> >> > > >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > >> >> > wrote: > >> >> >> > >> >> >> Hi everyone, > >> >> >> > >> >> >> I wrote a blog post about how to deploy a HA with network isolation > >> >> >> overcloud on top of the virtual environment. I tried to provide some > >> >> >> insights into what instack-virt-setup creates and how to use the > >> >> >> network isolation templates in the virtual environment. I hope you > >> >> >> find it useful.
> >> >> >> > >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ > >> >> >> > >> >> >> Thanks, > >> >> >> Marius > >> >> >> > >> >> >> _______________________________________________ > >> >> >> Rdo-list mailing list > >> >> >> Rdo-list at redhat.com > >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list > >> >> >> > >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Wed Oct 21 16:50:58 2015 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 21 Oct 2015 18:50:58 +0200 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: To delete the overcloud you need to run heat stack-delete overcloud and wait until it finishes(check heat stack-list) On Wed, Oct 21, 2015 at 6:29 PM, Pedro Sousa wrote: > You're right, I didn't get that output, keystone init didn't run: > > $ openstack overcloud deploy --control-scale 3 --compute-scale 1 > --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e > ~/the-cloud/environments/puppet-pacemaker.yaml -e > ~/the-cloud/environments/network-isolation.yaml -e > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > ~/the-cloud/environments/network-environment.yaml --control-flavor > controller --compute-flavor compute > > Deploying templates in the directory /home/stack/the-cloud > Overcloud Endpoint: http://192.168.174.35:5000/v2.0/ > Overcloud Deployed > > > In fact I have some mysql errors in my controllers, see below. Is there a > way to redeploy? Because I've run "openstack overcloud deploy" and nothing > happens. > > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: [2015-10-21 > 14:21:50,903] (heat-config) [INFO] Error: Could not prefetch mysql_user > provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT CONCAT(User, > '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't > connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: Error: > Could not prefetch mysql_database provider 'mysql': Execution of > '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): Can't > connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) > > Thanks > > > > > > > On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea > wrote: >> >> I believe the keystone init failed. It is done in a postconfig step >> via ssh on the public VIP(see lines 3-13 in >> https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get >> that kind of output for the deploy command? >> >> Try also journalctl -l -u os-collect-config | grep -i error on the >> controller nodes, it should indicate if something went wrong during >> deployment. 
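[To make that triage concrete: a minimal log sweep on each overcloud controller, sketched under the assumption of the usual CentOS 7 overcloud image layout; the pacemaker resource names and keystone log path are not confirmed in this thread and should be treated as assumptions:

    sudo journalctl -l -u os-collect-config | grep -iE 'error|failed'
    sudo pcs status                                  # pacemaker's view of galera/rabbitmq/haproxy
    sudo tail -n 100 /var/log/keystone/keystone.log  # assumed default keystone log location
]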
>> >> On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa wrote: >> > Hi Marius, >> > >> > your tip worked fine thanks, bridges seems to be correctly created, >> > however >> > I still cannot login, seems some keystone problem: >> > >> > #keystone --debug tenant-list >> > >> > DEBUG:keystoneclient.auth.identity.v2:Making authentication request to >> > http://192.168.174.35:5000/v2.0/tokens >> > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP >> > connection >> > (1): 192.168.174.35 >> > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens >> > HTTP/1.1" >> > 401 114 >> > DEBUG:keystoneclient.session:Request returned failure status: 401 >> > DEBUG:keystoneclient.v2_0.client:Authorization Failed. >> > The request you have made requires authentication. (HTTP 401) >> > (Request-ID: >> > req-accee3b3-b552-4c6b-ac39-d0791b5c1390) >> > >> > Did you had this issue when deployed on virtual? >> > >> > Regards >> > >> > >> > >> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea >> > wrote: >> >> >> >> Here's an adjusted controller.yaml which disables DHCP on the first >> >> nic: enp1s0f0 so it doesn't get an IP address >> >> http://paste.openstack.org/show/476981/ >> >> >> >> Please note that this assumes that your overcloud nodes are PXE >> >> booting on the 2nd NIC(basically disabling the 1st nic) >> >> >> >> Given your setup(I'm doing some assumptions here so I might be wrong) >> >> I would use the 1st nic for PXE booting and provisioning network and >> >> 2nd nic for running the isolated networks with this kind of template: >> >> http://paste.openstack.org/show/476986/ >> >> >> >> Let me know if it works for you. >> >> >> >> Thanks, >> >> Marius >> >> >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa wrote: >> >> > Hi, >> >> > >> >> > here you go. >> >> > >> >> > Regards, >> >> > Pedro Sousa >> >> > >> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea >> >> > >> >> > wrote: >> >> >> >> >> >> Hi Pedro, >> >> >> >> >> >> One issue I can quickly see is that br-ex has assigned the same IP >> >> >> address as enp1s0f0. Can you post the nic templates you used for >> >> >> deployment? >> >> >> >> >> >> 2: enp1s0f0: mtu 1500 qdisc mq >> >> >> state >> >> >> UP qlen 1000 >> >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic >> >> >> enp1s0f0 >> >> >> 9: br-ex: mtu 1500 qdisc noqueue >> >> >> state >> >> >> UNKNOWN >> >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex >> >> >> >> >> >> Thanks, >> >> >> Marius >> >> >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa >> >> >> wrote: >> >> >> > Hi Marius, >> >> >> > >> >> >> > I've followed your howto and managed to get overcloud deployed in >> >> >> > HA, >> >> >> > thanks. However I cannot login to it (via CLI or Horizon) : >> >> >> > >> >> >> > ERROR (Unauthorized): The request you have made requires >> >> >> > authentication. 
>> >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) >> >> >> > >> >> >> > So I rebooted the controllers and now I cannot login through >> >> >> > Provisioning network, seems some openvswitch bridge conf problem, heres my conf: >> >> >> > >> >> >> > [snip: 'ip a' and 'ovs-vsctl show' output identical to the listing earlier in the thread] >> >> >> > >> >> >> > Regards, >> >> >> > Pedro Sousa >> >> >> > >> >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea >> >> >> > wrote: >> >> >> >> >> >> >> >> Hi everyone, >> >> >> >> >> >> >> >> I wrote a blog post about how to deploy a HA with network isolation >> >> >> >> overcloud on top of the virtual environment. I tried to provide some >> >> >> >> insights into what instack-virt-setup creates and how to use the >> >> >> >> network isolation templates in the virtual environment.
I hope >> >> >> >> you >> >> >> >> find it useful. >> >> >> >> >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ >> >> >> >> >> >> >> >> Thanks, >> >> >> >> Marius >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> >> Rdo-list mailing list >> >> >> >> Rdo-list at redhat.com >> >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> >> >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> > >> >> >> > >> >> > >> >> > >> > >> > > > From pgsousa at gmail.com Wed Oct 21 16:54:02 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Wed, 21 Oct 2015 17:54:02 +0100 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: Yes, I've done that already, however it never runs keystone init. Is it something wrong in my deployment command "openstack overcloud deploy" or do you think it's a bug/conf issue? Thanks On Wed, Oct 21, 2015 at 5:50 PM, Marius Cornea wrote: > To delete the overcloud you need to run heat stack-delete overcloud > and wait until it finishes(check heat stack-list) > > On Wed, Oct 21, 2015 at 6:29 PM, Pedro Sousa wrote: > > You're right, I didn't get that output, keystone init didn't run: > > > > $ openstack overcloud deploy --control-scale 3 --compute-scale 1 > > --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e > > ~/the-cloud/environments/puppet-pacemaker.yaml -e > > ~/the-cloud/environments/network-isolation.yaml -e > > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > > ~/the-cloud/environments/network-environment.yaml --control-flavor > > controller --compute-flavor compute > > > > Deploying templates in the directory /home/stack/the-cloud > > Overcloud Endpoint: http://192.168.174.35:5000/v2.0/ > > Overcloud Deployed > > > > > > In fact I have some mysql errors in my controllers, see below. Is there a > > way to redeploy? Because I've run "openstack overcloud deploy" and nothing > > happens. > > > > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: [2015-10-21 > > 14:21:50,903] (heat-config) [INFO] Error: Could not prefetch mysql_user > > provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT CONCAT(User, > > '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't > > connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) > > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: Error: > > Could not prefetch mysql_database provider 'mysql': Execution of > > '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): Can't > > connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) > > > > Thanks > > > > On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea > > wrote: > >> > >> I believe the keystone init failed. It is done in a postconfig step > >> via ssh on the public VIP(see lines 3-13 in > >> https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get > >> that kind of output for the deploy command? > >> > >> Try also journalctl -l -u os-collect-config | grep -i error on the > >> controller nodes, it should indicate if something went wrong during > >> deployment.
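[A hedged follow-up on the mysql_user/mysql_database errors quoted above: they suggest mysqld/Galera never came up on that controller, so keystone init had nothing to talk to. A quick check, assuming the standard Galera status variables and the socket path taken from the error itself:

    sudo pcs status | grep -i -A 2 galera            # is the galera resource started anywhere?
    sudo mysql --socket=/var/lib/mysql/mysql.sock \
        -e "SHOW STATUS LIKE 'wsrep_cluster_size';"  # expect 3 on a 3-controller deploy
]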
> >> > >> On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa wrote: > >> > Hi Marius, > >> > > >> > your tip worked fine thanks, bridges seems to be correctly created, > >> > however > >> > I still cannot login, seems some keystone problem: > >> > > >> > #keystone --debug tenant-list > >> > > >> > DEBUG:keystoneclient.auth.identity.v2:Making authentication request to > >> > http://192.168.174.35:5000/v2.0/tokens > >> > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP > >> > connection > >> > (1): 192.168.174.35 > >> > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens > >> > HTTP/1.1" > >> > 401 114 > >> > DEBUG:keystoneclient.session:Request returned failure status: 401 > >> > DEBUG:keystoneclient.v2_0.client:Authorization Failed. > >> > The request you have made requires authentication. (HTTP 401) > >> > (Request-ID: > >> > req-accee3b3-b552-4c6b-ac39-d0791b5c1390) > >> > > >> > Did you had this issue when deployed on virtual? > >> > > >> > Regards > >> > > >> > > >> > > >> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea < > marius at remote-lab.net> > >> > wrote: > >> >> > >> >> Here's an adjusted controller.yaml which disables DHCP on the first > >> >> nic: enp1s0f0 so it doesn't get an IP address > >> >> http://paste.openstack.org/show/476981/ > >> >> > >> >> Please note that this assumes that your overcloud nodes are PXE > >> >> booting on the 2nd NIC(basically disabling the 1st nic) > >> >> > >> >> Given your setup(I'm doing some assumptions here so I might be wrong) > >> >> I would use the 1st nic for PXE booting and provisioning network and > >> >> 2nd nic for running the isolated networks with this kind of template: > >> >> http://paste.openstack.org/show/476986/ > >> >> > >> >> Let me know if it works for you. > >> >> > >> >> Thanks, > >> >> Marius > >> >> > >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa > wrote: > >> >> > Hi, > >> >> > > >> >> > here you go. > >> >> > > >> >> > Regards, > >> >> > Pedro Sousa > >> >> > > >> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea > >> >> > > >> >> > wrote: > >> >> >> > >> >> >> Hi Pedro, > >> >> >> > >> >> >> One issue I can quickly see is that br-ex has assigned the same IP > >> >> >> address as enp1s0f0. Can you post the nic templates you used for > >> >> >> deployment? > >> >> >> > >> >> >> 2: enp1s0f0: mtu 1500 qdisc mq > >> >> >> state > >> >> >> UP qlen 1000 > >> >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > >> >> >> enp1s0f0 > >> >> >> 9: br-ex: mtu 1500 qdisc noqueue > >> >> >> state > >> >> >> UNKNOWN > >> >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > >> >> >> > >> >> >> Thanks, > >> >> >> Marius > >> >> >> > >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa > >> >> >> wrote: > >> >> >> > Hi Marius, > >> >> >> > > >> >> >> > I've followed your howto and managed to get overcloud deployed > in > >> >> >> > HA, > >> >> >> > thanks. However I cannot login to it (via CLI or Horizon) : > >> >> >> > > >> >> >> > ERROR (Unauthorized): The request you have made requires > >> >> >> > authentication. 
> >> >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > >> >> >> > > >> >> >> > So I rebooted the controllers and now I cannot login through > >> >> >> > Provisioning network, seems some openvswitch bridge conf problem, heres my conf: > >> >> >> > > >> >> >> > [snip: 'ip a' and 'ovs-vsctl show' output identical to the listing earlier in the thread] > >> >> >> > > >> >> >> > Regards, > >> >> >> > Pedro Sousa > >> >> >> > > >> >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > >> >> >> > wrote: > >> >> >> >> > >> >> >> >> Hi everyone, > >> >> >>
>> > >> >> >> >> I wrote a blog post about how to deploy a HA with network > >> >> >> >> isolation > >> >> >> >> overcloud on top of the virtual environment. I tried to provide > >> >> >> >> some > >> >> >> >> insights into what instack-virt-setup creates and how to use > the > >> >> >> >> network isolation templates in the virtual environment. I hope > >> >> >> >> you > >> >> >> >> find it useful. > >> >> >> >> > >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ > >> >> >> >> > >> >> >> >> Thanks, > >> >> >> >> Marius > >> >> >> >> > >> >> >> >> _______________________________________________ > >> >> >> >> Rdo-list mailing list > >> >> >> >> Rdo-list at redhat.com > >> >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list > >> >> >> >> > >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> >> > > >> >> >> > > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Wed Oct 21 17:41:58 2015 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 21 Oct 2015 19:41:58 +0200 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: It's definitely a bug, the deployment shouldn't pass without completing keystone init. What's the content of your network-environment.yaml? I'm not sure if this relates but it's worth trying an installation with the GA bits, the docs are being updated to describe the steps. Some useful notes can be found here: https://etherpad.openstack.org/p/RDO-Manager_liberty trown ? mcornea: the important bit is to use `yum install -y http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm` for undercloud repos, and `export RDO_RELEASE='liberty'` for image build On Wed, Oct 21, 2015 at 6:54 PM, Pedro Sousa wrote: > Yes, I've done that already, however it never runs keystone init. Is it > something wrong in my deployment command "openstack overcloud deploy" or do > you think it's a bug/conf issue? > > Thanks > > On Wed, Oct 21, 2015 at 5:50 PM, Marius Cornea > wrote: >> >> To delete the overcloud you need to run heat stack-delete overcloud >> and wait until it finishes(check heat stack-list) >> >> On Wed, Oct 21, 2015 at 6:29 PM, Pedro Sousa wrote: >> > You're right, I didn't get that output, keystone init didn't run: >> > >> > $ openstack overcloud deploy --control-scale 3 --compute-scale 1 >> > --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e >> > ~/the-cloud/environments/puppet-pacemaker.yaml -e >> > ~/the-cloud/environments/network-isolation.yaml -e >> > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e >> > ~/the-cloud/environments/network-environment.yaml --control-flavor >> > controller --compute-flavor compute >> > >> > Deploying templates in the directory /home/stack/the-cloud >> > Overcloud Endpoint: http://192.168.174.35:5000/v2.0/ >> > Overcloud Deployed >> > >> > >> > In fact I have some mysql errors in my controllers, see below. Is there >> > a >> > way to redeploy? Because I've run "openstack overcloud deploy" and >> > nothing >> > happens. 
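[Marius's delete-then-redeploy advice from earlier in the thread, collected into one sketch; the deploy arguments are elided here and should simply repeat whatever was passed on the first attempt:

    heat stack-delete overcloud
    heat stack-list                               # repeat until the overcloud stack is gone
    openstack overcloud deploy --templates ...   # same environment files as before
]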
>> > >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: >> > [2015-10-21 >> > 14:21:50,903] (heat-config) [INFO] Error: Could not prefetch mysql_user >> > provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT CONCAT(User, >> > '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't >> > connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' >> > (2) >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: Error: >> > Could not prefetch mysql_database provider 'mysql': Execution of >> > '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): >> > Can't >> > connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' >> > (2) >> > >> > Thanks >> > >> > >> > >> > >> > >> > >> > On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea >> > wrote: >> >> >> >> I believe the keystone init failed. It is done in a postconfig step >> >> via ssh on the public VIP(see lines 3-13 in >> >> https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get >> >> that kind of output for the deploy command? >> >> >> >> Try also journalctl -l -u os-collect-config | grep -i error on the >> >> controller nodes, it should indicate if something went wrong during >> >> deployment. >> >> >> >> On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa wrote: >> >> > Hi Marius, >> >> > >> >> > your tip worked fine thanks, bridges seems to be correctly created, >> >> > however >> >> > I still cannot login, seems some keystone problem: >> >> > >> >> > #keystone --debug tenant-list >> >> > >> >> > DEBUG:keystoneclient.auth.identity.v2:Making authentication request >> >> > to >> >> > http://192.168.174.35:5000/v2.0/tokens >> >> > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP >> >> > connection >> >> > (1): 192.168.174.35 >> >> > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens >> >> > HTTP/1.1" >> >> > 401 114 >> >> > DEBUG:keystoneclient.session:Request returned failure status: 401 >> >> > DEBUG:keystoneclient.v2_0.client:Authorization Failed. >> >> > The request you have made requires authentication. (HTTP 401) >> >> > (Request-ID: >> >> > req-accee3b3-b552-4c6b-ac39-d0791b5c1390) >> >> > >> >> > Did you had this issue when deployed on virtual? >> >> > >> >> > Regards >> >> > >> >> > >> >> > >> >> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea >> >> > >> >> > wrote: >> >> >> >> >> >> Here's an adjusted controller.yaml which disables DHCP on the first >> >> >> nic: enp1s0f0 so it doesn't get an IP address >> >> >> http://paste.openstack.org/show/476981/ >> >> >> >> >> >> Please note that this assumes that your overcloud nodes are PXE >> >> >> booting on the 2nd NIC(basically disabling the 1st nic) >> >> >> >> >> >> Given your setup(I'm doing some assumptions here so I might be >> >> >> wrong) >> >> >> I would use the 1st nic for PXE booting and provisioning network and >> >> >> 2nd nic for running the isolated networks with this kind of >> >> >> template: >> >> >> http://paste.openstack.org/show/476986/ >> >> >> >> >> >> Let me know if it works for you. >> >> >> >> >> >> Thanks, >> >> >> Marius >> >> >> >> >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa >> >> >> wrote: >> >> >> > Hi, >> >> >> > >> >> >> > here you go. 
>> >> >> > >> >> >> > Regards, >> >> >> > Pedro Sousa >> >> >> > >> >> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea >> >> >> > >> >> >> > wrote: >> >> >> >> >> >> >> >> Hi Pedro, >> >> >> >> >> >> >> >> One issue I can quickly see is that br-ex has assigned the same >> >> >> >> IP >> >> >> >> address as enp1s0f0. Can you post the nic templates you used for >> >> >> >> deployment? >> >> >> >> >> >> >> >> 2: enp1s0f0: mtu 1500 qdisc mq >> >> >> >> state >> >> >> >> UP qlen 1000 >> >> >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic >> >> >> >> enp1s0f0 >> >> >> >> 9: br-ex: mtu 1500 qdisc >> >> >> >> noqueue >> >> >> >> state >> >> >> >> UNKNOWN >> >> >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex >> >> >> >> >> >> >> >> Thanks, >> >> >> >> Marius >> >> >> >> >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa >> >> >> >> wrote: >> >> >> >> > Hi Marius, >> >> >> >> > >> >> >> >> > I've followed your howto and managed to get overcloud deployed >> >> >> >> > in >> >> >> >> > HA, >> >> >> >> > thanks. However I cannot login to it (via CLI or Horizon) : >> >> >> >> > >> >> >> >> > ERROR (Unauthorized): The request you have made requires >> >> >> >> > authentication. >> >> >> >> > (HTTP 401) (Request-ID: >> >> >> >> > req-96310dfa-3d64-4f05-966f-f4d92702e2b1) >> >> >> >> > >> >> >> >> > So I rebooted the controllers and now I cannot login through >> >> >> >> > Provisioning >> >> >> >> > network, seems some openvswitch bridge conf problem, heres my >> >> >> >> > conf: >> >> >> >> > >> >> >> >> > # ip a >> >> >> >> > 1: lo: mtu 65536 qdisc noqueue state >> >> >> >> > UNKNOWN >> >> >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> >> >> >> > inet 127.0.0.1/8 scope host lo >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet6 ::1/128 scope host >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 2: enp1s0f0: mtu 1500 qdisc >> >> >> >> > mq >> >> >> >> > state >> >> >> >> > UP >> >> >> >> > qlen 1000 >> >> >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global >> >> >> >> > dynamic >> >> >> >> > enp1s0f0 >> >> >> >> > valid_lft 84562sec preferred_lft 84562sec >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 3: enp1s0f1: mtu 1500 qdisc >> >> >> >> > mq >> >> >> >> > master >> >> >> >> > ovs-system state UP qlen 1000 >> >> >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 4: ovs-system: mtu 1500 qdisc noop state >> >> >> >> > DOWN >> >> >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff >> >> >> >> > 5: br-tun: mtu 1500 qdisc noop state DOWN >> >> >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff >> >> >> >> > 6: vlan20: mtu 1500 qdisc >> >> >> >> > noqueue >> >> >> >> > state >> >> >> >> > UNKNOWN >> >> >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global >> >> >> >> > vlan20 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global >> >> >> >> > vlan20 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet6 
fe80::e479:56ff:fe5d:7f2/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 7: vlan40: mtu 1500 qdisc >> >> >> >> > noqueue >> >> >> >> > state >> >> >> >> > UNKNOWN >> >> >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global >> >> >> >> > vlan40 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 8: vlan174: mtu 1500 qdisc >> >> >> >> > noqueue >> >> >> >> > state >> >> >> >> > UNKNOWN >> >> >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global >> >> >> >> > vlan174 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global >> >> >> >> > vlan174 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 9: br-ex: mtu 1500 qdisc >> >> >> >> > noqueue >> >> >> >> > state >> >> >> >> > UNKNOWN >> >> >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 10: vlan50: mtu 1500 qdisc >> >> >> >> > noqueue >> >> >> >> > state >> >> >> >> > UNKNOWN >> >> >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 11: vlan30: mtu 1500 qdisc >> >> >> >> > noqueue >> >> >> >> > state >> >> >> >> > UNKNOWN >> >> >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff >> >> >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global >> >> >> >> > vlan30 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global >> >> >> >> > vlan30 >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> > 12: br-int: mtu 1500 qdisc noop state >> >> >> >> > DOWN >> >> >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff >> >> >> >> > >> >> >> >> > >> >> >> >> > # ovs-vsctl show >> >> >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 >> >> >> >> > Bridge br-ex >> >> >> >> > Port br-ex >> >> >> >> > Interface br-ex >> >> >> >> > type: internal >> >> >> >> > Port "enp1s0f1" >> >> >> >> > Interface "enp1s0f1" >> >> >> >> > Port "vlan40" >> >> >> >> > tag: 40 >> >> >> >> > Interface "vlan40" >> >> >> >> > type: internal >> >> >> >> > Port "vlan20" >> >> >> >> > tag: 20 >> >> >> >> > Interface "vlan20" >> >> >> >> > type: internal >> >> >> >> > Port phy-br-ex >> >> >> >> > Interface phy-br-ex >> >> >> >> > type: patch >> >> >> >> > options: {peer=int-br-ex} >> >> >> >> > Port "vlan50" >> >> >> >> > tag: 50 >> >> >> >> > Interface "vlan50" >> >> >> >> > type: internal >> >> >> >> > Port "vlan30" >> >> >> >> > tag: 30 >> >> >> >> > Interface "vlan30" >> >> >> >> > type: internal >> >> >> >> > Port "vlan174" >> >> >> >> > tag: 174 >> >> >> >> > 
Interface "vlan174" >> >> >> >> > type: internal >> >> >> >> > Bridge br-int >> >> >> >> > fail_mode: secure >> >> >> >> > Port br-int >> >> >> >> > Interface br-int >> >> >> >> > type: internal >> >> >> >> > Port patch-tun >> >> >> >> > Interface patch-tun >> >> >> >> > type: patch >> >> >> >> > options: {peer=patch-int} >> >> >> >> > Port int-br-ex >> >> >> >> > Interface int-br-ex >> >> >> >> > type: patch >> >> >> >> > options: {peer=phy-br-ex} >> >> >> >> > Bridge br-tun >> >> >> >> > fail_mode: secure >> >> >> >> > Port "gre-0a00140b" >> >> >> >> > Interface "gre-0a00140b" >> >> >> >> > type: gre >> >> >> >> > options: {df_default="true", in_key=flow, >> >> >> >> > local_ip="10.0.20.10", >> >> >> >> > out_key=flow, remote_ip="10.0.20.11"} >> >> >> >> > Port patch-int >> >> >> >> > Interface patch-int >> >> >> >> > type: patch >> >> >> >> > options: {peer=patch-tun} >> >> >> >> > Port "gre-0a00140d" >> >> >> >> > Interface "gre-0a00140d" >> >> >> >> > type: gre >> >> >> >> > options: {df_default="true", in_key=flow, >> >> >> >> > local_ip="10.0.20.10", >> >> >> >> > out_key=flow, remote_ip="10.0.20.13"} >> >> >> >> > Port "gre-0a00140c" >> >> >> >> > Interface "gre-0a00140c" >> >> >> >> > type: gre >> >> >> >> > options: {df_default="true", in_key=flow, >> >> >> >> > local_ip="10.0.20.10", >> >> >> >> > out_key=flow, remote_ip="10.0.20.12"} >> >> >> >> > Port br-tun >> >> >> >> > Interface br-tun >> >> >> >> > type: internal >> >> >> >> > ovs_version: "2.4.0" >> >> >> >> > >> >> >> >> > Regards, >> >> >> >> > Pedro Sousa >> >> >> >> > >> >> >> >> > >> >> >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea >> >> >> >> > >> >> >> >> > wrote: >> >> >> >> >> >> >> >> >> >> Hi everyone, >> >> >> >> >> >> >> >> >> >> I wrote a blog post about how to deploy a HA with network >> >> >> >> >> isolation >> >> >> >> >> overcloud on top of the virtual environment. I tried to >> >> >> >> >> provide >> >> >> >> >> some >> >> >> >> >> insights into what instack-virt-setup creates and how to use >> >> >> >> >> the >> >> >> >> >> network isolation templates in the virtual environment. I hope >> >> >> >> >> you >> >> >> >> >> find it useful. 
>> >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/
>> >> >> >> >>
>> >> >> >> >> Thanks,
>> >> >> >> >> Marius

From ohochman at redhat.com Wed Oct 21 18:30:28 2015
From: ohochman at redhat.com (Omri Hochman)
Date: Wed, 21 Oct 2015 14:30:28 -0400 (EDT)
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com>
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com>
Message-ID: <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Omri Hochman"
> To: "Pedro Sousa"
> Cc: "rdo-list"
> Sent: Wednesday, October 21, 2015 11:16:03 AM
> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
>
> ----- Original Message -----
> > From: "Pedro Sousa"
> > To: "John Trowbridge"
> > Cc: "rdo-list"
> > Sent: Wednesday, October 21, 2015 7:10:38 AM
> > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
> >
> > Hi John,
> >
> > I've managed to install on baremetal following this howto:
> > https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on
> > liberty)
>
> Hey Pedro,
>
> Are you using: yum install -y
> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
> to get the latest RDO GA bits?
>
> We're failing in overcloud deployment on BM with several issues.

Actually, an update:

After using the workaround from this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9

We've managed to get HA on Bare-Metal (using the latest rdo-release-liberty.rpm).

That was the deployment command:

openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types vxlan --timeout 90

> Thanks,
> Omri.
>
> > I have 3 Controllers + 1 Compute (HA and Network Isolation). However I'm
> > having some issues logging on (maybe some keystone issue) and some issue
> > with openvswitch that I'm trying to address with Marius Cornea's help.
> >
> > Regards,
> > Pedro Sousa
> >
> > On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com >
> > wrote:
> >
> > Hola rdoers,
> >
> > The plan is to GA RDO Liberty today (woot!), so I wanted to send out a
> > status update for the RDO Manager installer. I would also like to gather
> > feedback on how other community participants feel about that status as
> > it relates to RDO Manager participating in the GA. That feedback can
> > come as replies to this thread, or even better there is a packaging
> > meeting on #rdo at 1500 UTC today and we can discuss it further then.
> >
> > tldr;
> > RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on
> > virtual hardware have been verified to work with GA bits, however bare
> > metal installs have not yet been verified.
> > > > I would like to start with some historical context here, as it seems we > > have picked up quite a few new active community members recently (again > > woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a > > successful end to end demo with a single controller and single compute > > node, and only by using a special delorean server pulling bits from a > > special github organization (rdo-management). We were able to get it > > consistently deploying **virtual** HA w/ ceph in CI by the middle of the > > Liberty upstream cycle. Then, due largely to the fact that there was > > nobody being paid to work full time on RDO Manager, and the people who > > were contributing in more or less "extra" time were getting swamped with > > releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief > > 24 hour periods where someone would spend a weekend fixing things only > > to have it break again early the following week. > > > > There have been many improvements in the recent weeks to this sad state > > of affairs. Firstly, we have upstreamed almost everything from the > > rdo-management github org directly into openstack projects. Secondly, > > there is a single source for delorean packages for both core openstack > > packages and the tripleo and ironic packages that make up RDO Manager. > > These two things may seem a bit trivial to a newcomer to the project, > > but they are actually fixes for the biggest cause of the RDO Manager > > Kilo CI breaking. I think with those two fixes (plus some work on > > upstream tripleo CI) we have set ourselves up to make steady forward > > progress rather than spending all our time troubleshooting complete > > breakages. (Although this is still openstack so complete breakages will > > still happen from time to time :p) > > > > Another very easy to overlook improvement over where we were at Kilo GA, > > is that we actually have all RDO Manager packages (minus a couple EPEL > > dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we > > did not even have everything officially packaged, rather only in our > > special delorean instance. > > > > All this leads to my opinion that RDO Manager should participate in the > > RDO GA. I am unconvinced that bare metal installs can not be made to > > work with some extra documentation or configuration changes. However, > > even if that is not the case, we are in a drastically better place than > > we were at the beginning of the Kilo cycle. > > > > That said, this is a community, and I would like to hear how other > > community participants both from RDO in general and RDO Manager > > specifically feel about this. Ideally, if someone thinks the RDO Manager > > release should be blocked, there should be a BZ with the blocker flag > > proposed so that there is actionable criteria to unblock the release. > > > > Thanks for all your hard work to get to this point, and lets keep it > > rolling. 
> > > > -trown
> > > >
> > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541

From pgsousa at gmail.com Wed Oct 21 19:17:41 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Wed, 21 Oct 2015 20:17:41 +0100
Subject: [Rdo-list] HA with network isolation on virt howto
In-Reply-To:
References:
Message-ID:

Hi Marius,

[stack at undercloud environments]$ cat network-environment.yaml
parameter_defaults:
  InternalApiNetCidr: 192.168.100.0/24
  StorageNetCidr: 192.168.101.0/24
  StorageMgmtNetCidr: 192.168.102.0/24
  TenantNetCidr: 10.0.20.0/24
  ExternalNetCidr: 192.168.174.0/24
  InternalApiAllocationPools: [{'start': '192.168.100.10', 'end': '192.168.100.100'}]
  StorageAllocationPools: [{'start': '192.168.101.10', 'end': '192.168.101.100'}]
  StorageMgmtAllocationPools: [{'start': '192.168.102.10', 'end': '192.168.102.100'}]
  TenantAllocationPools: [{'start': '10.0.20.10', 'end': '10.0.20.100'}]
  ExternalAllocationPools: [{'start': '192.168.174.35', 'end': '192.168.174.50'}]
  ExternalInterfaceDefaultRoute: 192.168.174.1
  ControlPlaneSubnetCidr: "24"
  ControlPlaneDefaultRoute: 192.168.21.30
  EC2MetadataIp: 192.168.21.30
  DnsServers: ["8.8.8.8", "8.8.4.4"]

I'll test it out following the etherpad, thanks.

On Wed, Oct 21, 2015 at 6:41 PM, Marius Cornea wrote:
> It's definitely a bug, the deployment shouldn't pass without
> completing keystone init. What's the content of your
> network-environment.yaml?
>
> I'm not sure if this relates but it's worth trying an installation
> with the GA bits, the docs are being updated to describe the steps.
> Some useful notes can be found here:
> https://etherpad.openstack.org/p/RDO-Manager_liberty
>
> trown -> mcornea: the important bit is to use `yum install -y
> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm`
> for undercloud repos, and `export RDO_RELEASE='liberty'` for image
> build
>
> On Wed, Oct 21, 2015 at 6:54 PM, Pedro Sousa wrote:
> > Yes, I've done that already, however it never runs keystone init. Is it
> > something wrong in my deployment command "openstack overcloud deploy" or do
> > you think it's a bug/conf issue?
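For reference, a known-good shape for that deploy command, assembled from the commands quoted elsewhere in this thread (Omri's bare-metal run and Pedro's virt run); the template paths and NTP server here are illustrative placeholders, not Pedro's actual values:

# Sketch only: layer the pacemaker, network-isolation, and custom
# network environment files on top of the stock templates.
openstack overcloud deploy --templates \
  --control-scale 3 --compute-scale 1 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  --ntp-server pool.ntp.org --timeout 90

Each -e flag layers an environment file, such as the network-environment.yaml above, over the stock tripleo-heat-templates defaults.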
> > > > Thanks > > > > On Wed, Oct 21, 2015 at 5:50 PM, Marius Cornea > > wrote: > >> > >> To delete the overcloud you need to run heat stack-delete overcloud > >> and wait until it finishes(check heat stack-list) > >> > >> On Wed, Oct 21, 2015 at 6:29 PM, Pedro Sousa wrote: > >> > You're right, I didn't get that output, keystone init didn't run: > >> > > >> > $ openstack overcloud deploy --control-scale 3 --compute-scale 1 > >> > --libvirt-type kvm --ntp-server pool.ntp.org --templates > ~/the-cloud/ -e > >> > ~/the-cloud/environments/puppet-pacemaker.yaml -e > >> > ~/the-cloud/environments/network-isolation.yaml -e > >> > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > >> > ~/the-cloud/environments/network-environment.yaml --control-flavor > >> > controller --compute-flavor compute > >> > > >> > Deploying templates in the directory /home/stack/the-cloud > >> > Overcloud Endpoint: http://192.168.174.35:5000/v2.0/ > >> > Overcloud Deployed > >> > > >> > > >> > In fact I have some mysql errors in my controllers, see below. Is > there > >> > a > >> > way to redeploy? Because I've run "openstack overcloud deploy" and > >> > nothing > >> > happens. > >> > > >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: > >> > [2015-10-21 > >> > 14:21:50,903] (heat-config) [INFO] Error: Could not prefetch > mysql_user > >> > provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT > CONCAT(User, > >> > '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): > Can't > >> > connect to local MySQL server through socket > '/var/lib/mysql/mysql.sock' > >> > (2) > >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: > Error: > >> > Could not prefetch mysql_database provider 'mysql': Execution of > >> > '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): > >> > Can't > >> > connect to local MySQL server through socket > '/var/lib/mysql/mysql.sock' > >> > (2) > >> > > >> > Thanks > >> > > >> > > >> > > >> > > >> > > >> > > >> > On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea > > >> > wrote: > >> >> > >> >> I believe the keystone init failed. It is done in a postconfig step > >> >> via ssh on the public VIP(see lines 3-13 in > >> >> https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get > >> >> that kind of output for the deploy command? > >> >> > >> >> Try also journalctl -l -u os-collect-config | grep -i error on the > >> >> controller nodes, it should indicate if something went wrong during > >> >> deployment. > >> >> > >> >> On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa > wrote: > >> >> > Hi Marius, > >> >> > > >> >> > your tip worked fine thanks, bridges seems to be correctly created, > >> >> > however > >> >> > I still cannot login, seems some keystone problem: > >> >> > > >> >> > #keystone --debug tenant-list > >> >> > > >> >> > DEBUG:keystoneclient.auth.identity.v2:Making authentication request > >> >> > to > >> >> > http://192.168.174.35:5000/v2.0/tokens > >> >> > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP > >> >> > connection > >> >> > (1): 192.168.174.35 > >> >> > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens > >> >> > HTTP/1.1" > >> >> > 401 114 > >> >> > DEBUG:keystoneclient.session:Request returned failure status: 401 > >> >> > DEBUG:keystoneclient.v2_0.client:Authorization Failed. > >> >> > The request you have made requires authentication. 
(HTTP 401) > >> >> > (Request-ID: > >> >> > req-accee3b3-b552-4c6b-ac39-d0791b5c1390) > >> >> > > >> >> > Did you had this issue when deployed on virtual? > >> >> > > >> >> > Regards > >> >> > > >> >> > > >> >> > > >> >> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea > >> >> > > >> >> > wrote: > >> >> >> > >> >> >> Here's an adjusted controller.yaml which disables DHCP on the > first > >> >> >> nic: enp1s0f0 so it doesn't get an IP address > >> >> >> http://paste.openstack.org/show/476981/ > >> >> >> > >> >> >> Please note that this assumes that your overcloud nodes are PXE > >> >> >> booting on the 2nd NIC(basically disabling the 1st nic) > >> >> >> > >> >> >> Given your setup(I'm doing some assumptions here so I might be > >> >> >> wrong) > >> >> >> I would use the 1st nic for PXE booting and provisioning network > and > >> >> >> 2nd nic for running the isolated networks with this kind of > >> >> >> template: > >> >> >> http://paste.openstack.org/show/476986/ > >> >> >> > >> >> >> Let me know if it works for you. > >> >> >> > >> >> >> Thanks, > >> >> >> Marius > >> >> >> > >> >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa > >> >> >> wrote: > >> >> >> > Hi, > >> >> >> > > >> >> >> > here you go. > >> >> >> > > >> >> >> > Regards, > >> >> >> > Pedro Sousa > >> >> >> > > >> >> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea > >> >> >> > > >> >> >> > wrote: > >> >> >> >> > >> >> >> >> Hi Pedro, > >> >> >> >> > >> >> >> >> One issue I can quickly see is that br-ex has assigned the same > >> >> >> >> IP > >> >> >> >> address as enp1s0f0. Can you post the nic templates you used > for > >> >> >> >> deployment? > >> >> >> >> > >> >> >> >> 2: enp1s0f0: mtu 1500 qdisc > mq > >> >> >> >> state > >> >> >> >> UP qlen 1000 > >> >> >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global > dynamic > >> >> >> >> enp1s0f0 > >> >> >> >> 9: br-ex: mtu 1500 qdisc > >> >> >> >> noqueue > >> >> >> >> state > >> >> >> >> UNKNOWN > >> >> >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global > br-ex > >> >> >> >> > >> >> >> >> Thanks, > >> >> >> >> Marius > >> >> >> >> > >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa < > pgsousa at gmail.com> > >> >> >> >> wrote: > >> >> >> >> > Hi Marius, > >> >> >> >> > > >> >> >> >> > I've followed your howto and managed to get overcloud > deployed > >> >> >> >> > in > >> >> >> >> > HA, > >> >> >> >> > thanks. However I cannot login to it (via CLI or Horizon) : > >> >> >> >> > > >> >> >> >> > ERROR (Unauthorized): The request you have made requires > >> >> >> >> > authentication. 
> >> >> >> >> > (HTTP 401) (Request-ID:
> >> >> >> >> > req-96310dfa-3d64-4f05-966f-f4d92702e2b1)
> >> >> >> >> >
> >> >> >> >> > [...]
> >> >> >> >> >
> >> >> >> >> > Regards,
> >> >> >> >> > Pedro Sousa

From xzhao at bnl.gov Wed Oct 21 20:26:39 2015
From: xzhao at bnl.gov (Zhao, Xin)
Date: Wed, 21 Oct 2015 16:26:39 -0400
Subject: [Rdo-list] Kilo swift works with icehouse services?
Message-ID: <5627F4FF.1010404@bnl.gov>

Hello,

We have an openstack cluster running icehouse release, on RHEL6. I wonder, if I only upgrade swift to kilo, leaving all the other services on icehouse release, will they work together?

Thanks,
Xin

From pgsousa at gmail.com Wed Oct 21 19:10:22 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Wed, 21 Oct 2015 20:10:22 +0100
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com>
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com>
Message-ID:

Hi Omri,

I'll test it out, thanks. Did you build your overcloud images based on Liberty?
export RDO_RELEASE='liberty'
openstack overcloud image build --all

On Wed, Oct 21, 2015 at 7:30 PM, Omri Hochman wrote:
> After using the workaround from this issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9
>
> We've managed to get HA on Bare-Metal (using the latest rdo-release-liberty.rpm).
> [...]
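The Liberty image-build sequence referenced above and in Marius's etherpad note, gathered in one place; the individual commands are taken verbatim from this thread, while the ordering and the sudo prefix are assumptions:

# Point the undercloud at the Liberty GA repos
sudo yum install -y http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
# Tell the image-build tooling to use Liberty rather than the Kilo default
export RDO_RELEASE='liberty'
# Rebuild all overcloud images against the Liberty repos
openstack overcloud image build --all

The thread suggests that images built before the export may have been built against the Kilo default repos, which is why rebuilding after setting RDO_RELEASE matters here.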
From mkassawara at gmail.com Wed Oct 21 22:02:11 2015
From: mkassawara at gmail.com (Matt Kassawara)
Date: Wed, 21 Oct 2015 16:02:11 -0600
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To:
References: <561DF4B5.7040303@lanabrindley.com> <561E4133.8060804@redhat.com> <561FA38F.6050508@redhat.com> <1771534090.73974478.1444920200858.JavaMail.zimbra@redhat.com> <256C1D83-5E0D-47B2-BD69-9D3CAAEB78E1@redhat.com>
Message-ID:

In Liberty, the database connection option for services that use a SQL database contains "mysql+pymysql://", which uses the PyMySQL library, instead of "mysql://", which uses the conventional MySQL-Python library. The RDO repository doesn't contain a package for PyMySQL. Development of the upstream installation guide cannot continue without this package, because the workarounds either involve installing this library via pip, which potentially leads to versioning conflicts with packages, or changing the database connection option everywhere specifically for RDO.

On Wed, Oct 21, 2015 at 7:32 AM, Matt Kassawara wrote:

> I think packages available for standalone installation (i.e., without a
> deployment tool) should include complete upstream configuration files in
> standard locations without modification. In the case of *-dist.conf files
> with RDO packages, they seldom receive updates leading to deprecation
> warnings and sometimes override useful upstream default values. For
> example, most if not all services default to keystone for authentication
> (auth_strategy), yet the RDO neutron packages revert authentication to
> "noauth" in the *-dist.conf file. In another example, the RDO keystone
> package only includes the keystone-paste.ini file as
> /usr/share/keystone/keystone-dist-paste.ini rather than using the standard
> location and name which leads to confusion, particularly for new users. The
> installation guide contains quite a few extra steps and option-value pairs
> that work around the existence and contents of *-dist.conf files...
> additions that unnecessarily increase complexity for our audience of new
> users.
>
> On Wed, Oct 21, 2015 at 4:22 AM, Ihar Hrachyshka wrote:
>
>> > On 21 Oct 2015, at 12:02, Alan Pevec wrote:
>> >
>> > 2015-10-21 10:48 GMT+02:00 Ihar Hrachyshka :
>> >>> On 15 Oct 2015, at 20:31, Matt Kassawara wrote:
>> >>>
>> >>> 4) Packages only reference upstream configuration files in standard
>> >>> locations (e.g., /etc/keystone).
>> >>
>> >> Not sure what exactly it means. RDO packages are using
>> >> neutron-dist.conf that contains RDO specific default configuration
>> >> located under /usr/share/ for quite a long time.
>> >
>> > Yes, it's about dist.conf that are unique solution to provide distro
>> > specific default values.
>> > I'm not sure how are other distros solving this if at all, they
>> > probably either rely on upstream defaults or their configuration
>> > management tools?
>> > Thing is that upstream defaults cannot fit all distributions, so I
>> > would expect all distros to pick up our dist.conf solution but we
>> > probably have not been explaining and advertising it enough hence
>> > confusion in the upstream docs.
>>
>> I suspect other distros may just modify /etc/neutron/neutron.conf as they
>> fit. It's obviously not the cleanest solution.
>>
>> I believe enforcing a specific way to configure services upon
>> distributions is not the job of upstream, as long as default upstream way
>> (modifying upstream configuration files located in
>> /etc//*.conf) works.
>>
>> Ihar

From sasha at redhat.com Wed Oct 21 23:03:45 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Wed, 21 Oct 2015 19:03:45 -0400 (EDT)
Subject: [Rdo-list] HA with network isolation on BM.
In-Reply-To: <1999074695.62278013.1445468123501.JavaMail.zimbra@redhat.com>
Message-ID: <1616445595.62279728.1445468625069.JavaMail.zimbra@redhat.com>

Hi all,
We saw successful HA deployments today on BM with network isolation.

Still had to use the workaround from the below BZ for the overcloud to complete:
https://bugzilla.redhat.com/show_bug.cgi?id=1271289

Executed tempest on one setup:
======
Totals
======
Ran: 876 tests in 2893.0000 sec.
 - Passed: 607
 - Skipped: 79
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 190
Sum of execute time for each test: 2492.6529 sec.

==============
Worker Balance
==============
 - Worker 0 (90 tests) => 0:42:33.564868
 - Worker 1 (187 tests) => 0:29:58.702303
 - Worker 2 (127 tests) => 0:46:40.407688
 - Worker 3 (124 tests) => 0:41:33.320266
 - Worker 4 (225 tests) => 0:47:49.272869
 - Worker 5 (123 tests) => 0:29:48.752037

Thanks.

Best regards,
Sasha Chuzhoy.

From apevec at gmail.com Thu Oct 22 00:44:28 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 22 Oct 2015 02:44:28 +0200
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
In-Reply-To:
Message-ID:

> The RDO repository doesn't contain a package for PyMySQL.

This was explained in the thread Steve linked earlier in this thread:
https://www.redhat.com/archives/rdo-list/2015-October/msg00004.html

Cheers,
Alan

From mkassawara at gmail.com Thu Oct 22 00:51:07 2015
From: mkassawara at gmail.com (Matt Kassawara)
Date: Wed, 21 Oct 2015 18:51:07 -0600
Subject: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
Message-ID:

Yeah, except...

# yum search python-mysql
No matches found

On Wed, Oct 21, 2015 at 6:44 PM, Alan Pevec wrote:
> > The RDO repository doesn't contain a package for PyMySQL.
> This was explained in the thread Steve linked earlier in this thread:
> https://www.redhat.com/archives/rdo-list/2015-October/msg00004.html
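A sketch of the configuration difference under discussion, using keystone as an example; the file path and credentials are placeholders, not values from this thread:

# Illustrative only: flip a service's DB connection string between the
# two driver forms discussed above (openstack-config is the crudini
# wrapper shipped in openstack-utils).
# Liberty upstream default, which requires the PyMySQL package:
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:SECRET@controller/keystone
# Workaround shape for RDO while PyMySQL is unpackaged (MySQL-Python):
openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:SECRET@controller/keystone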
From mohammed.arafa at gmail.com Thu Oct 22 01:40:53 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Thu, 22 Oct 2015 03:40:53 +0200
Subject: [Rdo-list] [rdo-manager] liberty
Message-ID:

Hello,

I am installing rdo-manager from these docs (I hope they are the correct ones):
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html

What I have done so far:
- added a user
- set up sudo
- checked the hostname
- sudo yum install -y http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
- sudo yum install -y python-tripleoclient
- time openstack undercloud install

The errors I am getting are as follows:

+ set -o pipefail
+ '[' -x /usr/sbin/semanage ']'
+ semodule -i /opt/stack/selinux-policy/ipxe.pp
dib-run-parts Thu Oct 22 03:37:24 EET 2015 00-apply-selinux-policy completed
dib-run-parts Thu Oct 22 03:37:24 EET 2015 Running /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies
+ set -o pipefail
++ mktemp -d
+ TMPDIR=/tmp/tmp.2o0g4QF9tm
+ '[' -x /usr/sbin/semanage ']'
+ cd /tmp/tmp.2o0g4QF9tm
++ ls '/opt/stack/selinux-policy/*.te'
ls: cannot access /opt/stack/selinux-policy/*.te: No such file or directory
+ semodule -i '/tmp/tmp.2o0g4QF9tm/*.pp'
semodule: Failed on /tmp/tmp.2o0g4QF9tm/*.pp!
[2015-10-22 03:37:24,898] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
[2015-10-22 03:37:24,898] (os-refresh-config) [ERROR] Aborting...

What can I do to fix this?

From sasha at redhat.com Thu Oct 22 02:27:25 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Wed, 21 Oct 2015 22:27:25 -0400 (EDT)
Subject: [Rdo-list] [rdo-manager] liberty
Message-ID: <254968662.62317561.1445480845328.JavaMail.zimbra@redhat.com>

Hello,
Are the following steps missing?

sudo yum -y install epel-release
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Mohammed Arafa"
> Subject: [Rdo-list] [rdo-manager] liberty
> [...]

From mohammed.arafa at gmail.com Thu Oct 22 02:33:05 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Wed, 21 Oct 2015 22:33:05 -0400
Subject: [Rdo-list] [rdo-manager] liberty
In-Reply-To: <254968662.62317561.1445480845328.JavaMail.zimbra@redhat.com>
References: <254968662.62317561.1445480845328.JavaMail.zimbra@redhat.com>
Message-ID:

Apologies to all: I am installing on top of a previously failed install, so those steps have been done.
I also have instackenv.json verified.

On Wed, Oct 21, 2015 at 10:27 PM, Sasha Chuzhoy wrote:
> Hello,
> Are the following steps missing?
> [...]

From yeylon at redhat.com Thu Oct 22 05:57:25 2015
From: yeylon at redhat.com (Yaniv Eylon)
Date: Thu, 22 Oct 2015 08:57:25 +0300
Subject: Re: [Rdo-list] HA with network isolation on BM.
In-Reply-To: <1616445595.62279728.1445468625069.JavaMail.zimbra@redhat.com>
References: <1999074695.62278013.1445468123501.JavaMail.zimbra@redhat.com> <1616445595.62279728.1445468625069.JavaMail.zimbra@redhat.com>
Message-ID:

Sasha,
please work with someone from the automation team to review the tempest results and see what we are probably misconfiguring during the deployment that causes so many tempest failures.

On Thu, Oct 22, 2015 at 2:03 AM, Sasha Chuzhoy wrote:
> Hi all,
> We saw successful HA deployments today on BM with network isolation.
> [...]

--
Yaniv.

From trown at redhat.com Thu Oct 22 10:37:55 2015
From: trown at redhat.com (John Trowbridge)
Date: Thu, 22 Oct 2015 06:37:55 -0400
Subject: Re: [Rdo-list] [rdo-manager] liberty
In-Reply-To:
References: <254968662.62317561.1445480845328.JavaMail.zimbra@redhat.com>
Message-ID: <5628BC83.6020408@redhat.com>

On 10/21/2015 10:33 PM, Mohammed Arafa wrote:
> apologies to all
> i am installing on top of a previously failed install.

This is actually your issue. The previously failed install left some stuff in /usr/libexec/os-refresh-config. Try:

rm -rf /usr/libexec/os-refresh-config/*
openstack undercloud install

> so those steps have been done.
> i also have instackenv.json verified
> [...]

From chkumar246 at gmail.com Thu Oct 22 12:02:42 2015
From: chkumar246 at gmail.com (Chandan kumar)
Date: Thu, 22 Oct 2015 17:32:42 +0530
Subject: [Rdo-list] RDO Bug statistics for 22-10-2015
Message-ID:

# RDO Bugs on 2015-10-22

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database.
To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 328 - Fixed (MODIFIED, POST, ON_QA): 190 ## Number of open bugs by component diskimage-builder [ 4] ++ distribution [ 13] +++++++++ dnsmasq [ 1] Documentation [ 4] ++ instack [ 4] ++ instack-undercloud [ 28] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 11] ++++++++ openstack-cinder [ 14] ++++++++++ openstack-foreman-inst... [ 3] ++ openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 1] openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] +++++ openstack-manila [ 8] +++++ openstack-neutron [ 10] +++++++ openstack-nova [ 18] +++++++++++++ openstack-packstack [ 54] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] ++++++++ openstack-selinux [ 13] +++++++++ openstack-swift [ 2] + openstack-tripleo [ 24] +++++++++++++++++ openstack-tripleo-heat... [ 5] +++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 4] ++ openvswitch [ 1] Package Review [ 2] + python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] +++ python-oslo-config [ 1] rdo-manager [ 48] +++++++++++++++++++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (328 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (13 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-10-07 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: 
### diskimage-builder (4 bugs)
[1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version

### distribution (13 bugs)
[1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-10-07 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site-packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work

### dnsmasq (1 bug)
[1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network)

### Documentation (4 bugs)
[1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud

### instack (4 bugs)
[1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code

### instack-undercloud (28 bugs)
[1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build-images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty
instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (11 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1265721 ] http://bugzilla.redhat.com/1265721 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (NEW) Component: openstack-ceilometer Last change: 2015-09-23 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 
2015-05-07 Summary: Wrong alarms order on 'severity' field [1265818 ] http://bugzilla.redhat.com/1265818 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-28 Summary: ceilometer polling agent does not start ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 
Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (1 bug) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-manila (8 bugs) [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: puppet module for manila should include service type - shareV2 [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export 
location should be uniform for both nfsv3 & nfsv4 protocols [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume [1272954 ] http://bugzilla.redhat.com/1272954 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterFS_native_driver: snapshot delete doesn't delete snapshot entries that are in error state [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (10 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-20 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1272289 ] http://bugzilla.redhat.com/1272289 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-19 Summary: rdo-manager tempest smoke test failing on "floating ip pool not found' [1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-10-13 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1271838 ] http://bugzilla.redhat.com/1271838 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-20 Summary: Baremetal basic non-HA deployment fails due to failing module import by neutron [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (18 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: novnc init script doesnt write to log [1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-19 Summary: nova.conf.sample is out of date [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova AVC messages [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files ### openstack-packstack (54 bugs) [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last 
change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. [1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created. 
[1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link [1269255 ] http://bugzilla.redhat.com/1269255 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Failed to start RabbitMQ broker. [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. 
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd ### openstack-puppet-modules (11 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing ### openstack-selinux (13 bugs) [1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-10-20 Summary: Glance over nfs fails due to selinux [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-02 Summary: Nova rootwrap-daemon requires a selinux exception [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux ### openstack-swift (2 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 
Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (24 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] 
http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" ### openstack-tripleo-heat-templates (5 bugs) [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is 
stuck in DELETE_FAILED state ### openstack-utils (4 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (2 bugs) [1272524 ] http://bugzilla.redhat.com/1272524 (NEW) Component: Package Review Last change: 2015-10-16 Summary: Review Request: Mistral - workflow Service for OpenStack cloud [1272513 ] http://bugzilla.redhat.com/1272513 (NEW) Component: Package Review Last change: 2015-10-16 Summary: Review Request: Murano - is an application catalog for OpenStack ### python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] 
http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (48 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support configuration of default subnet pools [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: [SELinux][RHEL7] openstack-ironic-inspector- dnsmasq.service fails to start with SELinux enabled [1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support deploying VPNaaS [1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Enable configuration of OVS ARP Responder [1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Support IPv6 [1272572 ] http://bugzilla.redhat.com/1272572 (NEW) Component: rdo-manager Last change: 2015-10-20 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-20 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers. 
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint' [1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Support explicit configuration of L2 population [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install- packages install [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500" [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: [RFE] Support enabling the port security extension [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. 
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] support override of API and RPC worker counts [1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: overcloud-novacompute stuck in spawning state [1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) [1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after 
boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. (190 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (2 bugs) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency [1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2015-10-17 Summary: Ceilometer dbsync failing during HA deployment ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not 
support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (4 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (14 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. ### openstack-packstack (60 bugs) [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now) [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) 
Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution ### openstack-puppet-modules (19 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector 
for instack on node instack [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation ### openstack-sahara (1 bug) [1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM ### openstack-selinux (12 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" 
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (1 bug) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### Package Review (1 bug) [1243550 ] http://bugzilla.redhat.com/1243550 (ON_QA) Component: Package Review Last change: 2015-10-09 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 
Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-django-openstack-auth (1 bug) [1218894 ] http://bugzilla.redhat.com/1218894 (ON_QA) Component: python-django-openstack-auth Last change: 2015-10-16 Summary: Horizon: Re login failed after timeout ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient 
Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (10 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1268992 ] http://bugzilla.redhat.com/1268992 (MODIFIED) Component: rdo-manager Last change: 2015-10-08 Summary: [RDO-Manager][Liberty] : openstack baremetal introspection bulk start causes "Internal server error" ( introspection fails) . [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load [1272180 ] http://bugzilla.redhat.com/1272180 (MODIFIED) Component: rdo-manager Last change: 2015-10-19 Summary: Horizon doesn't load when deploying without pacemaker [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (9 bugs) [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . 
[1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases
[1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI
[1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated.
[1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values

### rdopkg (1 bug)

[1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository

Thanks,
Chandan Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From whayutin at redhat.com Thu Oct 22 13:21:26 2015
From: whayutin at redhat.com (Wesley Hayutin)
Date: Thu, 22 Oct 2015 09:21:26 -0400
Subject: [Rdo-list] [CI] tempest swift failures in HA deployments of rdo-manager
Message-ID:

FYI
https://bugzilla.redhat.com/show_bug.cgi?id=1274308
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alan.pevec at redhat.com Thu Oct 22 13:27:42 2015
From: alan.pevec at redhat.com (Alan Pevec)
Date: Thu, 22 Oct 2015 15:27:42 +0200
Subject: [Rdo-list] RDO Liberty released in CentOS Cloud SIG
Message-ID: <5628E44E.4040100@redhat.com>

I am pleased to announce the general availability of the RDO build for OpenStack Liberty for CentOS Linux 7 x86_64, suitable for building private, public and hybrid clouds. OpenStack Liberty is the 12th release of the open source software collaboratively built by a large number of contributors around the OpenStack.org project space.

The RDO community project ( https://www.rdoproject.org/ ) curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a founding member of the CentOS Cloud Infrastructure SIG ( https://wiki.centos.org/SpecialInterestGroup/Cloud ). The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

In addition to the comprehensive OpenStack services, libraries and clients, this release also provides Packstack, a simple installer for proof-of-concept installations as small as a single all-in-one box, and RDO Manager ( https://www.rdoproject.org/RDO-Manager ), an OpenStack deployment and management tool for production environments based on the OpenStack TripleO project.

-------
QuickStart:
Ensure you have a fully updated CentOS Linux 7/x86_64 machine, and run:

sudo yum install centos-release-openstack-liberty
sudo yum install openstack-packstack
packstack --allinone

For a more detailed quickstart please refer to the RDO Project hosted guide at https://www.rdoproject.org/QuickStart
For RDO Manager, consult the https://www.rdoproject.org/RDO-Manager page.

The RDO project closely tracks upstream OpenStack projects using the Delorean tool[1], which produces RPM packages from upstream development branches. Since the previous OpenStack Kilo release, RDO has been participating in the Cloud SIG and using CentOS-provided infrastructure.
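For those who want to experiment with the trunk packages directly, enabling a Delorean repository is normally just a matter of dropping its .repo file into yum's configuration. A minimal sketch, assuming the usual current-passed-ci layout under [1] (the exact path, and the repo id inside the file - typically [delorean] - can change over time):

sudo curl -o /etc/yum.repos.d/delorean.repo \
    http://trunk.rdoproject.org/liberty/centos7/current-passed-ci/delorean.repo
sudo yum install openstack-packstack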
Towards the end of the development cycle, packages are imported into the CentOS Cloud SIG buildsystem[2] and are eventually published in the Cloud SIG repositories[3].

[1] http://trunk.rdoproject.org/
[2] http://wiki.centos.org/HowTos/CommunityBuildSystem
[3] http://mirror.centos.org/centos/7/cloud/x86_64/

--------
Getting Help:
The RDO Project provides a Q&A service at ask.openstack.org; for more developer-oriented content we recommend joining the mailing list at https://www.redhat.com/mailman/listinfo/rdo-list. Remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation at https://www.rdoproject.org/Docs.

We also welcome comments and requests on the CentOS Mailing lists ( https://lists.centos.org/ ) and the CentOS IRC Channels ( #centos on irc.freenode.net ); however, we have a more focused audience in the RDO venues.

To get involved in the OpenStack RPM packaging effort, see https://www.rdoproject.org/Get_involved and https://wiki.centos.org/SpecialInterestGroup/Cloud

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. And, if you're going to be in Tokyo for the OpenStack Summit next week, join us on Wednesday at lunch for the RDO community meetup ( http://sched.co/4MYy ).

I'd like to thank all RDO developers and the CentOS Project for their effort and support resulting in this release, especially:

dmsimard - for continuously improving RDO CI
jpena - for keeping the Delorean service up and running
jruzicka - for the rdopkg auto-magic
number80 - for countless reviews and packaging wisdom
social - for puppet module mastery
trown - for leading the RDO Manager side of the show!

Special thanks to all the folks who helped with last-minute testing in the #rdo IRC channel!

Thanks,
Alan Pevec
Cloud SIG and RDO project member

From ohochman at redhat.com Thu Oct 22 13:59:51 2015
From: ohochman at redhat.com (Omri Hochman)
Date: Thu, 22 Oct 2015 09:59:51 -0400 (EDT)
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To:
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com>
Message-ID: <1623678732.57145402.1445522391221.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Pedro Sousa"
> To: "Omri Hochman"
> Cc: "rdo-list"
> Sent: Wednesday, October 21, 2015 3:10:22 PM
> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
>
> Hi Omri,
>
> I'll test it out thanks. Did you build your overcloud images based on
> Liberty?
>
> export RDO_RELEASE='liberty'
> openstack overcloud image build --all

Yes - that's how we built the images.

Omri.
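For anyone following along at home, the full image-build step on the undercloud looks roughly like the sketch below. The DIB_YUM_REPO_CONF value assumes the stock Delorean repo files (per bug 1268990, image building fails without it), and the final upload command is the usual follow-on that registers the built images in glance - adjust both to your environment:

export RDO_RELEASE='liberty'
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
openstack overcloud image build --all
openstack overcloud image upload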
> > > > On Wed, Oct 21, 2015 at 7:30 PM, Omri Hochman wrote: > > > > > > > ----- Original Message ----- > > > From: "Omri Hochman" > > > To: "Pedro Sousa" > > > Cc: "rdo-list" > > > Sent: Wednesday, October 21, 2015 11:16:03 AM > > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > > > > > > > > > > > > ----- Original Message ----- > > > > From: "Pedro Sousa" > > > > To: "John Trowbridge" > > > > Cc: "rdo-list" > > > > Sent: Wednesday, October 21, 2015 7:10:38 AM > > > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > > > > > > > > Hi John, > > > > > > > > I've managed to install on baremetal following this howto: > > > > https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on > > > > liberty) > > > > > > Hey Pedro, > > > > > > Are you using: yum install -y > > > http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm > > > to get the latest RDO GA bits ? > > > > > > We're failing in overcloud deployment on BM with several issues. > > > > Actually, an update : > > > > After using the workaround from this issue: > > https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9 > > > > We've manage to get HA on Bare-Metal (*using the latest > > rdo-release-liberty.rpm) > > > > That was the deployment command : > > > > openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 > > --ceph-storage-scale 1 -e > > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > > -e /home/stack/network-environment.yaml -e > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types > > vxlan --timeout 90 > > > > > > > > Thanks, > > > Omri. > > > > > > > > > > > I have 3 Controllers + 1 Compute (HA and Network Isolation). However > > I'm > > > > having some issues logging on (maybe some keystone issue) and some > > issue > > > > with openvswitch that I'm trying to address with Marius Cornea help. > > > > > > > > Regards, > > > > Pedro Sousa > > > > > > > > > > > > On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com > > > > > wrote: > > > > > > > > > > > > Hola rdoers, > > > > > > > > The plan is to GA RDO Liberty today (woot!), so I wanted to send out a > > > > status update for the RDO Manager installer. I would also like to > > gather > > > > feedback on how other community participants feel about that status as > > > > it relates to RDO Manager participating in the GA. That feedback can > > > > come as replies to this thread, or even better there is a packaging > > > > meeting on #rdo at 1500 UTC today and we can discuss it further then. > > > > > > > > tldr; > > > > RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on > > > > virtual hardware have been verified to work with GA bits, however bare > > > > metal installs have not yet been verified. > > > > > > > > I would like to start with some historical context here, as it seems we > > > > have picked up quite a few new active community members recently (again > > > > woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a > > > > successful end to end demo with a single controller and single compute > > > > node, and only by using a special delorean server pulling bits from a > > > > special github organization (rdo-management). We were able to get it > > > > consistently deploying **virtual** HA w/ ceph in CI by the middle of > > the > > > > Liberty upstream cycle. 
Then, due largely to the fact that there was > > > > nobody being paid to work full time on RDO Manager, and the people who > > > > were contributing in more or less "extra" time were getting swamped > > with > > > > releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief > > > > 24 hour periods where someone would spend a weekend fixing things only > > > > to have it break again early the following week. > > > > > > > > There have been many improvements in the recent weeks to this sad state > > > > of affairs. Firstly, we have upstreamed almost everything from the > > > > rdo-management github org directly into openstack projects. Secondly, > > > > there is a single source for delorean packages for both core openstack > > > > packages and the tripleo and ironic packages that make up RDO Manager. > > > > These two things may seem a bit trivial to a newcomer to the project, > > > > but they are actually fixes for the biggest cause of the RDO Manager > > > > Kilo CI breaking. I think with those two fixes (plus some work on > > > > upstream tripleo CI) we have set ourselves up to make steady forward > > > > progress rather than spending all our time troubleshooting complete > > > > breakages. (Although this is still openstack so complete breakages will > > > > still happen from time to time :p) > > > > > > > > Another very easy to overlook improvement over where we were at Kilo > > GA, > > > > is that we actually have all RDO Manager packages (minus a couple EPEL > > > > dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we > > > > did not even have everything officially packaged, rather only in our > > > > special delorean instance. > > > > > > > > All this leads to my opinion that RDO Manager should participate in the > > > > RDO GA. I am unconvinced that bare metal installs can not be made to > > > > work with some extra documentation or configuration changes. However, > > > > even if that is not the case, we are in a drastically better place than > > > > we were at the beginning of the Kilo cycle. > > > > > > > > That said, this is a community, and I would like to hear how other > > > > community participants both from RDO in general and RDO Manager > > > > specifically feel about this. Ideally, if someone thinks the RDO > > Manager > > > > release should be blocked, there should be a BZ with the blocker flag > > > > proposed so that there is actionable criteria to unblock the release. > > > > > > > > Thanks for all your hard work to get to this point, and lets keep it > > > > rolling. 
> > > > > > > > -trown > > > > > > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > From pgsousa at gmail.com Thu Oct 22 14:00:41 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 22 Oct 2015 15:00:41 +0100 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: Hi Marius, I successfully managed to deploy overcloud with http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm instead of delorean repos. I also built the images based on liberty. Thanks :) A side note, is there a way to disable cinder when deploying? Because I get *Error: **Unable to retrieve volume limit information.* horizon errors. On Wed, Oct 21, 2015 at 6:41 PM, Marius Cornea wrote: > It's definitely a bug, the deployment shouldn't pass without > completing keystone init. What's the content of your > network-environment.yaml? > > I'm not sure if this relates but it's worth trying an installation > with the GA bits, the docs are being updated to describe the steps. > Some useful notes can be found here: > https://etherpad.openstack.org/p/RDO-Manager_liberty > > trown ? mcornea: the important bit is to use `yum install -y > http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm` > for undercloud repos, and `export RDO_RELEASE='liberty'` for image > build > > On Wed, Oct 21, 2015 at 6:54 PM, Pedro Sousa wrote: > > Yes, I've done that already, however it never runs keystone init. Is it > > something wrong in my deployment command "openstack overcloud deploy" or > do > > you think it's a bug/conf issue? > > > > Thanks > > > > On Wed, Oct 21, 2015 at 5:50 PM, Marius Cornea > > wrote: > >> > >> To delete the overcloud you need to run heat stack-delete overcloud > >> and wait until it finishes(check heat stack-list) > >> > >> On Wed, Oct 21, 2015 at 6:29 PM, Pedro Sousa wrote: > >> > You're right, I didn't get that output, keystone init didn't run: > >> > > >> > $ openstack overcloud deploy --control-scale 3 --compute-scale 1 > >> > --libvirt-type kvm --ntp-server pool.ntp.org --templates > ~/the-cloud/ -e > >> > ~/the-cloud/environments/puppet-pacemaker.yaml -e > >> > ~/the-cloud/environments/network-isolation.yaml -e > >> > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > >> > ~/the-cloud/environments/network-environment.yaml --control-flavor > >> > controller --compute-flavor compute > >> > > >> > Deploying templates in the directory /home/stack/the-cloud > >> > Overcloud Endpoint: http://192.168.174.35:5000/v2.0/ > >> > Overcloud Deployed > >> > > >> > > >> > In fact I have some mysql errors in my controllers, see below. Is > there > >> > a > >> > way to redeploy? Because I've run "openstack overcloud deploy" and > >> > nothing > >> > happens. 
> >> > > >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: > >> > [2015-10-21 > >> > 14:21:50,903] (heat-config) [INFO] Error: Could not prefetch > mysql_user > >> > provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT > CONCAT(User, > >> > '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): > Can't > >> > connect to local MySQL server through socket > '/var/lib/mysql/mysql.sock' > >> > (2) > >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: > Error: > >> > Could not prefetch mysql_database provider 'mysql': Execution of > >> > '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): > >> > Can't > >> > connect to local MySQL server through socket > '/var/lib/mysql/mysql.sock' > >> > (2) > >> > > >> > Thanks > >> > > >> > > >> > > >> > > >> > > >> > > >> > On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea > > >> > wrote: > >> >> > >> >> I believe the keystone init failed. It is done in a postconfig step > >> >> via ssh on the public VIP(see lines 3-13 in > >> >> https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get > >> >> that kind of output for the deploy command? > >> >> > >> >> Try also journalctl -l -u os-collect-config | grep -i error on the > >> >> controller nodes, it should indicate if something went wrong during > >> >> deployment. > >> >> > >> >> On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa > wrote: > >> >> > Hi Marius, > >> >> > > >> >> > your tip worked fine thanks, bridges seems to be correctly created, > >> >> > however > >> >> > I still cannot login, seems some keystone problem: > >> >> > > >> >> > #keystone --debug tenant-list > >> >> > > >> >> > DEBUG:keystoneclient.auth.identity.v2:Making authentication request > >> >> > to > >> >> > http://192.168.174.35:5000/v2.0/tokens > >> >> > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP > >> >> > connection > >> >> > (1): 192.168.174.35 > >> >> > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens > >> >> > HTTP/1.1" > >> >> > 401 114 > >> >> > DEBUG:keystoneclient.session:Request returned failure status: 401 > >> >> > DEBUG:keystoneclient.v2_0.client:Authorization Failed. > >> >> > The request you have made requires authentication. (HTTP 401) > >> >> > (Request-ID: > >> >> > req-accee3b3-b552-4c6b-ac39-d0791b5c1390) > >> >> > > >> >> > Did you had this issue when deployed on virtual? > >> >> > > >> >> > Regards > >> >> > > >> >> > > >> >> > > >> >> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea > >> >> > > >> >> > wrote: > >> >> >> > >> >> >> Here's an adjusted controller.yaml which disables DHCP on the > first > >> >> >> nic: enp1s0f0 so it doesn't get an IP address > >> >> >> http://paste.openstack.org/show/476981/ > >> >> >> > >> >> >> Please note that this assumes that your overcloud nodes are PXE > >> >> >> booting on the 2nd NIC(basically disabling the 1st nic) > >> >> >> > >> >> >> Given your setup(I'm doing some assumptions here so I might be > >> >> >> wrong) > >> >> >> I would use the 1st nic for PXE booting and provisioning network > and > >> >> >> 2nd nic for running the isolated networks with this kind of > >> >> >> template: > >> >> >> http://paste.openstack.org/show/476986/ > >> >> >> > >> >> >> Let me know if it works for you. > >> >> >> > >> >> >> Thanks, > >> >> >> Marius > >> >> >> > >> >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa > >> >> >> wrote: > >> >> >> > Hi, > >> >> >> > > >> >> >> > here you go. 
> >> >> >> > > >> >> >> > Regards, > >> >> >> > Pedro Sousa > >> >> >> > > >> >> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea > >> >> >> > > >> >> >> > wrote: > >> >> >> >> > >> >> >> >> Hi Pedro, > >> >> >> >> > >> >> >> >> One issue I can quickly see is that br-ex has assigned the same > >> >> >> >> IP > >> >> >> >> address as enp1s0f0. Can you post the nic templates you used > for > >> >> >> >> deployment? > >> >> >> >> > >> >> >> >> 2: enp1s0f0: mtu 1500 qdisc > mq > >> >> >> >> state > >> >> >> >> UP qlen 1000 > >> >> >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global > dynamic > >> >> >> >> enp1s0f0 > >> >> >> >> 9: br-ex: mtu 1500 qdisc > >> >> >> >> noqueue > >> >> >> >> state > >> >> >> >> UNKNOWN > >> >> >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global > br-ex > >> >> >> >> > >> >> >> >> Thanks, > >> >> >> >> Marius > >> >> >> >> > >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa < > pgsousa at gmail.com> > >> >> >> >> wrote: > >> >> >> >> > Hi Marius, > >> >> >> >> > > >> >> >> >> > I've followed your howto and managed to get overcloud > deployed > >> >> >> >> > in > >> >> >> >> > HA, > >> >> >> >> > thanks. However I cannot login to it (via CLI or Horizon) : > >> >> >> >> > > >> >> >> >> > ERROR (Unauthorized): The request you have made requires > >> >> >> >> > authentication. > >> >> >> >> > (HTTP 401) (Request-ID: > >> >> >> >> > req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > >> >> >> >> > > >> >> >> >> > So I rebooted the controllers and now I cannot login through > >> >> >> >> > Provisioning > >> >> >> >> > network, seems some openvswitch bridge conf problem, heres my > >> >> >> >> > conf: > >> >> >> >> > > >> >> >> >> > # ip a > >> >> >> >> > 1: lo: mtu 65536 qdisc noqueue state > >> >> >> >> > UNKNOWN > >> >> >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >> >> >> >> > inet 127.0.0.1/8 scope host lo > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet6 ::1/128 scope host > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 2: enp1s0f0: mtu 1500 qdisc > >> >> >> >> > mq > >> >> >> >> > state > >> >> >> >> > UP > >> >> >> >> > qlen 1000 > >> >> >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global > >> >> >> >> > dynamic > >> >> >> >> > enp1s0f0 > >> >> >> >> > valid_lft 84562sec preferred_lft 84562sec > >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 3: enp1s0f1: mtu 1500 qdisc > >> >> >> >> > mq > >> >> >> >> > master > >> >> >> >> > ovs-system state UP qlen 1000 > >> >> >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 4: ovs-system: mtu 1500 qdisc noop > state > >> >> >> >> > DOWN > >> >> >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > 5: br-tun: mtu 1500 qdisc noop state > DOWN > >> >> >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > 6: vlan20: mtu 1500 qdisc > >> >> >> >> > noqueue > >> >> >> >> > state > >> >> >> >> > UNKNOWN > >> >> >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global > >> >> >> >> > vlan20 > >> >> >> >> 
> valid_lft forever preferred_lft forever > >> >> >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global > >> >> >> >> > vlan20 > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 7: vlan40: mtu 1500 qdisc > >> >> >> >> > noqueue > >> >> >> >> > state > >> >> >> >> > UNKNOWN > >> >> >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global > >> >> >> >> > vlan40 > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 8: vlan174: mtu 1500 qdisc > >> >> >> >> > noqueue > >> >> >> >> > state > >> >> >> >> > UNKNOWN > >> >> >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global > >> >> >> >> > vlan174 > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global > >> >> >> >> > vlan174 > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 9: br-ex: mtu 1500 qdisc > >> >> >> >> > noqueue > >> >> >> >> > state > >> >> >> >> > UNKNOWN > >> >> >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global > br-ex > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 10: vlan50: mtu 1500 qdisc > >> >> >> >> > noqueue > >> >> >> >> > state > >> >> >> >> > UNKNOWN > >> >> >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 11: vlan30: mtu 1500 qdisc > >> >> >> >> > noqueue > >> >> >> >> > state > >> >> >> >> > UNKNOWN > >> >> >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff > >> >> >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global > >> >> >> >> > vlan30 > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global > >> >> >> >> > vlan30 > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link > >> >> >> >> > valid_lft forever preferred_lft forever > >> >> >> >> > 12: br-int: mtu 1500 qdisc noop state > >> >> >> >> > DOWN > >> >> >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > # ovs-vsctl show > >> >> >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 > >> >> >> >> > Bridge br-ex > >> >> >> >> > Port br-ex > >> >> >> >> > Interface br-ex > >> >> >> >> > type: internal > >> >> >> >> > Port "enp1s0f1" > >> >> >> >> > Interface "enp1s0f1" > >> >> >> >> > Port "vlan40" > >> >> >> >> > tag: 40 > >> >> >> >> > Interface "vlan40" > >> >> >> >> > type: internal > >> >> >> >> > Port "vlan20" > >> >> >> >> > tag: 20 > >> >> >> >> > Interface "vlan20" > >> >> >> >> > type: internal > >> >> >> >> > Port phy-br-ex > >> >> >> >> > 
Interface phy-br-ex > >> >> >> >> > type: patch > >> >> >> >> > options: {peer=int-br-ex} > >> >> >> >> > Port "vlan50" > >> >> >> >> > tag: 50 > >> >> >> >> > Interface "vlan50" > >> >> >> >> > type: internal > >> >> >> >> > Port "vlan30" > >> >> >> >> > tag: 30 > >> >> >> >> > Interface "vlan30" > >> >> >> >> > type: internal > >> >> >> >> > Port "vlan174" > >> >> >> >> > tag: 174 > >> >> >> >> > Interface "vlan174" > >> >> >> >> > type: internal > >> >> >> >> > Bridge br-int > >> >> >> >> > fail_mode: secure > >> >> >> >> > Port br-int > >> >> >> >> > Interface br-int > >> >> >> >> > type: internal > >> >> >> >> > Port patch-tun > >> >> >> >> > Interface patch-tun > >> >> >> >> > type: patch > >> >> >> >> > options: {peer=patch-int} > >> >> >> >> > Port int-br-ex > >> >> >> >> > Interface int-br-ex > >> >> >> >> > type: patch > >> >> >> >> > options: {peer=phy-br-ex} > >> >> >> >> > Bridge br-tun > >> >> >> >> > fail_mode: secure > >> >> >> >> > Port "gre-0a00140b" > >> >> >> >> > Interface "gre-0a00140b" > >> >> >> >> > type: gre > >> >> >> >> > options: {df_default="true", in_key=flow, > >> >> >> >> > local_ip="10.0.20.10", > >> >> >> >> > out_key=flow, remote_ip="10.0.20.11"} > >> >> >> >> > Port patch-int > >> >> >> >> > Interface patch-int > >> >> >> >> > type: patch > >> >> >> >> > options: {peer=patch-tun} > >> >> >> >> > Port "gre-0a00140d" > >> >> >> >> > Interface "gre-0a00140d" > >> >> >> >> > type: gre > >> >> >> >> > options: {df_default="true", in_key=flow, > >> >> >> >> > local_ip="10.0.20.10", > >> >> >> >> > out_key=flow, remote_ip="10.0.20.13"} > >> >> >> >> > Port "gre-0a00140c" > >> >> >> >> > Interface "gre-0a00140c" > >> >> >> >> > type: gre > >> >> >> >> > options: {df_default="true", in_key=flow, > >> >> >> >> > local_ip="10.0.20.10", > >> >> >> >> > out_key=flow, remote_ip="10.0.20.12"} > >> >> >> >> > Port br-tun > >> >> >> >> > Interface br-tun > >> >> >> >> > type: internal > >> >> >> >> > ovs_version: "2.4.0" > >> >> >> >> > > >> >> >> >> > Regards, > >> >> >> >> > Pedro Sousa > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > >> >> >> >> > > >> >> >> >> > wrote: > >> >> >> >> >> > >> >> >> >> >> Hi everyone, > >> >> >> >> >> > >> >> >> >> >> I wrote a blog post about how to deploy a HA with network > >> >> >> >> >> isolation > >> >> >> >> >> overcloud on top of the virtual environment. I tried to > >> >> >> >> >> provide > >> >> >> >> >> some > >> >> >> >> >> insights into what instack-virt-setup creates and how to use > >> >> >> >> >> the > >> >> >> >> >> network isolation templates in the virtual environment. I > hope > >> >> >> >> >> you > >> >> >> >> >> find it useful. > >> >> >> >> >> > >> >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ > >> >> >> >> >> > >> >> >> >> >> Thanks, > >> >> >> >> >> Marius > >> >> >> >> >> > >> >> >> >> >> _______________________________________________ > >> >> >> >> >> Rdo-list mailing list > >> >> >> >> >> Rdo-list at redhat.com > >> >> >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list > >> >> >> >> >> > >> >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> >> >> > > >> >> >> >> > > >> >> >> > > >> >> >> > > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pgsousa at gmail.com Thu Oct 22 14:46:25 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 22 Oct 2015 15:46:25 +0100 Subject: [Rdo-list] RDO Manager status for Liberty GA In-Reply-To: <1623678732.57145402.1445522391221.JavaMail.zimbra@redhat.com> References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com> <1623678732.57145402.1445522391221.JavaMail.zimbra@redhat.com> Message-ID: Hi, I've managed to install it, thanks. Another question, how do I scale the compute nodes? I've tried: openstack overcloud deploy --compute-scale 2 --templates ~/the-cloud/ however I get this error: [stack at undercloud ~]$ openstack overcloud deploy --compute-scale 2 --templates ~/the-cloud/ Deploying templates in the directory /home/stack/the-cloud Stack failed with status: resources.Controller: resources[2]: BadRequest: resources.Controller: No valid host was found. No valid host found for resize (HTTP 400) (Request-ID: req-d5ab7497-ea7f-4ecf-ae8c-122f1637c5ea) Heat Stack update failed. Thanks On Thu, Oct 22, 2015 at 2:59 PM, Omri Hochman wrote: > > > ----- Original Message ----- > > From: "Pedro Sousa" > > To: "Omri Hochman" > > Cc: "rdo-list" > > Sent: Wednesday, October 21, 2015 3:10:22 PM > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > > > > Hi Omri, > > > > I'll test it out thanks. Did you build your overcloud images based on > > Liberty? > > > > export RDO_RELEASE='liberty' > > openstack overcloud image build --all > > Yes - that's how we built the images. > > Omri. > > > > > > > > On Wed, Oct 21, 2015 at 7:30 PM, Omri Hochman > wrote: > > > > > > > > > > > ----- Original Message ----- > > > > From: "Omri Hochman" > > > > To: "Pedro Sousa" > > > > Cc: "rdo-list" > > > > Sent: Wednesday, October 21, 2015 11:16:03 AM > > > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Pedro Sousa" > > > > > To: "John Trowbridge" > > > > > Cc: "rdo-list" > > > > > Sent: Wednesday, October 21, 2015 7:10:38 AM > > > > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > > > > > > > > > > Hi John, > > > > > > > > > > I've managed to install on baremetal following this howto: > > > > > https://remote-lab.net/rdo-manager-ha-openstack-deployment/ > (based on > > > > > liberty) > > > > > > > > Hey Pedro, > > > > > > > > Are you using: yum install -y > > > > > http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm > > > > to get the latest RDO GA bits ? > > > > > > > > We're failing in overcloud deployment on BM with several issues. > > > > > > Actually, an update : > > > > > > After using the workaround from this issue: > > > https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9 > > > > > > We've manage to get HA on Bare-Metal (*using the latest > > > rdo-release-liberty.rpm) > > > > > > That was the deployment command : > > > > > > openstack overcloud deploy --templates --control-scale 3 > --compute-scale 1 > > > --ceph-storage-scale 1 -e > > > > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > > > -e /home/stack/network-environment.yaml -e > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > --ntp-server 10.5.26.10 --neutron-network-type vxlan > --neutron-tunnel-types > > > vxlan --timeout 90 > > > > > > > > > > > Thanks, > > > > Omri. 
> > > > > > > > > > > > > > I have 3 Controllers + 1 Compute (HA and Network Isolation). > However > > > I'm > > > > > having some issues logging on (maybe some keystone issue) and some > > > issue > > > > > with openvswitch that I'm trying to address with Marius Cornea > help. > > > > > > > > > > Regards, > > > > > Pedro Sousa > > > > > > > > > > > > > > > On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < > trown at redhat.com > > > > > > wrote: > > > > > > > > > > > > > > > Hola rdoers, > > > > > > > > > > The plan is to GA RDO Liberty today (woot!), so I wanted to send > out a > > > > > status update for the RDO Manager installer. I would also like to > > > gather > > > > > feedback on how other community participants feel about that > status as > > > > > it relates to RDO Manager participating in the GA. That feedback > can > > > > > come as replies to this thread, or even better there is a packaging > > > > > meeting on #rdo at 1500 UTC today and we can discuss it further > then. > > > > > > > > > > tldr; > > > > > RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on > > > > > virtual hardware have been verified to work with GA bits, however > bare > > > > > metal installs have not yet been verified. > > > > > > > > > > I would like to start with some historical context here, as it > seems we > > > > > have picked up quite a few new active community members recently > (again > > > > > woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a > > > > > successful end to end demo with a single controller and single > compute > > > > > node, and only by using a special delorean server pulling bits > from a > > > > > special github organization (rdo-management). We were able to get > it > > > > > consistently deploying **virtual** HA w/ ceph in CI by the middle > of > > > the > > > > > Liberty upstream cycle. Then, due largely to the fact that there > was > > > > > nobody being paid to work full time on RDO Manager, and the people > who > > > > > were contributing in more or less "extra" time were getting swamped > > > with > > > > > releasing RHEL OSP 7, CI on the Kilo bits became mostly red with > brief > > > > > 24 hour periods where someone would spend a weekend fixing things > only > > > > > to have it break again early the following week. > > > > > > > > > > There have been many improvements in the recent weeks to this sad > state > > > > > of affairs. Firstly, we have upstreamed almost everything from the > > > > > rdo-management github org directly into openstack projects. > Secondly, > > > > > there is a single source for delorean packages for both core > openstack > > > > > packages and the tripleo and ironic packages that make up RDO > Manager. > > > > > These two things may seem a bit trivial to a newcomer to the > project, > > > > > but they are actually fixes for the biggest cause of the RDO > Manager > > > > > Kilo CI breaking. I think with those two fixes (plus some work on > > > > > upstream tripleo CI) we have set ourselves up to make steady > forward > > > > > progress rather than spending all our time troubleshooting complete > > > > > breakages. (Although this is still openstack so complete breakages > will > > > > > still happen from time to time :p) > > > > > > > > > > Another very easy to overlook improvement over where we were at > Kilo > > > GA, > > > > > is that we actually have all RDO Manager packages (minus a couple > EPEL > > > > > dep stragglers[1]) in the official RDO GA repo. 
> > > > > When RDO Kilo GA'd, we did not even have everything officially
> > > > > packaged, rather only in our special delorean instance.
> > > > >
> > > > > All this leads to my opinion that RDO Manager should participate in the
> > > > > RDO GA. I am unconvinced that bare metal installs can not be made to
> > > > > work with some extra documentation or configuration changes. However,
> > > > > even if that is not the case, we are in a drastically better place than
> > > > > we were at the beginning of the Kilo cycle.
> > > > >
> > > > > That said, this is a community, and I would like to hear how other
> > > > > community participants both from RDO in general and RDO Manager
> > > > > specifically feel about this. Ideally, if someone thinks the RDO Manager
> > > > > release should be blocked, there should be a BZ with the blocker flag
> > > > > proposed so that there is actionable criteria to unblock the release.
> > > > >
> > > > > Thanks for all your hard work to get to this point, and lets keep it
> > > > > rolling.
> > > > >
> > > > > -trown
> > > > >
> > > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541
> > > > >
> > > > > _______________________________________________
> > > > > Rdo-list mailing list
> > > > > Rdo-list at redhat.com
> > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > >
> > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > >
> > > > > _______________________________________________
> > > > > Rdo-list mailing list
> > > > > Rdo-list at redhat.com
> > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > >
> > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > _______________________________________________
> > > > Rdo-list mailing list
> > > > Rdo-list at redhat.com
> > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > >
> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > >

From ohochman at redhat.com Thu Oct 22 14:59:42 2015
From: ohochman at redhat.com (Omri Hochman)
Date: Thu, 22 Oct 2015 10:59:42 -0400 (EDT)
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To:
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com> <1623678732.57145402.1445522391221.JavaMail.zimbra@redhat.com>
Message-ID: <500060841.57186665.1445525982171.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Pedro Sousa"
> To: "Omri Hochman"
> Cc: "rdo-list"
> Sent: Thursday, October 22, 2015 10:46:25 AM
> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
>
> Hi,
>
> I've managed to install it, thanks. Another question, how do I scale the
> compute nodes?

You should use the exact same command you used for the original overcloud
deployment (including all of its parameters) and just change
'--compute-scale X' to '--compute-scale X+Num', i.e. the old compute count
plus the number of nodes you are adding.
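For example -- this is only an illustrative sketch reusing the bare-metal
deployment command from earlier in this thread; your template paths,
environment files, node counts and NTP server will be whatever you actually
deployed with. If the original run was:

openstack overcloud deploy --templates --control-scale 3 --compute-scale 1
--ceph-storage-scale 1 -e
/usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
-e /home/stack/network-environment.yaml -e
/usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
--ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types
vxlan --timeout 90

then going from 1 to 2 computes is the identical command with only that one
number changed:

openstack overcloud deploy --templates --control-scale 3 --compute-scale 2
--ceph-storage-scale 1 -e
/usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
-e /home/stack/network-environment.yaml -e
/usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
--ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types
vxlan --timeout 90

I believe any parameter you leave out falls back to its default during the
stack update (for example a single controller), which may be why Heat is
complaining about the Controller resources in the error you pasted below.
You also need enough free registered ironic nodes to cover the new count.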
Regards,
Omri

> I've tried: openstack overcloud deploy --compute-scale 2 --templates
> ~/the-cloud/ however I get this error:
>
> [stack at undercloud ~]$ openstack overcloud deploy --compute-scale 2
> --templates ~/the-cloud/
> Deploying templates in the directory /home/stack/the-cloud
> Stack failed with status: resources.Controller: resources[2]: BadRequest:
> resources.Controller: No valid host was found. No valid host found for
> resize (HTTP 400) (Request-ID: req-d5ab7497-ea7f-4ecf-ae8c-122f1637c5ea)
> Heat Stack update failed.
>
> Thanks
>
> On Thu, Oct 22, 2015 at 2:59 PM, Omri Hochman wrote:
> >
> > ----- Original Message -----
> > > From: "Pedro Sousa"
> > > To: "Omri Hochman"
> > > Cc: "rdo-list"
> > > Sent: Wednesday, October 21, 2015 3:10:22 PM
> > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
> > >
> > > Hi Omri,
> > >
> > > I'll test it out thanks. Did you build your overcloud images based on
> > > Liberty?
> > >
> > > export RDO_RELEASE='liberty'
> > > openstack overcloud image build --all
> >
> > Yes - that's how we built the images.
> >
> > Omri.
> >
> > > On Wed, Oct 21, 2015 at 7:30 PM, Omri Hochman wrote:
> > > >
> > > > ----- Original Message -----
> > > > > From: "Omri Hochman"
> > > > > To: "Pedro Sousa"
> > > > > Cc: "rdo-list"
> > > > > Sent: Wednesday, October 21, 2015 11:16:03 AM
> > > > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Pedro Sousa"
> > > > > > To: "John Trowbridge"
> > > > > > Cc: "rdo-list"
> > > > > > Sent: Wednesday, October 21, 2015 7:10:38 AM
> > > > > > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
> > > > > >
> > > > > > Hi John,
> > > > > >
> > > > > > I've managed to install on baremetal following this howto:
> > > > > > https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on
> > > > > > liberty)
> > > > >
> > > > > Hey Pedro,
> > > > >
> > > > > Are you using: yum install -y
> > > > > http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
> > > > > to get the latest RDO GA bits ?
> > > > >
> > > > > We're failing in overcloud deployment on BM with several issues.
> > > >
> > > > Actually, an update :
> > > >
> > > > After using the workaround from this issue:
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9
> > > >
> > > > We've manage to get HA on Bare-Metal (*using the latest
> > > > rdo-release-liberty.rpm)
> > > >
> > > > That was the deployment command :
> > > >
> > > > openstack overcloud deploy --templates --control-scale 3 --compute-scale 1
> > > > --ceph-storage-scale 1 -e
> > > > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
> > > > -e /home/stack/network-environment.yaml -e
> > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
> > > > --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types
> > > > vxlan --timeout 90
> > > >
> > > > > Thanks,
> > > > > Omri.
> > > > >
> > > > > > I have 3 Controllers + 1 Compute (HA and Network Isolation). However I'm
> > > > > > having some issues logging on (maybe some keystone issue) and some issue
> > > > > > with openvswitch that I'm trying to address with Marius Cornea help.
> > > > > >
> > > > > > Regards,
> > > > > > Pedro Sousa
> > > > > >
> > > > > > On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com >
> > > > > > wrote:
> > > > > >
> > > > > > Hola rdoers,
> > > > > >
> > > > > > The plan is to GA RDO Liberty today (woot!), so I wanted to send out a
> > > > > > status update for the RDO Manager installer.
I would also like to > > > > gather > > > > > > feedback on how other community participants feel about that > > status as > > > > > > it relates to RDO Manager participating in the GA. That feedback > > can > > > > > > come as replies to this thread, or even better there is a packaging > > > > > > meeting on #rdo at 1500 UTC today and we can discuss it further > > then. > > > > > > > > > > > > tldr; > > > > > > RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on > > > > > > virtual hardware have been verified to work with GA bits, however > > bare > > > > > > metal installs have not yet been verified. > > > > > > > > > > > > I would like to start with some historical context here, as it > > seems we > > > > > > have picked up quite a few new active community members recently > > (again > > > > > > woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a > > > > > > successful end to end demo with a single controller and single > > compute > > > > > > node, and only by using a special delorean server pulling bits > > from a > > > > > > special github organization (rdo-management). We were able to get > > it > > > > > > consistently deploying **virtual** HA w/ ceph in CI by the middle > > of > > > > the > > > > > > Liberty upstream cycle. Then, due largely to the fact that there > > was > > > > > > nobody being paid to work full time on RDO Manager, and the people > > who > > > > > > were contributing in more or less "extra" time were getting swamped > > > > with > > > > > > releasing RHEL OSP 7, CI on the Kilo bits became mostly red with > > brief > > > > > > 24 hour periods where someone would spend a weekend fixing things > > only > > > > > > to have it break again early the following week. > > > > > > > > > > > > There have been many improvements in the recent weeks to this sad > > state > > > > > > of affairs. Firstly, we have upstreamed almost everything from the > > > > > > rdo-management github org directly into openstack projects. > > Secondly, > > > > > > there is a single source for delorean packages for both core > > openstack > > > > > > packages and the tripleo and ironic packages that make up RDO > > Manager. > > > > > > These two things may seem a bit trivial to a newcomer to the > > project, > > > > > > but they are actually fixes for the biggest cause of the RDO > > Manager > > > > > > Kilo CI breaking. I think with those two fixes (plus some work on > > > > > > upstream tripleo CI) we have set ourselves up to make steady > > forward > > > > > > progress rather than spending all our time troubleshooting complete > > > > > > breakages. (Although this is still openstack so complete breakages > > will > > > > > > still happen from time to time :p) > > > > > > > > > > > > Another very easy to overlook improvement over where we were at > > Kilo > > > > GA, > > > > > > is that we actually have all RDO Manager packages (minus a couple > > EPEL > > > > > > dep stragglers[1]) in the official RDO GA repo. When RDO Kilo > > GA'd, we > > > > > > did not even have everything officially packaged, rather only in > > our > > > > > > special delorean instance. > > > > > > > > > > > > All this leads to my opinion that RDO Manager should participate > > in the > > > > > > RDO GA. I am unconvinced that bare metal installs can not be made > > to > > > > > > work with some extra documentation or configuration changes. > > However, > > > > > > even if that is not the case, we are in a drastically better place > > than > > > > > > we were at the beginning of the Kilo cycle. 
> > > > > > That said, this is a community, and I would like to hear how other
> > > > > > community participants both from RDO in general and RDO Manager
> > > > > > specifically feel about this. Ideally, if someone thinks the RDO Manager
> > > > > > release should be blocked, there should be a BZ with the blocker flag
> > > > > > proposed so that there is actionable criteria to unblock the release.
> > > > > >
> > > > > > Thanks for all your hard work to get to this point, and lets keep it
> > > > > > rolling.
> > > > > >
> > > > > > -trown
> > > > > >
> > > > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541
> > > > > >
> > > > > > _______________________________________________
> > > > > > Rdo-list mailing list
> > > > > > Rdo-list at redhat.com
> > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > >
> > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > > >
> > > > > > _______________________________________________
> > > > > > Rdo-list mailing list
> > > > > > Rdo-list at redhat.com
> > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > > >
> > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > > _______________________________________________
> > > > > Rdo-list mailing list
> > > > > Rdo-list at redhat.com
> > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > >
> > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> > > > >

From ibravo at ltgfederal.com Thu Oct 22 18:49:55 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Thu, 22 Oct 2015 14:49:55 -0400
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com>
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com>
Message-ID: <56292FD3.3070301@ltgfederal.com>

Omri,

I was looking at the successful HA deployment on BM that you mentioned:

openstack overcloud deploy --templates --control-scale 3 --compute-scale 1
--ceph-storage-scale 1 -e
/usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
-e /home/stack/network-environment.yaml -e
/usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
--ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types
vxlan --timeout 90

Can you share the contents of your /home/stack/network-environment.yaml file?
I couldn't find this file in the undercloud machine, and want to make sure I
get a successful deployment. Feel free to replace any confidential
information.
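To show what I think I'm asking for: going by the tripleo network isolation
docs, I would guess the file is shaped roughly like the sketch below. Every
path, name and value here is a placeholder from my reading of the docs, not
your actual configuration -- which is exactly why I'd like to see a file from
a working deployment:

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/nic-configs/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/nic-configs/compute.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/nic-configs/ceph-storage.yaml

parameter_defaults:
  InternalApiNetCidr: 172.16.2.0/24
  StorageNetCidr: 172.16.1.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 10.1.1.0/24
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  TenantNetworkVlanID: 203
  ExternalNetworkVlanID: 100
  ExternalInterfaceDefaultRoute: 10.1.1.1
  DnsServers: ['8.8.8.8', '8.8.4.4']

Is that the right general shape, or am I missing required pieces (allocation
pools, etc.)?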
Thanks, IB Ignacio Bravo LTG Federal Inc On 10/21/2015 02:30 PM, Omri Hochman wrote: > > ----- Original Message ----- >> From: "Omri Hochman" >> To: "Pedro Sousa" >> Cc: "rdo-list" >> Sent: Wednesday, October 21, 2015 11:16:03 AM >> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA >> >> >> >> ----- Original Message ----- >>> From: "Pedro Sousa" >>> To: "John Trowbridge" >>> Cc: "rdo-list" >>> Sent: Wednesday, October 21, 2015 7:10:38 AM >>> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA >>> >>> Hi John, >>> >>> I've managed to install on baremetal following this howto: >>> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on >>> liberty) >> Hey Pedro, >> >> Are you using: yum install -y >> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm >> to get the latest RDO GA bits ? >> >> We're failing in overcloud deployment on BM with several issues. > Actually, an update : > > After using the workaround from this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9 > > We've manage to get HA on Bare-Metal (*using the latest rdo-release-liberty.rpm) > > That was the deployment command : > > openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types vxlan --timeout 90 > >> Thanks, >> Omri. >> >>> I have 3 Controllers + 1 Compute (HA and Network Isolation). However I'm >>> having some issues logging on (maybe some keystone issue) and some issue >>> with openvswitch that I'm trying to address with Marius Cornea help. >>> >>> Regards, >>> Pedro Sousa >>> >>> >>> On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com > >>> wrote: >>> >>> >>> Hola rdoers, >>> >>> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a >>> status update for the RDO Manager installer. I would also like to gather >>> feedback on how other community participants feel about that status as >>> it relates to RDO Manager participating in the GA. That feedback can >>> come as replies to this thread, or even better there is a packaging >>> meeting on #rdo at 1500 UTC today and we can discuss it further then. >>> >>> tldr; >>> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on >>> virtual hardware have been verified to work with GA bits, however bare >>> metal installs have not yet been verified. >>> >>> I would like to start with some historical context here, as it seems we >>> have picked up quite a few new active community members recently (again >>> woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a >>> successful end to end demo with a single controller and single compute >>> node, and only by using a special delorean server pulling bits from a >>> special github organization (rdo-management). We were able to get it >>> consistently deploying **virtual** HA w/ ceph in CI by the middle of the >>> Liberty upstream cycle. 
Then, due largely to the fact that there was >>> nobody being paid to work full time on RDO Manager, and the people who >>> were contributing in more or less "extra" time were getting swamped with >>> releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief >>> 24 hour periods where someone would spend a weekend fixing things only >>> to have it break again early the following week. >>> >>> There have been many improvements in the recent weeks to this sad state >>> of affairs. Firstly, we have upstreamed almost everything from the >>> rdo-management github org directly into openstack projects. Secondly, >>> there is a single source for delorean packages for both core openstack >>> packages and the tripleo and ironic packages that make up RDO Manager. >>> These two things may seem a bit trivial to a newcomer to the project, >>> but they are actually fixes for the biggest cause of the RDO Manager >>> Kilo CI breaking. I think with those two fixes (plus some work on >>> upstream tripleo CI) we have set ourselves up to make steady forward >>> progress rather than spending all our time troubleshooting complete >>> breakages. (Although this is still openstack so complete breakages will >>> still happen from time to time :p) >>> >>> Another very easy to overlook improvement over where we were at Kilo GA, >>> is that we actually have all RDO Manager packages (minus a couple EPEL >>> dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we >>> did not even have everything officially packaged, rather only in our >>> special delorean instance. >>> >>> All this leads to my opinion that RDO Manager should participate in the >>> RDO GA. I am unconvinced that bare metal installs can not be made to >>> work with some extra documentation or configuration changes. However, >>> even if that is not the case, we are in a drastically better place than >>> we were at the beginning of the Kilo cycle. >>> >>> That said, this is a community, and I would like to hear how other >>> community participants both from RDO in general and RDO Manager >>> specifically feel about this. Ideally, if someone thinks the RDO Manager >>> release should be blocked, there should be a BZ with the blocker flag >>> proposed so that there is actionable criteria to unblock the release. >>> >>> Thanks for all your hard work to get to this point, and lets keep it >>> rolling. 
>>>
>>> -trown
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541
>>>
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ohochman at redhat.com Thu Oct 22 19:47:10 2015
From: ohochman at redhat.com (Omri Hochman)
Date: Thu, 22 Oct 2015 15:47:10 -0400 (EDT)
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: <56292FD3.3070301@ltgfederal.com>
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com> <56292FD3.3070301@ltgfederal.com>
Message-ID: <740430205.57312899.1445543230219.JavaMail.zimbra@redhat.com>

Hey Ignacio,

I guess you can use my file as an example (I've replaced the internal IPs
with 'XX.XX.XX.XX'):
http://paste.openstack.org/show/477193/

Note: that configuration file is specific to my environment; it's built
around the native VLANs that are already pre-configured on the switch.
You will also need to create all the network-isolation configuration yaml
files under /home/stack/nic-configs

[ohochman at dhcp-1-111 nic-configs_new]$ ls
ceph-storage.yaml cinder-storage.yaml compute.yaml controller.yaml swift-storage.yaml

Try to follow:
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/network_isolation.html

Regards,
Omri.

----- Original Message -----
> From: "Ignacio Bravo"
> To: rdo-list at redhat.com
> Sent: Thursday, October 22, 2015 2:49:55 PM
> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
>
> Omri,
>
> I was looking at the successful HA deployment on BM that you mentioned:
>
> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1
> --ceph-storage-scale 1 -e
> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
> -e /home/stack/network-environment.yaml -e
> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
> --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types
> vxlan --timeout 90
>
> Can you share the contents of your /home/stack/network-environment.yaml file?
> I couldn't find this file in the undercloud machine, and want to make sure I
> get a successful deployment. Feel free to replace any confidential
> information.
> > > Thanks, > IB > > > Ignacio Bravo > LTG Federal Inc > > On 10/21/2015 02:30 PM, Omri Hochman wrote: > > > > ----- Original Message ----- > >> From: "Omri Hochman" > >> To: "Pedro Sousa" > >> Cc: "rdo-list" > >> Sent: Wednesday, October 21, 2015 11:16:03 AM > >> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > >> > >> > >> > >> ----- Original Message ----- > >>> From: "Pedro Sousa" > >>> To: "John Trowbridge" > >>> Cc: "rdo-list" > >>> Sent: Wednesday, October 21, 2015 7:10:38 AM > >>> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > >>> > >>> Hi John, > >>> > >>> I've managed to install on baremetal following this howto: > >>> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on > >>> liberty) > >> Hey Pedro, > >> > >> Are you using: yum install -y > >> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm > >> to get the latest RDO GA bits ? > >> > >> We're failing in overcloud deployment on BM with several issues. > > Actually, an update : > > > > After using the workaround from this issue: > > https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9 > > > > We've manage to get HA on Bare-Metal (*using the latest > > rdo-release-liberty.rpm) > > > > That was the deployment command : > > > > openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 > > --ceph-storage-scale 1 -e > > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > > -e /home/stack/network-environment.yaml -e > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > --ntp-server 10.5.26.10 --neutron-network-type vxlan > > --neutron-tunnel-types vxlan --timeout 90 > > > >> Thanks, > >> Omri. > >> > >>> I have 3 Controllers + 1 Compute (HA and Network Isolation). However I'm > >>> having some issues logging on (maybe some keystone issue) and some issue > >>> with openvswitch that I'm trying to address with Marius Cornea help. > >>> > >>> Regards, > >>> Pedro Sousa > >>> > >>> > >>> On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com > > >>> wrote: > >>> > >>> > >>> Hola rdoers, > >>> > >>> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a > >>> status update for the RDO Manager installer. I would also like to gather > >>> feedback on how other community participants feel about that status as > >>> it relates to RDO Manager participating in the GA. That feedback can > >>> come as replies to this thread, or even better there is a packaging > >>> meeting on #rdo at 1500 UTC today and we can discuss it further then. > >>> > >>> tldr; > >>> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on > >>> virtual hardware have been verified to work with GA bits, however bare > >>> metal installs have not yet been verified. > >>> > >>> I would like to start with some historical context here, as it seems we > >>> have picked up quite a few new active community members recently (again > >>> woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a > >>> successful end to end demo with a single controller and single compute > >>> node, and only by using a special delorean server pulling bits from a > >>> special github organization (rdo-management). We were able to get it > >>> consistently deploying **virtual** HA w/ ceph in CI by the middle of the > >>> Liberty upstream cycle. 
Then, due largely to the fact that there was > >>> nobody being paid to work full time on RDO Manager, and the people who > >>> were contributing in more or less "extra" time were getting swamped with > >>> releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief > >>> 24 hour periods where someone would spend a weekend fixing things only > >>> to have it break again early the following week. > >>> > >>> There have been many improvements in the recent weeks to this sad state > >>> of affairs. Firstly, we have upstreamed almost everything from the > >>> rdo-management github org directly into openstack projects. Secondly, > >>> there is a single source for delorean packages for both core openstack > >>> packages and the tripleo and ironic packages that make up RDO Manager. > >>> These two things may seem a bit trivial to a newcomer to the project, > >>> but they are actually fixes for the biggest cause of the RDO Manager > >>> Kilo CI breaking. I think with those two fixes (plus some work on > >>> upstream tripleo CI) we have set ourselves up to make steady forward > >>> progress rather than spending all our time troubleshooting complete > >>> breakages. (Although this is still openstack so complete breakages will > >>> still happen from time to time :p) > >>> > >>> Another very easy to overlook improvement over where we were at Kilo GA, > >>> is that we actually have all RDO Manager packages (minus a couple EPEL > >>> dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we > >>> did not even have everything officially packaged, rather only in our > >>> special delorean instance. > >>> > >>> All this leads to my opinion that RDO Manager should participate in the > >>> RDO GA. I am unconvinced that bare metal installs can not be made to > >>> work with some extra documentation or configuration changes. However, > >>> even if that is not the case, we are in a drastically better place than > >>> we were at the beginning of the Kilo cycle. > >>> > >>> That said, this is a community, and I would like to hear how other > >>> community participants both from RDO in general and RDO Manager > >>> specifically feel about this. Ideally, if someone thinks the RDO Manager > >>> release should be blocked, there should be a BZ with the blocker flag > >>> proposed so that there is actionable criteria to unblock the release. > >>> > >>> Thanks for all your hard work to get to this point, and lets keep it > >>> rolling. 
> >>>
> >>> That said, this is a community, and I would like to hear how other
> >>> community participants both from RDO in general and RDO Manager
> >>> specifically feel about this. Ideally, if someone thinks the RDO Manager
> >>> release should be blocked, there should be a BZ with the blocker flag
> >>> proposed so that there is actionable criteria to unblock the release.
> >>>
> >>> Thanks for all your hard work to get to this point, and lets keep it
> >>> rolling.
> >>>
> >>> -trown
> >>>
> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541
> >>>
> >>> _______________________________________________
> >>> Rdo-list mailing list
> >>> Rdo-list at redhat.com
> >>> https://www.redhat.com/mailman/listinfo/rdo-list
> >>>
> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com
> >>>
> >>> _______________________________________________
> >>> Rdo-list mailing list
> >>> Rdo-list at redhat.com
> >>> https://www.redhat.com/mailman/listinfo/rdo-list
> >>>
> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com
> >> _______________________________________________
> >> Rdo-list mailing list
> >> Rdo-list at redhat.com
> >> https://www.redhat.com/mailman/listinfo/rdo-list
> >>
> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
> >>
> >

From ibravo at ltgfederal.com Thu Oct 22 20:30:22 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Thu, 22 Oct 2015 16:30:22 -0400
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: <740430205.57312899.1445543230219.JavaMail.zimbra@redhat.com>
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com> <56292FD3.3070301@ltgfederal.com> <740430205.57312899.1445543230219.JavaMail.zimbra@redhat.com>
Message-ID:

Do I have to have network isolation for a test HA environment? Something like:

openstack overcloud deploy --templates --control-scale 3 --compute-scale 1
--ceph-storage-scale 1 -e
/usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
--ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types
vxlan --timeout 90

I wanted to leave network isolation for my last test (after the others have
passed) :) That way, I don't yet need the -e /home/stack/network-environment.yaml
flag or the modified nic-config files, and, most importantly, I can hold off
on reading the network isolation docs for now.

IB

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com

> On Oct 22, 2015, at 3:47 PM, Omri Hochman wrote:
>
> > Hey Ignacio,
> >
> > I guess you can use my file as an example (I've replaced the internal IPs
> > with 'XX.XX.XX.XX'):
> > http://paste.openstack.org/show/477193/
> >
> > Note: that configuration file is specific to my environment; it's built
> > around the native VLANs that are already pre-configured on the switch.
> > > > ----- Original Message ----- >> From: "Ignacio Bravo" >> To: rdo-list at redhat.com >> Sent: Thursday, October 22, 2015 2:49:55 PM >> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA >> >> Omri, >> >> I was looking at the successful HA deployment in BM that you commented = >> >> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 >> --ceph-storage-scale 1 -e >> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml >> -e /home/stack/network-environment.yaml -e >> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >> --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types >> vxlan --timeout 9 >> >> Can you share the contents of your/home/stack/network-environment.yaml file? >> I couldn't find this file in the undercloud machine, and want to make sure I >> get a successful deployment. Feel free to replace any confidential >> information. >> >> >> Thanks, >> IB >> >> >> Ignacio Bravo >> LTG Federal Inc >> >> On 10/21/2015 02:30 PM, Omri Hochman wrote: >>> >>> ----- Original Message ----- >>>> From: "Omri Hochman" >>>> To: "Pedro Sousa" >>>> Cc: "rdo-list" >>>> Sent: Wednesday, October 21, 2015 11:16:03 AM >>>> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA >>>> >>>> >>>> >>>> ----- Original Message ----- >>>>> From: "Pedro Sousa" >>>>> To: "John Trowbridge" >>>>> Cc: "rdo-list" >>>>> Sent: Wednesday, October 21, 2015 7:10:38 AM >>>>> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA >>>>> >>>>> Hi John, >>>>> >>>>> I've managed to install on baremetal following this howto: >>>>> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on >>>>> liberty) >>>> Hey Pedro, >>>> >>>> Are you using: yum install -y >>>> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm >>>> to get the latest RDO GA bits ? >>>> >>>> We're failing in overcloud deployment on BM with several issues. >>> Actually, an update : >>> >>> After using the workaround from this issue: >>> https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9 >>> >>> We've manage to get HA on Bare-Metal (*using the latest >>> rdo-release-liberty.rpm) >>> >>> That was the deployment command : >>> >>> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 >>> --ceph-storage-scale 1 -e >>> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml >>> -e /home/stack/network-environment.yaml -e >>> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >>> --ntp-server 10.5.26.10 --neutron-network-type vxlan >>> --neutron-tunnel-types vxlan --timeout 90 >>> >>>> Thanks, >>>> Omri. >>>> >>>>> I have 3 Controllers + 1 Compute (HA and Network Isolation). However I'm >>>>> having some issues logging on (maybe some keystone issue) and some issue >>>>> with openvswitch that I'm trying to address with Marius Cornea help. >>>>> >>>>> Regards, >>>>> Pedro Sousa >>>>> >>>>> >>>>> On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com > >>>>> wrote: >>>>> >>>>> >>>>> Hola rdoers, >>>>> >>>>> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a >>>>> status update for the RDO Manager installer. I would also like to gather >>>>> feedback on how other community participants feel about that status as >>>>> it relates to RDO Manager participating in the GA. 
That feedback can >>>>> come as replies to this thread, or even better there is a packaging >>>>> meeting on #rdo at 1500 UTC today and we can discuss it further then. >>>>> >>>>> tldr; >>>>> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on >>>>> virtual hardware have been verified to work with GA bits, however bare >>>>> metal installs have not yet been verified. >>>>> >>>>> I would like to start with some historical context here, as it seems we >>>>> have picked up quite a few new active community members recently (again >>>>> woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a >>>>> successful end to end demo with a single controller and single compute >>>>> node, and only by using a special delorean server pulling bits from a >>>>> special github organization (rdo-management). We were able to get it >>>>> consistently deploying **virtual** HA w/ ceph in CI by the middle of the >>>>> Liberty upstream cycle. Then, due largely to the fact that there was >>>>> nobody being paid to work full time on RDO Manager, and the people who >>>>> were contributing in more or less "extra" time were getting swamped with >>>>> releasing RHEL OSP 7, CI on the Kilo bits became mostly red with brief >>>>> 24 hour periods where someone would spend a weekend fixing things only >>>>> to have it break again early the following week. >>>>> >>>>> There have been many improvements in the recent weeks to this sad state >>>>> of affairs. Firstly, we have upstreamed almost everything from the >>>>> rdo-management github org directly into openstack projects. Secondly, >>>>> there is a single source for delorean packages for both core openstack >>>>> packages and the tripleo and ironic packages that make up RDO Manager. >>>>> These two things may seem a bit trivial to a newcomer to the project, >>>>> but they are actually fixes for the biggest cause of the RDO Manager >>>>> Kilo CI breaking. I think with those two fixes (plus some work on >>>>> upstream tripleo CI) we have set ourselves up to make steady forward >>>>> progress rather than spending all our time troubleshooting complete >>>>> breakages. (Although this is still openstack so complete breakages will >>>>> still happen from time to time :p) >>>>> >>>>> Another very easy to overlook improvement over where we were at Kilo GA, >>>>> is that we actually have all RDO Manager packages (minus a couple EPEL >>>>> dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we >>>>> did not even have everything officially packaged, rather only in our >>>>> special delorean instance. >>>>> >>>>> All this leads to my opinion that RDO Manager should participate in the >>>>> RDO GA. I am unconvinced that bare metal installs can not be made to >>>>> work with some extra documentation or configuration changes. However, >>>>> even if that is not the case, we are in a drastically better place than >>>>> we were at the beginning of the Kilo cycle. >>>>> >>>>> That said, this is a community, and I would like to hear how other >>>>> community participants both from RDO in general and RDO Manager >>>>> specifically feel about this. Ideally, if someone thinks the RDO Manager >>>>> release should be blocked, there should be a BZ with the blocker flag >>>>> proposed so that there is actionable criteria to unblock the release. >>>>> >>>>> Thanks for all your hard work to get to this point, and lets keep it >>>>> rolling. 
>>>>> >>>>> -trown >>>>> >>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541 >>>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> >>>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ohochman at redhat.com Thu Oct 22 20:49:40 2015 From: ohochman at redhat.com (Omri Hochman) Date: Thu, 22 Oct 2015 16:49:40 -0400 (EDT) Subject: [Rdo-list] RDO Manager status for Liberty GA In-Reply-To: References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com> <56292FD3.3070301@ltgfederal.com> <740430205.57312899.1445543230219.JavaMail.zimbra@redhat.com> Message-ID: <1851950512.57339101.1445546980687.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Ignacio Bravo" > To: "Omri Hochman" > Cc: rdo-list at redhat.com > Sent: Thursday, October 22, 2015 4:30:22 PM > Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > > Do I have to have network isolation for a test HA environment? Something > like: > > openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 > --ceph-storage-scale 1 > -e > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > -e > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types > vxlan --timeout 90 > > I wanted to leave network isolation for my last test (after the others have > passed) :) > That way, I don?t require yet the > > -e /home/stack/network-environment.yaml > > and the modifications of the nic-config files, and most importantly, reading > the network isolation part just yet. Sure, You should be able to get successful BM HA deployment without doing network-isolation, and I think the deployment command you suggested here ^^ should be good for that. Omri. > > > IB > > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > > > On Oct 22, 2015, at 3:47 PM, Omri Hochman wrote: > > > > Hey Ignacio, > > > > I guess you can use my file as example ( I've switched the internal IPs > > with 'XX.XX.XX.XX' ) > > http://paste.openstack.org/show/477193/ > > > > Note: that configuration file is fit to my environment and It's according > > the native vlans that already pre-configured on the switch. 
> > you will also need to create all the network-isolation configuration yaml > > files under /home/stack/nic-configs > > > > [ohochman at dhcp-1-111 nic-configs_new]$ ls > > ceph-storage.yaml cinder-storage.yaml compute.yaml controller.yaml > > swift-storage.yaml > > > > Try to according : > > http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/network_isolation.html > > > > Regards, > > Omri. > > > > > > > > ----- Original Message ----- > >> From: "Ignacio Bravo" > >> To: rdo-list at redhat.com > >> Sent: Thursday, October 22, 2015 2:49:55 PM > >> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > >> > >> Omri, > >> > >> I was looking at the successful HA deployment in BM that you commented = > >> > >> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 > >> --ceph-storage-scale 1 -e > >> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > >> -e /home/stack/network-environment.yaml -e > >> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > >> --ntp-server 10.5.26.10 --neutron-network-type vxlan > >> --neutron-tunnel-types > >> vxlan --timeout 9 > >> > >> Can you share the contents of your/home/stack/network-environment.yaml > >> file? > >> I couldn't find this file in the undercloud machine, and want to make sure > >> I > >> get a successful deployment. Feel free to replace any confidential > >> information. > >> > >> > >> Thanks, > >> IB > >> > >> > >> Ignacio Bravo > >> LTG Federal Inc > >> > >> On 10/21/2015 02:30 PM, Omri Hochman wrote: > >>> > >>> ----- Original Message ----- > >>>> From: "Omri Hochman" > >>>> To: "Pedro Sousa" > >>>> Cc: "rdo-list" > >>>> Sent: Wednesday, October 21, 2015 11:16:03 AM > >>>> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > >>>> > >>>> > >>>> > >>>> ----- Original Message ----- > >>>>> From: "Pedro Sousa" > >>>>> To: "John Trowbridge" > >>>>> Cc: "rdo-list" > >>>>> Sent: Wednesday, October 21, 2015 7:10:38 AM > >>>>> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA > >>>>> > >>>>> Hi John, > >>>>> > >>>>> I've managed to install on baremetal following this howto: > >>>>> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ (based on > >>>>> liberty) > >>>> Hey Pedro, > >>>> > >>>> Are you using: yum install -y > >>>> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm > >>>> to get the latest RDO GA bits ? > >>>> > >>>> We're failing in overcloud deployment on BM with several issues. > >>> Actually, an update : > >>> > >>> After using the workaround from this issue: > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1271289#c9 > >>> > >>> We've manage to get HA on Bare-Metal (*using the latest > >>> rdo-release-liberty.rpm) > >>> > >>> That was the deployment command : > >>> > >>> openstack overcloud deploy --templates --control-scale 3 --compute-scale > >>> 1 > >>> --ceph-storage-scale 1 -e > >>> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > >>> -e /home/stack/network-environment.yaml -e > >>> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > >>> --ntp-server 10.5.26.10 --neutron-network-type vxlan > >>> --neutron-tunnel-types vxlan --timeout 90 > >>> > >>>> Thanks, > >>>> Omri. > >>>> > >>>>> I have 3 Controllers + 1 Compute (HA and Network Isolation). 
> >>>>> However, I'm having some issues logging on (maybe some keystone issue) and some issue with openvswitch that I'm trying to address with Marius Cornea's help.
> >>>>>
> >>>>> Regards,
> >>>>> Pedro Sousa
> >>>>>
> >>>>> On Wed, Oct 21, 2015 at 11:54 AM, John Trowbridge < trown at redhat.com > wrote:
> >>>>>
> >>>>> Hola rdoers,
> >>>>>
> >>>>> The plan is to GA RDO Liberty today (woot!), so I wanted to send out a status update for the RDO Manager installer. I would also like to gather feedback on how other community participants feel about that status as it relates to RDO Manager participating in the GA. That feedback can come as replies to this thread, or even better there is a packaging meeting on #rdo at 1500 UTC today and we can discuss it further then.
> >>>>>
> >>>>> tldr;
> >>>>> RDO Manager installs with 3 controllers, 1 compute, and 1 ceph on virtual hardware have been verified to work with GA bits; however, bare metal installs have not yet been verified.
> >>>>>
> >>>>> I would like to start with some historical context here, as it seems we have picked up quite a few new active community members recently (again woot!). When RDO Kilo GA'd, RDO Manager was barely capable of a successful end-to-end demo with a single controller and a single compute node, and only by using a special delorean server pulling bits from a special github organization (rdo-management). We were able to get it consistently deploying **virtual** HA w/ ceph in CI by the middle of the Liberty upstream cycle. Then, due largely to the fact that there was nobody being paid to work full time on RDO Manager, and the people who were contributing in more or less "extra" time were getting swamped with releasing RHEL OSP 7, CI on the Kilo bits became mostly red, with brief 24-hour periods where someone would spend a weekend fixing things only to have it break again early the following week.
> >>>>>
> >>>>> There have been many improvements in recent weeks to this sad state of affairs. Firstly, we have upstreamed almost everything from the rdo-management github org directly into openstack projects. Secondly, there is a single source for delorean packages for both core openstack packages and the tripleo and ironic packages that make up RDO Manager. These two things may seem a bit trivial to a newcomer to the project, but they are actually fixes for the biggest cause of the RDO Manager Kilo CI breaking. I think with those two fixes (plus some work on upstream tripleo CI) we have set ourselves up to make steady forward progress rather than spending all our time troubleshooting complete breakages. (Although this is still openstack, so complete breakages will still happen from time to time :p)
> >>>>>
> >>>>> Another very easy to overlook improvement over where we were at Kilo GA is that we actually have all RDO Manager packages (minus a couple EPEL dep stragglers[1]) in the official RDO GA repo. When RDO Kilo GA'd, we did not even have everything officially packaged, rather only in our special delorean instance.
> >>>>>
> >>>>> All this leads to my opinion that RDO Manager should participate in the RDO GA. I am unconvinced that bare metal installs cannot be made to work with some extra documentation or configuration changes. However, even if that is not the case, we are in a drastically better place than we were at the beginning of the Kilo cycle.
> >>>>>
> >>>>> That said, this is a community, and I would like to hear how other community participants, both from RDO in general and RDO Manager specifically, feel about this. Ideally, if someone thinks the RDO Manager release should be blocked, there should be a BZ with the blocker flag proposed so that there are actionable criteria to unblock the release.
> >>>>>
> >>>>> Thanks for all your hard work to get to this point, and let's keep it rolling.
> >>>>>
> >>>>> -trown
> >>>>>
> >>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1273541

From ibravo at ltgfederal.com Fri Oct 23 03:05:30 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Thu, 22 Oct 2015 23:05:30 -0400
Subject: [Rdo-list] RDO Manager status for Liberty GA
In-Reply-To: <1851950512.57339101.1445546980687.JavaMail.zimbra@redhat.com>
References: <56276EE2.6010109@redhat.com> <869559786.56514527.1445440563542.JavaMail.zimbra@redhat.com> <1081600108.56606332.1445452228637.JavaMail.zimbra@redhat.com> <56292FD3.3070301@ltgfederal.com> <740430205.57312899.1445543230219.JavaMail.zimbra@redhat.com> <1851950512.57339101.1445546980687.JavaMail.zimbra@redhat.com>
Message-ID:

I just finalized my first successful install of RDO Manager. All I can say is wow! What a beauty! It just does everything auto-magically. Impressive.

I will need to go into the network isolation and multiple Ceph nodes now, but it is definitely amazing.

Good work, and thanks to ohochman, sasha21 and trown for all the long irc chats.

IB

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com

> On Oct 22, 2015, at 4:49 PM, Omri Hochman wrote:
>
> ----- Original Message -----
>> From: "Ignacio Bravo"
>> To: "Omri Hochman"
>> Cc: rdo-list at redhat.com
>> Sent: Thursday, October 22, 2015 4:30:22 PM
>> Subject: Re: [Rdo-list] RDO Manager status for Liberty GA
>>
>> Do I have to have network isolation for a test HA environment?
>> Something like:
>>
>> openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server 10.5.26.10 --neutron-network-type vxlan --neutron-tunnel-types vxlan --timeout 90
>>
>> [snip]
>
> Sure,
>
> You should be able to get a successful BM HA deployment without doing network isolation, and I think the deployment command you suggested here ^^ should be good for that.
>
> Omri.
>
> [snip]
From alessandro at namecheap.com Fri Oct 23 08:30:02 2015
From: alessandro at namecheap.com (Alessandro Vozza)
Date: Fri, 23 Oct 2015 10:30:02 +0200
Subject: [Rdo-list] Failing overcloud deployment, missing /var/lib/os-collect-config/local-data
Message-ID: <0B88AF5A-B2DE-4F6B-99E8-84E247318DD2@namecheap.com>

RDO Liberty GA, in KVM (VMs prepared by me, not instack):

undercloud: 2 nics (10.128.1.5 - NAT'ed network for access; 10.128.0.5 - isolated network for pxe/provisioning. On the latter 10.128.0.1 is the default gw)
overcloud-nodeX: (10.128.0.x - pxe; 192.168.178.x - DHCP, my home router, for external access)

deployed undercloud successfully with:

[DEFAULT]
image_path = .
local_ip = 10.128.0.5/24
local_interface = eth1
dhcp_start = 10.128.0.30
dhcp_end = 10.128.0.200
network_cidr = 10.128.0.0/24
network_gateway = 10.128.0.1
inspection_interface = br-ctlplane
inspection_iprange = 10.128.0.20,10.128.0.29
inspection_runbench = false
undercloud_debug = true
enable_tuskar = false
enable_tempest = false
[auth]

built liberty images, loaded them, added two nodes, deploying with:

# openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan

The nodes get deployed, but os-collect-config fails with:

Oct 23 08:20:08 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 08:20:08.194 16422 WARNING os_collect_config.ec2 [-] ('Connection aborted.', error(110, 'Connection timed out'))
Oct 23 08:20:08 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 08:20:08.194 16422 WARNING os-collect-config [-] Source [ec2] Unavailable.
Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 08:20:09.449 16422 WARNING os_collect_config.heat [-] No auth_url configured.
Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 08:20:09.450 16422 WARNING os_collect_config.request [-] No metadata_url configured.
Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 08:20:09.450 16422 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 08:20:09.451 16422 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 08:20:09.451 16422 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])

on that node (controller):

[root at overcloud-controller-0 /]# ip a ; ip r ; more /etc/resolv.conf
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:63:69:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.128.0.42/24 brd 10.128.0.255 scope global dynamic eth0
       valid_lft 79203sec preferred_lft 79203sec
    inet6 fe80::5054:ff:fe63:69d5/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:82:a9:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.178.128/24 brd 192.168.178.255 scope global dynamic eth1
       valid_lft 856804sec preferred_lft 856804sec
    inet6 4006:8241:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1722sec preferred_lft 0sec
    inet6 4006:15:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1722sec preferred_lft 0sec
    inet6 4006:2e9b:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1722sec preferred_lft 0sec
    inet6 4006:3afc:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1720sec preferred_lft 0sec
    inet6 4006:aa7a:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1720sec preferred_lft 0sec
    inet6 4006:973b:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1720sec preferred_lft 0sec
    inet6 4006:4d9d:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1565sec preferred_lft 0sec
    inet6 4006:fa40:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1563sec preferred_lft 0sec
    inet6 4006:3c22:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1563sec preferred_lft 0sec
    inet6 4006:59ed:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1562sec preferred_lft 0sec
    inet6 4001:ba0e:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated dynamic
       valid_lft 1522sec preferred_lft 0sec
    inet6 fd00::1:5054:ff:fe82:a9a0/64 scope global dynamic
       valid_lft 7040sec preferred_lft 7040sec
    inet6 2001:984:30d0:1:5054:ff:fe82:a9a0/64 scope global dynamic
       valid_lft 4628sec preferred_lft 1028sec
    inet6 fe80::5054:ff:fe82:a9a0/64 scope link
       valid_lft forever preferred_lft forever
default via 192.168.178.1 dev eth1
10.128.0.0/24 dev eth0 proto kernel scope link src 10.128.0.42
169.254.169.254 via 10.128.0.5 dev eth0 proto static
192.168.178.0/24 dev eth1 proto kernel scope link src 192.168.178.128
; generated by /usr/sbin/dhclient-script
search fritz.box
nameserver 192.168.178.1

What am I doing wrong here? Why is the file /var/lib/os-collect-config/local-data not there?
there's a bunch of other files there:

[root at overcloud-controller-0 /]# ls /var/lib/os-collect-config/
cfn.json
cfn.json.last
cfn.json.orig
heat_local.json
heat_local.json.last
heat_local.json.orig
os_config_files.json
overcloud-Controller-yg67rrm4rj6i-0-uos36gscbtmp-NetworkConfig-ackqkwly5ijg-OsNetConfigImpl-hlpyldxrk4d4.json
overcloud-Controller-yg67rrm4rj6i-0-uos36gscbtmp-NetworkConfig-ackqkwly5ijg-OsNetConfigImpl-hlpyldxrk4d4.json.last
overcloud-Controller-yg67rrm4rj6i-0-uos36gscbtmp-NetworkConfig-ackqkwly5ijg-OsNetConfigImpl-hlpyldxrk4d4.json.orig

Thanks, help appreciated!

P.S. Keep up the awesome work!

Alessandro Vozza
alessandro at namecheap.com
+31643197789

From ibravo at ltgfederal.com Fri Oct 23 11:46:17 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Fri, 23 Oct 2015 07:46:17 -0400
Subject: [Rdo-list] Failing overcloud deployment, missing /var/lib/os-collect-config/local-data
In-Reply-To: <0B88AF5A-B2DE-4F6B-99E8-84E247318DD2@namecheap.com>
References: <0B88AF5A-B2DE-4F6B-99E8-84E247318DD2@namecheap.com>
Message-ID: <11188B76-3EAA-4017-AE08-EC6527915FBE@ltgfederal.com>

Alessandro,

I had a lot of problems when using an IP range other than the defaults. We realized that the default IP ranges were hardcoded in a couple of the configuration files, and thus customizing the ranges would fail.

If you already deployed an undercloud with the default ranges on the same hardware and it worked, then that would certainly prove this issue.

One of the places I recall off the top of my head was a firewall port being enabled only for the 192.0.2.0 network. Check your iptables to see this.

IB

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com

> On Oct 23, 2015, at 4:30 AM, Alessandro Vozza wrote:
>
> [snip]
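To make the iptables check concrete, something along these lines would do it — a sketch only; the exact files worth grepping depend on the undercloud install, so treat the paths as examples:

# on the undercloud: any firewall rules still tied to the default 192.0.2.0/24 range?
sudo iptables -S | grep 192.0.2

# ... and any other generated config with that range baked in?
sudo grep -rln "192.0.2." /etc/sysconfig /etc/neutron /etc/nova 2>/dev/null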
From guillaume.pichard at sogeti.com Fri Oct 23 12:15:11 2015
From: guillaume.pichard at sogeti.com (PICHARD, Guillaume)
Date: Fri, 23 Oct 2015 12:15:11 +0000
Subject: [Rdo-list] Error installing Sahara using RDO Liberty
Message-ID:

Hello everyone,

I wanted to test packstack RDO Liberty. I followed the "getting started" on the RDO website. I used the same packstack arguments as when I used Kilo (packstack --allinone --gen-answer-file=/root/packstack.cfg --provision-demo=n --provision-ovs-bridge=n --os-sahara-install=y --os-heat-install=y --os-trove-install=y) but with Liberty I get the following error:

ERROR : Error appeared during Puppet run: 10.223.186.113_keystone.pp
Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_keystone.pp:205 on node devstack.test.ebd
You will find full trace in log /var/tmp/packstack/20151023-115457-kkzCBm/manifests/10.223.186.113_keystone.pp.log

Full trace is:

Error: NetworkManager is not running.
Warning: Scope(Class[Glance::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Glance::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
Warning: Scope(Class[Glance::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
Warning: Scope(Class[Cinder::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Cinder::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
Warning: Scope(Class[Cinder::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The public_address parameter is deprecated, use ec2_public_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The internal_address parameter is deprecated, use ec2_internal_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The admin_address parameter is deprecated, use ec2_admin_url instead.
Warning: Scope(Class[Neutron::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Neutron::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
Warning: Scope(Class[Neutron::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
Warning: Scope(Class[Swift::Keystone::Auth]): The public_address parameter is deprecated, use public_url and public_url_s3 instead.
Warning: Scope(Class[Ceilometer::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Ceilometer::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
Warning: Scope(Class[Ceilometer::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_keystone.pp:205 on node devstack.test.ebd
Wrapped exception: Invalid parameter public_address
Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_keystone.pp:205 on node devstack.test.ebd

Did something change with packstack Liberty?

Thanks for your help.

Regards,
Guillaume.

This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.

From alessandro at namecheap.com Fri Oct 23 12:55:59 2015
From: alessandro at namecheap.com (Alessandro Vozza)
Date: Fri, 23 Oct 2015 14:55:59 +0200
Subject: [Rdo-list] Failing overcloud deployment, missing /var/lib/os-collect-config/local-data
In-Reply-To: <11188B76-3EAA-4017-AE08-EC6527915FBE@ltgfederal.com>
References: <0B88AF5A-B2DE-4F6B-99E8-84E247318DD2@namecheap.com> <11188B76-3EAA-4017-AE08-EC6527915FBE@ltgfederal.com>
Message-ID: <9194A109-B9C5-4061-A012-483E5942ED29@namecheap.com>

Thanks Ignacio,

I checked, but the only reference I could find for that 192.0.2.0 network is on the undercloud:

./sysconfig/iptables:-A POSTROUTING -s 192.0.2.0/24 -o eth0 -j MASQUERADE

but I'm not using masquerading (the provisioning network has a default gw that is not the undercloud).

I'm trying to understand what /usr/bin/os-collect-config does and how its configuration files are created (and why local-data is missing). Can you do me a favour and check whether a successfully deployed overcloud has /var/lib/os-collect-config/local-data?

thanks in advance

Alessandro Vozza
alessandro at namecheap.com
+31643197789

> On 23 Oct 2015, at 13:46, Ignacio Bravo wrote:
>
> Alessandro,
>
> I had a lot of problems when using an IP range other than the defaults. We realized that the default IP ranges were hardcoded in a couple of the configuration files, and thus customizing the ranges would fail.
>
> If you already deployed an undercloud with the default ranges on the same hardware and it worked, then that would certainly prove this issue.
>
> One of the places I recall off the top of my head was a firewall port being enabled only for the 192.0.2.0 network. Check your iptables to see this.
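A concrete way to poke at the os-collect-config question is something like this — a sketch, assuming the stock paths (/etc/os-collect-config.conf and /var/lib/os-collect-config); the collector names are the ones appearing in the warnings above:

# which sources (ec2, cfn, heat_local, local, request) this node is polling
sudo cat /etc/os-collect-config.conf

# run one collection cycle by hand and print whatever it manages to fetch
sudo os-collect-config --one-time --print

# cached results of earlier successful collections
sudo ls -l /var/lib/os-collect-config/

If the collectors behave as those warnings suggest, local-data is only populated when something drops JSON there explicitly; the warning about it is probably a red herring, and the "Source [ec2] Unavailable" timeout on 169.254.169.254 is the part worth chasing.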
From marius at remote-lab.net Sat Oct 24 11:37:41 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sat, 24 Oct 2015 13:37:41 +0200
Subject: [Rdo-list] HA with network isolation on virt howto
In-Reply-To:
References:
Message-ID:

On Thu, Oct 22, 2015 at 4:00 PM, Pedro Sousa wrote:
> Hi Marius,
>
> I successfully managed to deploy overcloud with
> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
> instead of delorean repos. I also built the images based on liberty. Thanks :)
>
> A side note, is there a way to disable cinder when deploying? Because I get
> "Error: Unable to retrieve volume limit information." horizon errors.

Hi Pedro,

This is addressed by BZ#1272572. Please check the workaround there.
https://bugzilla.redhat.com/show_bug.cgi?id=1272572

> On Wed, Oct 21, 2015 at 6:41 PM, Marius Cornea wrote:
>> It's definitely a bug, the deployment shouldn't pass without
>> completing keystone init. What's the content of your
>> network-environment.yaml?
>>
>> I'm not sure if this relates but it's worth trying an installation
>> with the GA bits, the docs are being updated to describe the steps.
>> Some useful notes can be found here:
>> https://etherpad.openstack.org/p/RDO-Manager_liberty
>>
>> trown ? mcornea: the important bit is to use `yum install -y
>> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm`
>> for undercloud repos, and `export RDO_RELEASE='liberty'` for image build
>>
>> On Wed, Oct 21, 2015 at 6:54 PM, Pedro Sousa wrote:
>> > Yes, I've done that already, however it never runs keystone init. Is it
>> > something wrong in my deployment command "openstack overcloud deploy" or
>> > do you think it's a bug/conf issue?
>> > >> > Thanks >> > >> > On Wed, Oct 21, 2015 at 5:50 PM, Marius Cornea >> > wrote: >> >> >> >> To delete the overcloud you need to run heat stack-delete overcloud >> >> and wait until it finishes(check heat stack-list) >> >> >> >> On Wed, Oct 21, 2015 at 6:29 PM, Pedro Sousa wrote: >> >> > You're right, I didn't get that output, keystone init didn't run: >> >> > >> >> > $ openstack overcloud deploy --control-scale 3 --compute-scale 1 >> >> > --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ >> >> > -e >> >> > ~/the-cloud/environments/puppet-pacemaker.yaml -e >> >> > ~/the-cloud/environments/network-isolation.yaml -e >> >> > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e >> >> > ~/the-cloud/environments/network-environment.yaml --control-flavor >> >> > controller --compute-flavor compute >> >> > >> >> > Deploying templates in the directory /home/stack/the-cloud >> >> > Overcloud Endpoint: http://192.168.174.35:5000/v2.0/ >> >> > Overcloud Deployed >> >> > >> >> > >> >> > In fact I have some mysql errors in my controllers, see below. Is >> >> > there >> >> > a >> >> > way to redeploy? Because I've run "openstack overcloud deploy" and >> >> > nothing >> >> > happens. >> >> > >> >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: >> >> > [2015-10-21 >> >> > 14:21:50,903] (heat-config) [INFO] Error: Could not prefetch >> >> > mysql_user >> >> > provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT >> >> > CONCAT(User, >> >> > '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): >> >> > Can't >> >> > connect to local MySQL server through socket >> >> > '/var/lib/mysql/mysql.sock' >> >> > (2) >> >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]: >> >> > Error: >> >> > Could not prefetch mysql_database provider 'mysql': Execution of >> >> > '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): >> >> > Can't >> >> > connect to local MySQL server through socket >> >> > '/var/lib/mysql/mysql.sock' >> >> > (2) >> >> > >> >> > Thanks >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea >> >> > >> >> > wrote: >> >> >> >> >> >> I believe the keystone init failed. It is done in a postconfig step >> >> >> via ssh on the public VIP(see lines 3-13 in >> >> >> https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get >> >> >> that kind of output for the deploy command? >> >> >> >> >> >> Try also journalctl -l -u os-collect-config | grep -i error on the >> >> >> controller nodes, it should indicate if something went wrong during >> >> >> deployment. >> >> >> >> >> >> On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa >> >> >> wrote: >> >> >> > Hi Marius, >> >> >> > >> >> >> > your tip worked fine thanks, bridges seems to be correctly >> >> >> > created, >> >> >> > however >> >> >> > I still cannot login, seems some keystone problem: >> >> >> > >> >> >> > #keystone --debug tenant-list >> >> >> > >> >> >> > DEBUG:keystoneclient.auth.identity.v2:Making authentication >> >> >> > request >> >> >> > to >> >> >> > http://192.168.174.35:5000/v2.0/tokens >> >> >> > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP >> >> >> > connection >> >> >> > (1): 192.168.174.35 >> >> >> > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens >> >> >> > HTTP/1.1" >> >> >> > 401 114 >> >> >> > DEBUG:keystoneclient.session:Request returned failure status: 401 >> >> >> > DEBUG:keystoneclient.v2_0.client:Authorization Failed. 
>> >> >> > The request you have made requires authentication. (HTTP 401) >> >> >> > (Request-ID: >> >> >> > req-accee3b3-b552-4c6b-ac39-d0791b5c1390) >> >> >> > >> >> >> > Did you had this issue when deployed on virtual? >> >> >> > >> >> >> > Regards >> >> >> > >> >> >> > >> >> >> > >> >> >> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea >> >> >> > >> >> >> > wrote: >> >> >> >> >> >> >> >> Here's an adjusted controller.yaml which disables DHCP on the >> >> >> >> first >> >> >> >> nic: enp1s0f0 so it doesn't get an IP address >> >> >> >> http://paste.openstack.org/show/476981/ >> >> >> >> >> >> >> >> Please note that this assumes that your overcloud nodes are PXE >> >> >> >> booting on the 2nd NIC(basically disabling the 1st nic) >> >> >> >> >> >> >> >> Given your setup(I'm doing some assumptions here so I might be >> >> >> >> wrong) >> >> >> >> I would use the 1st nic for PXE booting and provisioning network >> >> >> >> and >> >> >> >> 2nd nic for running the isolated networks with this kind of >> >> >> >> template: >> >> >> >> http://paste.openstack.org/show/476986/ >> >> >> >> >> >> >> >> Let me know if it works for you. >> >> >> >> >> >> >> >> Thanks, >> >> >> >> Marius >> >> >> >> >> >> >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa >> >> >> >> wrote: >> >> >> >> > Hi, >> >> >> >> > >> >> >> >> > here you go. >> >> >> >> > >> >> >> >> > Regards, >> >> >> >> > Pedro Sousa >> >> >> >> > >> >> >> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea >> >> >> >> > >> >> >> >> > wrote: >> >> >> >> >> >> >> >> >> >> Hi Pedro, >> >> >> >> >> >> >> >> >> >> One issue I can quickly see is that br-ex has assigned the >> >> >> >> >> same >> >> >> >> >> IP >> >> >> >> >> address as enp1s0f0. Can you post the nic templates you used >> >> >> >> >> for >> >> >> >> >> deployment? >> >> >> >> >> >> >> >> >> >> 2: enp1s0f0: mtu 1500 qdisc >> >> >> >> >> mq >> >> >> >> >> state >> >> >> >> >> UP qlen 1000 >> >> >> >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global >> >> >> >> >> dynamic >> >> >> >> >> enp1s0f0 >> >> >> >> >> 9: br-ex: mtu 1500 qdisc >> >> >> >> >> noqueue >> >> >> >> >> state >> >> >> >> >> UNKNOWN >> >> >> >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global >> >> >> >> >> br-ex >> >> >> >> >> >> >> >> >> >> Thanks, >> >> >> >> >> Marius >> >> >> >> >> >> >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa >> >> >> >> >> >> >> >> >> >> wrote: >> >> >> >> >> > Hi Marius, >> >> >> >> >> > >> >> >> >> >> > I've followed your howto and managed to get overcloud >> >> >> >> >> > deployed >> >> >> >> >> > in >> >> >> >> >> > HA, >> >> >> >> >> > thanks. However I cannot login to it (via CLI or Horizon) : >> >> >> >> >> > >> >> >> >> >> > ERROR (Unauthorized): The request you have made requires >> >> >> >> >> > authentication. 
>> >> >> >> >> > (HTTP 401) (Request-ID: >> >> >> >> >> > req-96310dfa-3d64-4f05-966f-f4d92702e2b1) >> >> >> >> >> > >> >> >> >> >> > So I rebooted the controllers and now I cannot login through >> >> >> >> >> > Provisioning >> >> >> >> >> > network, seems some openvswitch bridge conf problem, heres >> >> >> >> >> > my >> >> >> >> >> > conf: >> >> >> >> >> > >> >> >> >> >> > # ip a >> >> >> >> >> > 1: lo: mtu 65536 qdisc noqueue state >> >> >> >> >> > UNKNOWN >> >> >> >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> >> >> >> >> > inet 127.0.0.1/8 scope host lo >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet6 ::1/128 scope host >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 2: enp1s0f0: mtu 1500 >> >> >> >> >> > qdisc >> >> >> >> >> > mq >> >> >> >> >> > state >> >> >> >> >> > UP >> >> >> >> >> > qlen 1000 >> >> >> >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global >> >> >> >> >> > dynamic >> >> >> >> >> > enp1s0f0 >> >> >> >> >> > valid_lft 84562sec preferred_lft 84562sec >> >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 3: enp1s0f1: mtu 1500 >> >> >> >> >> > qdisc >> >> >> >> >> > mq >> >> >> >> >> > master >> >> >> >> >> > ovs-system state UP qlen 1000 >> >> >> >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 4: ovs-system: mtu 1500 qdisc noop >> >> >> >> >> > state >> >> >> >> >> > DOWN >> >> >> >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > 5: br-tun: mtu 1500 qdisc noop state >> >> >> >> >> > DOWN >> >> >> >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > 6: vlan20: mtu 1500 qdisc >> >> >> >> >> > noqueue >> >> >> >> >> > state >> >> >> >> >> > UNKNOWN >> >> >> >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global >> >> >> >> >> > vlan20 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global >> >> >> >> >> > vlan20 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 7: vlan40: mtu 1500 qdisc >> >> >> >> >> > noqueue >> >> >> >> >> > state >> >> >> >> >> > UNKNOWN >> >> >> >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global >> >> >> >> >> > vlan40 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 8: vlan174: mtu 1500 qdisc >> >> >> >> >> > noqueue >> >> >> >> >> > state >> >> >> >> >> > UNKNOWN >> >> >> >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global >> >> >> >> >> > vlan174 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global >> >> >> >> >> > vlan174 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link >> >> >> >> >> > 
valid_lft forever preferred_lft forever >> >> >> >> >> > 9: br-ex: mtu 1500 qdisc >> >> >> >> >> > noqueue >> >> >> >> >> > state >> >> >> >> >> > UNKNOWN >> >> >> >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global >> >> >> >> >> > br-ex >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 10: vlan50: mtu 1500 qdisc >> >> >> >> >> > noqueue >> >> >> >> >> > state >> >> >> >> >> > UNKNOWN >> >> >> >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 11: vlan30: mtu 1500 qdisc >> >> >> >> >> > noqueue >> >> >> >> >> > state >> >> >> >> >> > UNKNOWN >> >> >> >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global >> >> >> >> >> > vlan30 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global >> >> >> >> >> > vlan30 >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link >> >> >> >> >> > valid_lft forever preferred_lft forever >> >> >> >> >> > 12: br-int: mtu 1500 qdisc noop state >> >> >> >> >> > DOWN >> >> >> >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > # ovs-vsctl show >> >> >> >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 >> >> >> >> >> > Bridge br-ex >> >> >> >> >> > Port br-ex >> >> >> >> >> > Interface br-ex >> >> >> >> >> > type: internal >> >> >> >> >> > Port "enp1s0f1" >> >> >> >> >> > Interface "enp1s0f1" >> >> >> >> >> > Port "vlan40" >> >> >> >> >> > tag: 40 >> >> >> >> >> > Interface "vlan40" >> >> >> >> >> > type: internal >> >> >> >> >> > Port "vlan20" >> >> >> >> >> > tag: 20 >> >> >> >> >> > Interface "vlan20" >> >> >> >> >> > type: internal >> >> >> >> >> > Port phy-br-ex >> >> >> >> >> > Interface phy-br-ex >> >> >> >> >> > type: patch >> >> >> >> >> > options: {peer=int-br-ex} >> >> >> >> >> > Port "vlan50" >> >> >> >> >> > tag: 50 >> >> >> >> >> > Interface "vlan50" >> >> >> >> >> > type: internal >> >> >> >> >> > Port "vlan30" >> >> >> >> >> > tag: 30 >> >> >> >> >> > Interface "vlan30" >> >> >> >> >> > type: internal >> >> >> >> >> > Port "vlan174" >> >> >> >> >> > tag: 174 >> >> >> >> >> > Interface "vlan174" >> >> >> >> >> > type: internal >> >> >> >> >> > Bridge br-int >> >> >> >> >> > fail_mode: secure >> >> >> >> >> > Port br-int >> >> >> >> >> > Interface br-int >> >> >> >> >> > type: internal >> >> >> >> >> > Port patch-tun >> >> >> >> >> > Interface patch-tun >> >> >> >> >> > type: patch >> >> >> >> >> > options: {peer=patch-int} >> >> >> >> >> > Port int-br-ex >> >> >> >> >> > Interface int-br-ex >> >> >> >> >> > type: patch >> >> >> >> >> > options: {peer=phy-br-ex} >> >> >> >> >> > Bridge br-tun >> >> >> >> >> > fail_mode: secure >> >> >> >> >> > Port "gre-0a00140b" >> >> >> >> >> > Interface "gre-0a00140b" >> >> >> >> >> > type: gre >> >> >> >> >> > options: {df_default="true", in_key=flow, >> >> >> >> >> > local_ip="10.0.20.10", >> >> >> >> >> > out_key=flow, remote_ip="10.0.20.11"} >> >> >> >> >> > 
Port patch-int
>> >> >> >> >> > Interface patch-int
>> >> >> >> >> > type: patch
>> >> >> >> >> > options: {peer=patch-tun}
>> >> >> >> >> > Port "gre-0a00140d"
>> >> >> >> >> > Interface "gre-0a00140d"
>> >> >> >> >> > type: gre
>> >> >> >> >> > options: {df_default="true", in_key=flow,
>> >> >> >> >> > local_ip="10.0.20.10",
>> >> >> >> >> > out_key=flow, remote_ip="10.0.20.13"}
>> >> >> >> >> > Port "gre-0a00140c"
>> >> >> >> >> > Interface "gre-0a00140c"
>> >> >> >> >> > type: gre
>> >> >> >> >> > options: {df_default="true", in_key=flow,
>> >> >> >> >> > local_ip="10.0.20.10",
>> >> >> >> >> > out_key=flow, remote_ip="10.0.20.12"}
>> >> >> >> >> > Port br-tun
>> >> >> >> >> > Interface br-tun
>> >> >> >> >> > type: internal
>> >> >> >> >> > ovs_version: "2.4.0"
>> >> >> >> >> >
>> >> >> >> >> > Regards,
>> >> >> >> >> > Pedro Sousa
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea
>> >> >> >> >> >
>> >> >> >> >> > wrote:
>> >> >> >> >> >>
>> >> >> >> >> >> Hi everyone,
>> >> >> >> >> >>
>> >> >> >> >> >> I wrote a blog post about how to deploy a HA with network
>> >> >> >> >> >> isolation
>> >> >> >> >> >> overcloud on top of the virtual environment. I tried to
>> >> >> >> >> >> provide
>> >> >> >> >> >> some
>> >> >> >> >> >> insights into what instack-virt-setup creates and how to
>> >> >> >> >> >> use
>> >> >> >> >> >> the
>> >> >> >> >> >> network isolation templates in the virtual environment. I
>> >> >> >> >> >> hope
>> >> >> >> >> >> you
>> >> >> >> >> >> find it useful.
>> >> >> >> >> >>
>> >> >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/
>> >> >> >> >> >>
>> >> >> >> >> >> Thanks,
>> >> >> >> >> >> Marius
>> >> >> >> >> >>
>> >> >> >> >> >> _______________________________________________
>> >> >> >> >> >> Rdo-list mailing list
>> >> >> >> >> >> Rdo-list at redhat.com
>> >> >> >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list
>> >> >> >> >> >>
>> >> >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
>> >> >> >> >> >
>> >> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >
>> >> >> >
>> >> >
>> >> >
>> >
>> >
>
>

From marius at remote-lab.net Sat Oct 24 12:46:44 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sat, 24 Oct 2015 14:46:44 +0200
Subject: [Rdo-list] Failing overcloud deployment, missing /var/lib/os-collect-config/local-data
In-Reply-To: <0B88AF5A-B2DE-4F6B-99E8-84E247318DD2@namecheap.com>
References: <0B88AF5A-B2DE-4F6B-99E8-84E247318DD2@namecheap.com>
Message-ID: 

Hi Alessandro,

I'm not sure if it relates to your problem, but please note that the
undercloud provides the DHCP service for the overcloud nodes, and if you
have an additional DHCP server (like you do on the 2nd nic network)
things might not work as expected. You should check out the network
isolation deployment[1] for more granular control of the networking
setup.

Now, related to your problem: can you run 'curl http://169.254.169.254'
from one of the overcloud nodes and see if you get any info?

On the undercloud node:

sudo iptables -nL PREROUTING -t nat

You should see something like:
REDIRECT tcp -- 0.0.0.0/0 169.254.169.254 tcp dpt:80 redir ports 8775

nova-api should be listening on the 10.128.0.5 ip, port 8775. If it's
not, check what the logs show.
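For example, a quick sequence along those lines (a rough sketch - the
10.128.0.5 address is taken from your undercloud.conf):

# from one of the overcloud nodes - this should answer rather than time out
curl -v http://169.254.169.254

# on the undercloud - check the NAT rule and that nova-api answers on 8775
sudo iptables -nL PREROUTING -t nat | grep 169.254.169.254
sudo ss -tlnp | grep 8775
curl -s http://10.128.0.5:8775/ && echo "nova-api metadata port reachable"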
[1] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/network_isolation.html

On Fri, Oct 23, 2015 at 10:30 AM, Alessandro Vozza wrote:
> RDO Liberty GA, in KVM (VMs prepared by me, not instack):
>
> undercloud: 2 nics (10.128.1.5 - NAT'ed network for access; 10.128.0.5 -
> isolated network for pxe/provisioning. On the latter 10.128.0.1 is the
> default gw)
> overcloud-nodeX: (10.128.0.x - pxe; 192.168.178.x - DHCP, my home router,
> for external access)
>
> deployed undercloud successfully with:
>
> [DEFAULT]
> image_path = .
> local_ip = 10.128.0.5/24
> local_interface = eth1
> dhcp_start = 10.128.0.30
> dhcp_end = 10.128.0.200
> network_cidr = 10.128.0.0/24
> network_gateway = 10.128.0.1
> inspection_interface = br-ctlplane
> inspection_iprange = 10.128.0.20,10.128.0.29
> inspection_runbench = false
> undercloud_debug = true
> enable_tuskar = false
> enable_tempest = false
> [auth]
>
> built liberty images, loaded, added two nodes, deploying with:
>
> # openstack overcloud deploy --templates --control-scale 1 --compute-scale 1
> --neutron-tunnel-types vxlan --neutron-network-type vxlan
>
> nodes get deployed, but os-collect-config fails with:
>
> Oct 23 08:20:08 overcloud-controller-0 os-collect-config[16422]: 2015-10-23
> 08:20:08.194 16422 WARNING os_collect_config.ec2 [-] ('Connection aborted.',
> error(110, 'Connection timed out'))
> Oct 23 08:20:08 overcloud-controller-0 os-collect-config[16422]: 2015-10-23
> 08:20:08.194 16422 WARNING os-collect-config [-] Source [ec2] Unavailable.
> Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23
> 08:20:09.449 16422 WARNING os_collect_config.heat [-] No auth_url
> configured.
> Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23
> 08:20:09.450 16422 WARNING os_collect_config.request [-] No metadata_url
> configured.
> Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23
> 08:20:09.450 16422 WARNING os-collect-config [-] Source [request]
> Unavailable.
> Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23
> 08:20:09.451 16422 WARNING os_collect_config.local [-]
> /var/lib/os-collect-config/local-data not found.
Skipping > Oct 23 08:20:09 overcloud-controller-0 os-collect-config[16422]: 2015-10-23 > 08:20:09.451 16422 WARNING os_collect_config.local [-] No local metadata > found (['/var/lib/os-collect-config/local-data?]) > > on that node (controller) > > [root at overcloud-controller-0 /]# ip a ; ip r ; more /etc/resolv.conf > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: eth0: mtu 1500 qdisc pfifo_fast state > UP qlen 1000 > link/ether 52:54:00:63:69:d5 brd ff:ff:ff:ff:ff:ff > inet 10.128.0.42/24 brd 10.128.0.255 scope global dynamic eth0 > valid_lft 79203sec preferred_lft 79203sec > inet6 fe80::5054:ff:fe63:69d5/64 scope link > valid_lft forever preferred_lft forever > 3: eth1: mtu 1500 qdisc pfifo_fast state > UP qlen 1000 > link/ether 52:54:00:82:a9:a0 brd ff:ff:ff:ff:ff:ff > inet 192.168.178.128/24 brd 192.168.178.255 scope global dynamic eth1 > valid_lft 856804sec preferred_lft 856804sec > inet6 4006:8241:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1722sec preferred_lft 0sec > inet6 4006:15:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1722sec preferred_lft 0sec > inet6 4006:2e9b:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1722sec preferred_lft 0sec > inet6 4006:3afc:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1720sec preferred_lft 0sec > inet6 4006:aa7a:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1720sec preferred_lft 0sec > inet6 4006:973b:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1720sec preferred_lft 0sec > inet6 4006:4d9d:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1565sec preferred_lft 0sec > inet6 4006:fa40:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1563sec preferred_lft 0sec > inet6 4006:3c22:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1563sec preferred_lft 0sec > inet6 4006:59ed:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1562sec preferred_lft 0sec > inet6 4001:ba0e:c0a8:b29b:5054:ff:fe82:a9a0/64 scope global deprecated > dynamic > valid_lft 1522sec preferred_lft 0sec > inet6 fd00::1:5054:ff:fe82:a9a0/64 scope global dynamic > valid_lft 7040sec preferred_lft 7040sec > inet6 2001:984:30d0:1:5054:ff:fe82:a9a0/64 scope global dynamic > valid_lft 4628sec preferred_lft 1028sec > inet6 fe80::5054:ff:fe82:a9a0/64 scope link > valid_lft forever preferred_lft forever > default via 192.168.178.1 dev eth1 > 10.128.0.0/24 dev eth0 proto kernel scope link src 10.128.0.42 > 169.254.169.254 via 10.128.0.5 dev eth0 proto static > 192.168.178.0/24 dev eth1 proto kernel scope link src 192.168.178.128 > ; generated by /usr/sbin/dhclient-script > search fritz.box > nameserver 192.168.178.1 > > What am I doing wrong here? Why the file > /var/lib/os-collect-config/local-data is not there? 
there's a bunch of other
> files there
>
> [root at overcloud-controller-0 /]# ls /var/lib/os-collect-config/
> cfn.json heat_local.json.orig
> cfn.json.last os_config_files.json
> cfn.json.orig
> overcloud-Controller-yg67rrm4rj6i-0-uos36gscbtmp-NetworkConfig-ackqkwly5ijg-OsNetConfigImpl-hlpyldxrk4d4.json
> heat_local.json
> overcloud-Controller-yg67rrm4rj6i-0-uos36gscbtmp-NetworkConfig-ackqkwly5ijg-OsNetConfigImpl-hlpyldxrk4d4.json.last
> heat_local.json.last
> overcloud-Controller-yg67rrm4rj6i-0-uos36gscbtmp-NetworkConfig-ackqkwly5ijg-OsNetConfigImpl-hlpyldxrk4d4.json.orig
>
>
> Thanks, help appreciated!
>
> P.S. Keep up the awesome work!
>
>
>
> Alessandro Vozza
> alessandro at namecheap.com
> +31643197789
>
>
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From guillaume.pichard at sogeti.com Sun Oct 25 14:36:29 2015
From: guillaume.pichard at sogeti.com (PICHARD, Guillaume)
Date: Sun, 25 Oct 2015 14:36:29 +0000
Subject: [Rdo-list] Error installing Sahara using RDO Liberty
In-Reply-To: <1809678902.81844441.1445678365878.JavaMail.zimbra@redhat.com>
References: , <1809678902.81844441.1445678365878.JavaMail.zimbra@redhat.com>
Message-ID: 

Hi Javier,

Thank you for the patch. After applying it, Sahara can be installed properly. But now I have an error about Trove:

Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-trove' returned 1: Error: No matching Packages to list

I checked, and it looks like openstack-trove is missing in the repo: http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/

Any workaround for that one?

Thanks,
Regards,
Guillaume.

________________________________

Hello everyone,

I wanted to test packstack RDO Liberty. I followed the "getting started" on the RDO website. I used the same packstack arguments as when I used kilo (packstack --allinone --gen-answer-file=/root/packstack.cfg --provision-demo=n --provision-ovs-bridge=n --os-sahara-install=y --os-heat-install=y --os-trove-install=y ) but with Liberty I get the following error:

ERROR : Error appeared during Puppet run: 10.223.186.113_keystone.pp
Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_keystone.pp:205 on node devstack.test.ebd
You will find full trace in log /var/tmp/packstack/20151023-115457-kkzCBm/manifests/10.223.186.113_keystone.pp.log

Full trace is:

Error: NetworkManager is not running.
Warning: Scope(Class[Glance::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Glance::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
Warning: Scope(Class[Glance::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
Warning: Scope(Class[Cinder::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Cinder::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
Warning: Scope(Class[Cinder::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The public_address parameter is deprecated, use ec2_public_url instead.
Warning: Scope(Class[Nova::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead. Warning: Scope(Class[Nova::Keystone::Auth]): The internal_address parameter is deprecated, use ec2_internal_url instead. Warning: Scope(Class[Nova::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead. Warning: Scope(Class[Nova::Keystone::Auth]): The admin_address parameter is deprecated, use ec2_admin_url instead. Warning: Scope(Class[Neutron::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead. Warning: Scope(Class[Neutron::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead. Warning: Scope(Class[Neutron::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead. Warning: Scope(Class[Swift::Keystone::Auth]): The public_address parameter is deprecated, use public_url and public_url_s3 instead. Warning: Scope(Class[Ceilometer::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead. Warning: Scope(Class[Ceilometer::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead. Warning: Scope(Class[Ceilometer::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead. Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_ke ystone.pp:205 on node devstack.test.ebd Wrapped exception: Invalid parameter public_address Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_ke ystone.pp:205 on node devstack.test.ebd Did something changed with packstack Liberty ? Hi Guillaume, It looks like the packstack package is missing a last-minute patch: https://review.openstack.org/234706 . Can you try applying it manually and see if that fixes your issue? Regards, Javier Thanks for your help, Regards, Guillaume. This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alessandro at namecheap.com Sun Oct 25 15:42:09 2015 From: alessandro at namecheap.com (Alessandro Vozza) Date: Sun, 25 Oct 2015 16:42:09 +0100 Subject: [Rdo-list] Error installing Sahara using RDO Liberty In-Reply-To: References: <1809678902.81844441.1445678365878.JavaMail.zimbra@redhat.com> Message-ID: <85650BFA-BE17-45E3-B1FC-4E3431834AFC@namecheap.com> Hi I worked around it by enabling delorean repo (http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/ ); it might not be the best solution but works for me (also for Trove). cheers > On 25 Oct 2015, at 15:36, PICHARD, Guillaume wrote: > > Hi Javier, > > Thank you for the patch. After applying it, Sahara can be installed propertly. But now I have an error about Trove: > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-trove' returned 1: Error: No matching Packages to list > > I checked, and it looks like openstack-trove is missing in the repo: http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/ > > Any workaround about that one ? > > Thanks, > Regards, > Guillaume. > > > Hello everyone, > > I wanted to test packstack RDO Liberty. I followed the ?getting started? on RDO website. I used the same packstack arguments than when I used kilo (packstack --allinone --gen-answer-file=/root/packstack.cfg --provision-demo=n --provision-ovs-bridge=n --os-sahara-install=y --os-heat-install=y --os-trove-install=y ) but with Libery I get the following error: > > ERROR : Error appeared during Puppet run: 10.223.186.113_keystone.pp > Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_keystone.pp:205 on node devstack.test.ebd > You will find full trace in log /var/tmp/packstack/20151023-115457-kkzCBm/manifests/10.223.186.113_keystone.pp.log > > Full trace is: > > Error: NetworkManager is not running. > Warning: Scope(Class[Glance::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead. > Warning: Scope(Class[Glance::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead. > Warning: Scope(Class[Glance::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead. > Warning: Scope(Class[Cinder::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead. > Warning: Scope(Class[Cinder::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead. > Warning: Scope(Class[Cinder::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead. > Warning: Scope(Class[Nova::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead. > Warning: Scope(Class[Nova::Keystone::Auth]): The public_address parameter is deprecated, use ec2_public_url instead. > Warning: Scope(Class[Nova::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead. > Warning: Scope(Class[Nova::Keystone::Auth]): The internal_address parameter is deprecated, use ec2_internal_url instead. > Warning: Scope(Class[Nova::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead. > Warning: Scope(Class[Nova::Keystone::Auth]): The admin_address parameter is deprecated, use ec2_admin_url instead. > Warning: Scope(Class[Neutron::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead. 
> Warning: Scope(Class[Neutron::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
> Warning: Scope(Class[Neutron::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
> Warning: Scope(Class[Swift::Keystone::Auth]): The public_address parameter is deprecated, use public_url and public_url_s3 instead.
> Warning: Scope(Class[Ceilometer::Keystone::Auth]): The public_address parameter is deprecated, use public_url instead.
> Warning: Scope(Class[Ceilometer::Keystone::Auth]): The internal_address parameter is deprecated, use internal_url instead.
> Warning: Scope(Class[Ceilometer::Keystone::Auth]): The admin_address parameter is deprecated, use admin_url instead.
> Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_ke
> ystone.pp:205 on node devstack.test.ebd
> Wrapped exception:
> Invalid parameter public_address
> Error: Invalid parameter public_address on Class[Sahara::Keystone::Auth] at /var/tmp/packstack/6ad74435202e499da0123d628781d411/manifests/10.223.186.113_ke
> ystone.pp:205 on node devstack.test.ebd
>
> Did something changed with packstack Liberty ?
> Hi Guillaume,
>
> It looks like the packstack package is missing a last-minute patch: https://review.openstack.org/234706 . Can you try applying it manually and see if that fixes your issue?
>
> Regards,
> Javier
>
> Thanks for your help,
> Regards,
> Guillaume.
> This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at progbau.de Mon Oct 26 07:18:03 2015
From: contact at progbau.de (Chris)
Date: Mon, 26 Oct 2015 14:18:03 +0700
Subject: [Rdo-list] Openstack Icehouse and Qpid behind HAproxy causes services down
Message-ID: <027801d10fbe$75b1e250$6115a6f0$@progbau.de>

Hello,

We have an Openstack Icehouse cluster setup with two management nodes (Nova, Neutron, Horizon, etc.) as well as qpidd (version 0.18) for the message queue. Everything sits behind an HAproxy setup which round-robins requests to both nodes.
It works fine for a certain amount of time (a couple of days); then all the agents from the compute nodes (Nova, Neutron) show as down in the Horizon web interface. An "openstack-services restart" on both management nodes normally fixes it, and the agents show as up again.

In the Nova logs on the compute nodes I see a lot of messages like the ones below; it seems like the connection to the message queue is lost:

ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager.update_available_resource: Timed out waiting for a reply to message ID b28ae4098c31453c83d963c2a9d6c1ee
[.]
TRACE nova.openstack.common.periodic_task reply, ending = self._poll_connection(msg_id, timeout)
TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 217, in _poll_connection
TRACE nova.openstack.common.periodic_task % msg_id)
TRACE nova.openstack.common.periodic_task MessagingTimeout: Timed out waiting for a reply to message ID b28ae4098c31453c83d963c2a9d6c1ee

Here is the HAproxy configuration for qpidd:

listen qpid_message_broker
bind 10.xxx.xxx.xxx:5672
timeout server 1h
timeout client 1h
timeout connect 240s
server xx-xxxxx-x001 10.xxx.xxx.xx1:5672 check inter 10s rise 9999999 fall 5
server xx-xxxxx-x002 10.xxx.xxx.xx2:5672 check backup

Any ideas or experiences with setting up HAproxy for qpidd? Any help appreciated!

Cheers,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hguemar at fedoraproject.org Mon Oct 26 15:00:03 2015
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 26 Oct 2015 15:00:03 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO meeting
Message-ID: <20151026150003.C23F660A3FD9@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
RDO meeting on 2015-10-28 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about:
RDO IRC meeting ([Agenda: https://etherpad.openstack.org/p/RDO-Packaging](https://etherpad.openstack.org/p/RDO-Packaging))

Every Wednesday on #rdo on Freenode IRC

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From pgsousa at gmail.com Tue Oct 27 11:06:18 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Tue, 27 Oct 2015 11:06:18 +0000
Subject: [Rdo-list] HA with network isolation on virt howto
In-Reply-To: 
References: 
Message-ID: 

Hi Marius,

I've tried to configure the InternalAPI VLAN on the first interface, which doesn't use a bridge; however, it only seems to work if I define the physical device "enp1s0f0", like this:

network_config:
  -
    type: interface
    name: nic1
    use_dhcp: false
    addresses:
      -
        ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
    routes:
      -
        ip_netmask: 169.254.169.254/32
        next_hop: {get_param: EC2MetadataIp}
  -
    type: vlan
    device: enp1s0f0
    vlan_id: {get_param: InternalApiNetworkVlanID}
    addresses:
      -
        ip_netmask: {get_param: InternalApiIpSubnet}

So my question is whether it's possible to create a VLAN attached to an interface without using a bridge, by specifying the physical device?

My understanding is that you only require bridges when you use Tenant or Floating networks, or is it supposed to work that way?
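For reference, the form without the explicit device line - the same vlan stanza stacked on nic1, roughly the snippet below - is the one that did not seem to work here (a sketch, not the exact template):

  -
    type: vlan
    vlan_id: {get_param: InternalApiNetworkVlanID}
    addresses:
      -
        ip_netmask: {get_param: InternalApiIpSubnet}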
Thanks, Pedro Sousa On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea wrote: > Here's an adjusted controller.yaml which disables DHCP on the first > nic: enp1s0f0 so it doesn't get an IP address > http://paste.openstack.org/show/476981/ > > Please note that this assumes that your overcloud nodes are PXE > booting on the 2nd NIC(basically disabling the 1st nic) > > Given your setup(I'm doing some assumptions here so I might be wrong) > I would use the 1st nic for PXE booting and provisioning network and > 2nd nic for running the isolated networks with this kind of template: > http://paste.openstack.org/show/476986/ > > Let me know if it works for you. > > Thanks, > Marius > > On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa wrote: > > Hi, > > > > here you go. > > > > Regards, > > Pedro Sousa > > > > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea > > wrote: > >> > >> Hi Pedro, > >> > >> One issue I can quickly see is that br-ex has assigned the same IP > >> address as enp1s0f0. Can you post the nic templates you used for > >> deployment? > >> > >> 2: enp1s0f0: mtu 1500 qdisc mq state > >> UP qlen 1000 > >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > enp1s0f0 > >> 9: br-ex: mtu 1500 qdisc noqueue state > >> UNKNOWN > >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > >> > >> Thanks, > >> Marius > >> > >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa > wrote: > >> > Hi Marius, > >> > > >> > I've followed your howto and managed to get overcloud deployed in HA, > >> > thanks. However I cannot login to it (via CLI or Horizon) : > >> > > >> > ERROR (Unauthorized): The request you have made requires > authentication. > >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > >> > > >> > So I rebooted the controllers and now I cannot login through > >> > Provisioning > >> > network, seems some openvswitch bridge conf problem, heres my conf: > >> > > >> > # ip a > >> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >> > inet 127.0.0.1/8 scope host lo > >> > valid_lft forever preferred_lft forever > >> > inet6 ::1/128 scope host > >> > valid_lft forever preferred_lft forever > >> > 2: enp1s0f0: mtu 1500 qdisc mq state > >> > UP > >> > qlen 1000 > >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > >> > enp1s0f0 > >> > valid_lft 84562sec preferred_lft 84562sec > >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link > >> > valid_lft forever preferred_lft forever > >> > 3: enp1s0f1: mtu 1500 qdisc mq > master > >> > ovs-system state UP qlen 1000 > >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> > valid_lft forever preferred_lft forever > >> > 4: ovs-system: mtu 1500 qdisc noop state DOWN > >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff > >> > 5: br-tun: mtu 1500 qdisc noop state DOWN > >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff > >> > 6: vlan20: mtu 1500 qdisc noqueue > >> > state > >> > UNKNOWN > >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff > >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 > >> > valid_lft forever preferred_lft forever > >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 > >> > valid_lft forever preferred_lft forever > >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link 
> >> > valid_lft forever preferred_lft forever > >> > 7: vlan40: mtu 1500 qdisc noqueue > >> > state > >> > UNKNOWN > >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff > >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 > >> > valid_lft forever preferred_lft forever > >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link > >> > valid_lft forever preferred_lft forever > >> > 8: vlan174: mtu 1500 qdisc noqueue > >> > state > >> > UNKNOWN > >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff > >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174 > >> > valid_lft forever preferred_lft forever > >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174 > >> > valid_lft forever preferred_lft forever > >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link > >> > valid_lft forever preferred_lft forever > >> > 9: br-ex: mtu 1500 qdisc noqueue > state > >> > UNKNOWN > >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > >> > valid_lft forever preferred_lft forever > >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> > valid_lft forever preferred_lft forever > >> > 10: vlan50: mtu 1500 qdisc noqueue > >> > state > >> > UNKNOWN > >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff > >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 > >> > valid_lft forever preferred_lft forever > >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link > >> > valid_lft forever preferred_lft forever > >> > 11: vlan30: mtu 1500 qdisc noqueue > >> > state > >> > UNKNOWN > >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff > >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 > >> > valid_lft forever preferred_lft forever > >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 > >> > valid_lft forever preferred_lft forever > >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link > >> > valid_lft forever preferred_lft forever > >> > 12: br-int: mtu 1500 qdisc noop state DOWN > >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff > >> > > >> > > >> > # ovs-vsctl show > >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 > >> > Bridge br-ex > >> > Port br-ex > >> > Interface br-ex > >> > type: internal > >> > Port "enp1s0f1" > >> > Interface "enp1s0f1" > >> > Port "vlan40" > >> > tag: 40 > >> > Interface "vlan40" > >> > type: internal > >> > Port "vlan20" > >> > tag: 20 > >> > Interface "vlan20" > >> > type: internal > >> > Port phy-br-ex > >> > Interface phy-br-ex > >> > type: patch > >> > options: {peer=int-br-ex} > >> > Port "vlan50" > >> > tag: 50 > >> > Interface "vlan50" > >> > type: internal > >> > Port "vlan30" > >> > tag: 30 > >> > Interface "vlan30" > >> > type: internal > >> > Port "vlan174" > >> > tag: 174 > >> > Interface "vlan174" > >> > type: internal > >> > Bridge br-int > >> > fail_mode: secure > >> > Port br-int > >> > Interface br-int > >> > type: internal > >> > Port patch-tun > >> > Interface patch-tun > >> > type: patch > >> > options: {peer=patch-int} > >> > Port int-br-ex > >> > Interface int-br-ex > >> > type: patch > >> > options: {peer=phy-br-ex} > >> > Bridge br-tun > >> > fail_mode: secure > >> > Port "gre-0a00140b" > >> > Interface "gre-0a00140b" > >> > type: gre > >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > >> > out_key=flow, remote_ip="10.0.20.11"} > >> > Port patch-int > >> > Interface patch-int > >> > type: patch > >> > options: {peer=patch-tun} > >> > Port "gre-0a00140d" > >> > Interface "gre-0a00140d" > >> 
> type: gre
> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10",
> >> > out_key=flow, remote_ip="10.0.20.13"}
> >> > Port "gre-0a00140c"
> >> > Interface "gre-0a00140c"
> >> > type: gre
> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10",
> >> > out_key=flow, remote_ip="10.0.20.12"}
> >> > Port br-tun
> >> > Interface br-tun
> >> > type: internal
> >> > ovs_version: "2.4.0"
> >> >
> >> > Regards,
> >> > Pedro Sousa
> >> >
> >> >
> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea
> >> >
> >> > wrote:
> >> >>
> >> >> Hi everyone,
> >> >>
> >> >> I wrote a blog post about how to deploy a HA with network isolation
> >> >> overcloud on top of the virtual environment. I tried to provide some
> >> >> insights into what instack-virt-setup creates and how to use the
> >> >> network isolation templates in the virtual environment. I hope you
> >> >> find it useful.
> >> >>
> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/
> >> >>
> >> >> Thanks,
> >> >> Marius
> >> >>
> >> >> _______________________________________________
> >> >> Rdo-list mailing list
> >> >> Rdo-list at redhat.com
> >> >> https://www.redhat.com/mailman/listinfo/rdo-list
> >> >>
> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
> >> >
> >> >
> >
> >

From shayne.alone at gmail.com Tue Oct 27 11:59:39 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Tue, 27 Oct 2015 11:59:39 +0000
Subject: [Rdo-list] Overcloud features [ Trove Sahara ]
Message-ID: 

Hi guys;

I want to know how I can add extra features on a deployed overcloud.
For example, I want to add Sahara and Trove on my Liberty.

--
Sincerely,
Ali R. Taleghani
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ibravo at ltgfederal.com Tue Oct 27 12:27:54 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Tue, 27 Oct 2015 08:27:54 -0400
Subject: [Rdo-list] Overcloud features [ Trove Sahara ]
In-Reply-To: 
References: 
Message-ID: <8A956485-6EC3-40EB-A00B-98F5B8D7556F@ltgfederal.com>

Ali,

This is a great topic. I know that you can modify the heat templates so that when the new controller nodes are installed, the first time around, they do have this functionality.

The question is how to do it in an environment that is already deployed. I would think that you could modify the heat template, bring one controller node down, and redeploy using the updated heat template. This might work, but I don't know if it is the proper way of doing it.
Ultimately you would want the new configurations to be part of the heat template, so that if you need a new node of that type, you can use ironic/heat to add nodes to the environment.

Ideas?



__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com
Office: (703) 951-7760

> On Oct 27, 2015, at 7:59 AM, AliReza Taleghani wrote:
>
> Hi guys;
>
> I want to know how I can add extra features on a deployed overcloud.
> For example, I want to add Sahara and Trove on my Liberty.
>
> --
> Sincerely,
> Ali R. Taleghani
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From marius at remote-lab.net Tue Oct 27 13:06:19 2015 From: marius at remote-lab.net (Marius Cornea) Date: Tue, 27 Oct 2015 14:06:19 +0100 Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: References: Message-ID: Hi Pedro, Afaik in order to use a vlan interface you need to set it as part of a bridge - it actually gets created as an internal port within the ovs bridge with the specified vlan tag. Is there any specific reason you don't want to use a bridge for this? I believe your understanding relates to the Neutron configuration. In regards to the network isolation the Tenant network relates to the network used for setting up the overlay networks tunnels ( which in turn will run the tenant networks created after deployment ). On Tue, Oct 27, 2015 at 12:06 PM, Pedro Sousa wrote: > Hi Marius, > > I've tried to configure InternalAPI VLAN on the first interface that doesn't > use a bridge, however it only seems to work if I define the physical device > "enp1s0f0" like this: > > network_config: > - > type: interface > name: nic1 > use_dhcp: false > addresses: > - > ip_netmask: > list_join: > - '/' > - - {get_param: ControlPlaneIp} > - {get_param: ControlPlaneSubnetCidr} > routes: > - > ip_netmask: 169.254.169.254/32 > next_hop: {get_param: EC2MetadataIp} > - > type: vlan > device: enp1s0f0 > vlan_id: {get_param: InternalApiNetworkVlanID} > addresses: > - > ip_netmask: {get_param: InternalApiIpSubnet} > > > So my question is if it's possible to create a VLAN attached to interface > without using a bridge and specifying the physical device? > > My understanding is that you only require bridges when you use Tenant or > Floating networks, or is it supposed to work that way? > > Thanks, > Pedro Sousa > > > > > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea > wrote: >> >> Here's an adjusted controller.yaml which disables DHCP on the first >> nic: enp1s0f0 so it doesn't get an IP address >> http://paste.openstack.org/show/476981/ >> >> Please note that this assumes that your overcloud nodes are PXE >> booting on the 2nd NIC(basically disabling the 1st nic) >> >> Given your setup(I'm doing some assumptions here so I might be wrong) >> I would use the 1st nic for PXE booting and provisioning network and >> 2nd nic for running the isolated networks with this kind of template: >> http://paste.openstack.org/show/476986/ >> >> Let me know if it works for you. >> >> Thanks, >> Marius >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa wrote: >> > Hi, >> > >> > here you go. >> > >> > Regards, >> > Pedro Sousa >> > >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea >> > wrote: >> >> >> >> Hi Pedro, >> >> >> >> One issue I can quickly see is that br-ex has assigned the same IP >> >> address as enp1s0f0. Can you post the nic templates you used for >> >> deployment? >> >> >> >> 2: enp1s0f0: mtu 1500 qdisc mq state >> >> UP qlen 1000 >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic >> >> enp1s0f0 >> >> 9: br-ex: mtu 1500 qdisc noqueue >> >> state >> >> UNKNOWN >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex >> >> >> >> Thanks, >> >> Marius >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa >> >> wrote: >> >> > Hi Marius, >> >> > >> >> > I've followed your howto and managed to get overcloud deployed in HA, >> >> > thanks. 
However I cannot login to it (via CLI or Horizon) : >> >> > >> >> > ERROR (Unauthorized): The request you have made requires >> >> > authentication. >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) >> >> > >> >> > So I rebooted the controllers and now I cannot login through >> >> > Provisioning >> >> > network, seems some openvswitch bridge conf problem, heres my conf: >> >> > >> >> > # ip a >> >> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> >> > inet 127.0.0.1/8 scope host lo >> >> > valid_lft forever preferred_lft forever >> >> > inet6 ::1/128 scope host >> >> > valid_lft forever preferred_lft forever >> >> > 2: enp1s0f0: mtu 1500 qdisc mq >> >> > state >> >> > UP >> >> > qlen 1000 >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic >> >> > enp1s0f0 >> >> > valid_lft 84562sec preferred_lft 84562sec >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 3: enp1s0f1: mtu 1500 qdisc mq >> >> > master >> >> > ovs-system state UP qlen 1000 >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 4: ovs-system: mtu 1500 qdisc noop state DOWN >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff >> >> > 5: br-tun: mtu 1500 qdisc noop state DOWN >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff >> >> > 6: vlan20: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 >> >> > valid_lft forever preferred_lft forever >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 7: vlan40: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 8: vlan174: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174 >> >> > valid_lft forever preferred_lft forever >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 9: br-ex: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 10: vlan50: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 
11: vlan30: mtu 1500 qdisc noqueue >> >> > state >> >> > UNKNOWN >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 >> >> > valid_lft forever preferred_lft forever >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 >> >> > valid_lft forever preferred_lft forever >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link >> >> > valid_lft forever preferred_lft forever >> >> > 12: br-int: mtu 1500 qdisc noop state DOWN >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff >> >> > >> >> > >> >> > # ovs-vsctl show >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 >> >> > Bridge br-ex >> >> > Port br-ex >> >> > Interface br-ex >> >> > type: internal >> >> > Port "enp1s0f1" >> >> > Interface "enp1s0f1" >> >> > Port "vlan40" >> >> > tag: 40 >> >> > Interface "vlan40" >> >> > type: internal >> >> > Port "vlan20" >> >> > tag: 20 >> >> > Interface "vlan20" >> >> > type: internal >> >> > Port phy-br-ex >> >> > Interface phy-br-ex >> >> > type: patch >> >> > options: {peer=int-br-ex} >> >> > Port "vlan50" >> >> > tag: 50 >> >> > Interface "vlan50" >> >> > type: internal >> >> > Port "vlan30" >> >> > tag: 30 >> >> > Interface "vlan30" >> >> > type: internal >> >> > Port "vlan174" >> >> > tag: 174 >> >> > Interface "vlan174" >> >> > type: internal >> >> > Bridge br-int >> >> > fail_mode: secure >> >> > Port br-int >> >> > Interface br-int >> >> > type: internal >> >> > Port patch-tun >> >> > Interface patch-tun >> >> > type: patch >> >> > options: {peer=patch-int} >> >> > Port int-br-ex >> >> > Interface int-br-ex >> >> > type: patch >> >> > options: {peer=phy-br-ex} >> >> > Bridge br-tun >> >> > fail_mode: secure >> >> > Port "gre-0a00140b" >> >> > Interface "gre-0a00140b" >> >> > type: gre >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> >> > out_key=flow, remote_ip="10.0.20.11"} >> >> > Port patch-int >> >> > Interface patch-int >> >> > type: patch >> >> > options: {peer=patch-tun} >> >> > Port "gre-0a00140d" >> >> > Interface "gre-0a00140d" >> >> > type: gre >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> >> > out_key=flow, remote_ip="10.0.20.13"} >> >> > Port "gre-0a00140c" >> >> > Interface "gre-0a00140c" >> >> > type: gre >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", >> >> > out_key=flow, remote_ip="10.0.20.12"} >> >> > Port br-tun >> >> > Interface br-tun >> >> > type: internal >> >> > ovs_version: "2.4.0" >> >> > >> >> > Regards, >> >> > Pedro Sousa >> >> > >> >> > >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea >> >> > >> >> > wrote: >> >> >> >> >> >> Hi everyone, >> >> >> >> >> >> I wrote a blog post about how to deploy a HA with network isolation >> >> >> overcloud on top of the virtual environment. I tried to provide some >> >> >> insights into what instack-virt-setup creates and how to use the >> >> >> network isolation templates in the virtual environment. I hope you >> >> >> find it useful. 
>> >> >>
>> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/
>> >> >>
>> >> >> Thanks,
>> >> >> Marius
>> >> >>
>> >> >> _______________________________________________
>> >> >> Rdo-list mailing list
>> >> >> Rdo-list at redhat.com
>> >> >> https://www.redhat.com/mailman/listinfo/rdo-list
>> >> >>
>> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
>> >> >
>> >> >
>> >
>> >
>
>

From shayne.alone at gmail.com Tue Oct 27 12:46:21 2015
From: shayne.alone at gmail.com (AliReza Taleghani)
Date: Tue, 27 Oct 2015 12:46:21 +0000
Subject: [Rdo-list] Overcloud features [ Trove Sahara ]
In-Reply-To: <8A956485-6EC3-40EB-A00B-98F5B8D7556F@ltgfederal.com>
References: <8A956485-6EC3-40EB-A00B-98F5B8D7556F@ltgfederal.com>
Message-ID: 

Hmm... it's still not clear to me how I should do that...
Even on a fresh overcloud deployment, how can I add extra features like Sahara? Is there a guide for this?

On Tue, Oct 27, 2015 at 3:57 PM Ignacio Bravo wrote:

> Ali,
>
> This is a great topic. I know that you can modify the heat templates so
> that when the new controller nodes are installed, the first time around,
> they do have this functionality.
>
> The question is how to do it in an environment that is already deployed. I would
> think that you could modify the heat template, bring one controller node
> down, and redeploy using the updated heat template. This might work, but
> I don't know if it is the proper way of doing it.
> Ultimately you would want the new configurations to be part of the heat
> template, so that if you need a new node of that type, you can use
> ironic/heat to add nodes to the environment.
>
> Ideas?
>
>
>
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
> Office: (703) 951-7760
>
> On Oct 27, 2015, at 7:59 AM, AliReza Taleghani
> wrote:
>
> Hi guys;
>
> I want to know how I can add extra features on a deployed overcloud.
> For example, I want to add Sahara and Trove on my Liberty.
>
> --
> Sincerely,
> Ali R. Taleghani
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
>
--
Sincerely,
Ali R. Taleghani
-------------- next part --------------
An HTML attachment was scrubbed...
> > On Tue, Oct 27, 2015 at 12:06 PM, Pedro Sousa wrote: > > Hi Marius, > > > > I've tried to configure InternalAPI VLAN on the first interface that > doesn't > > use a bridge, however it only seems to work if I define the physical > device > > "enp1s0f0" like this: > > > > network_config: > > - > > type: interface > > name: nic1 > > use_dhcp: false > > addresses: > > - > > ip_netmask: > > list_join: > > - '/' > > - - {get_param: ControlPlaneIp} > > - {get_param: ControlPlaneSubnetCidr} > > routes: > > - > > ip_netmask: 169.254.169.254/32 > > next_hop: {get_param: EC2MetadataIp} > > - > > type: vlan > > device: enp1s0f0 > > vlan_id: {get_param: InternalApiNetworkVlanID} > > addresses: > > - > > ip_netmask: {get_param: InternalApiIpSubnet} > > > > > > So my question is if it's possible to create a VLAN attached to interface > > without using a bridge and specifying the physical device? > > > > My understanding is that you only require bridges when you use Tenant or > > Floating networks, or is it supposed to work that way? > > > > Thanks, > > Pedro Sousa > > > > > > > > > > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea > > wrote: > >> > >> Here's an adjusted controller.yaml which disables DHCP on the first > >> nic: enp1s0f0 so it doesn't get an IP address > >> http://paste.openstack.org/show/476981/ > >> > >> Please note that this assumes that your overcloud nodes are PXE > >> booting on the 2nd NIC(basically disabling the 1st nic) > >> > >> Given your setup(I'm doing some assumptions here so I might be wrong) > >> I would use the 1st nic for PXE booting and provisioning network and > >> 2nd nic for running the isolated networks with this kind of template: > >> http://paste.openstack.org/show/476986/ > >> > >> Let me know if it works for you. > >> > >> Thanks, > >> Marius > >> > >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa wrote: > >> > Hi, > >> > > >> > here you go. > >> > > >> > Regards, > >> > Pedro Sousa > >> > > >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea < > marius at remote-lab.net> > >> > wrote: > >> >> > >> >> Hi Pedro, > >> >> > >> >> One issue I can quickly see is that br-ex has assigned the same IP > >> >> address as enp1s0f0. Can you post the nic templates you used for > >> >> deployment? > >> >> > >> >> 2: enp1s0f0: mtu 1500 qdisc mq > state > >> >> UP qlen 1000 > >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > >> >> enp1s0f0 > >> >> 9: br-ex: mtu 1500 qdisc noqueue > >> >> state > >> >> UNKNOWN > >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > >> >> > >> >> Thanks, > >> >> Marius > >> >> > >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa > >> >> wrote: > >> >> > Hi Marius, > >> >> > > >> >> > I've followed your howto and managed to get overcloud deployed in > HA, > >> >> > thanks. However I cannot login to it (via CLI or Horizon) : > >> >> > > >> >> > ERROR (Unauthorized): The request you have made requires > >> >> > authentication. 
> >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > >> >> > > >> >> > So I rebooted the controllers and now I cannot login through > >> >> > Provisioning > >> >> > network, seems some openvswitch bridge conf problem, heres my conf: > >> >> > > >> >> > # ip a > >> >> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >> >> > inet 127.0.0.1/8 scope host lo > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 ::1/128 scope host > >> >> > valid_lft forever preferred_lft forever > >> >> > 2: enp1s0f0: mtu 1500 qdisc mq > >> >> > state > >> >> > UP > >> >> > qlen 1000 > >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > >> >> > enp1s0f0 > >> >> > valid_lft 84562sec preferred_lft 84562sec > >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 3: enp1s0f1: mtu 1500 qdisc mq > >> >> > master > >> >> > ovs-system state UP qlen 1000 > >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 4: ovs-system: mtu 1500 qdisc noop state DOWN > >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff > >> >> > 5: br-tun: mtu 1500 qdisc noop state DOWN > >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff > >> >> > 6: vlan20: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 7: vlan40: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 8: vlan174: mtu 1500 qdisc > noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global > vlan174 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global > vlan174 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 9: br-ex: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 10: vlan50: mtu 1500 qdisc > noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff > >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link > >> >> > valid_lft forever preferred_lft 
forever > >> >> > 11: vlan30: mtu 1500 qdisc > noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 12: br-int: mtu 1500 qdisc noop state DOWN > >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff > >> >> > > >> >> > > >> >> > # ovs-vsctl show > >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 > >> >> > Bridge br-ex > >> >> > Port br-ex > >> >> > Interface br-ex > >> >> > type: internal > >> >> > Port "enp1s0f1" > >> >> > Interface "enp1s0f1" > >> >> > Port "vlan40" > >> >> > tag: 40 > >> >> > Interface "vlan40" > >> >> > type: internal > >> >> > Port "vlan20" > >> >> > tag: 20 > >> >> > Interface "vlan20" > >> >> > type: internal > >> >> > Port phy-br-ex > >> >> > Interface phy-br-ex > >> >> > type: patch > >> >> > options: {peer=int-br-ex} > >> >> > Port "vlan50" > >> >> > tag: 50 > >> >> > Interface "vlan50" > >> >> > type: internal > >> >> > Port "vlan30" > >> >> > tag: 30 > >> >> > Interface "vlan30" > >> >> > type: internal > >> >> > Port "vlan174" > >> >> > tag: 174 > >> >> > Interface "vlan174" > >> >> > type: internal > >> >> > Bridge br-int > >> >> > fail_mode: secure > >> >> > Port br-int > >> >> > Interface br-int > >> >> > type: internal > >> >> > Port patch-tun > >> >> > Interface patch-tun > >> >> > type: patch > >> >> > options: {peer=patch-int} > >> >> > Port int-br-ex > >> >> > Interface int-br-ex > >> >> > type: patch > >> >> > options: {peer=phy-br-ex} > >> >> > Bridge br-tun > >> >> > fail_mode: secure > >> >> > Port "gre-0a00140b" > >> >> > Interface "gre-0a00140b" > >> >> > type: gre > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > >> >> > out_key=flow, remote_ip="10.0.20.11"} > >> >> > Port patch-int > >> >> > Interface patch-int > >> >> > type: patch > >> >> > options: {peer=patch-tun} > >> >> > Port "gre-0a00140d" > >> >> > Interface "gre-0a00140d" > >> >> > type: gre > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > >> >> > out_key=flow, remote_ip="10.0.20.13"} > >> >> > Port "gre-0a00140c" > >> >> > Interface "gre-0a00140c" > >> >> > type: gre > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > >> >> > out_key=flow, remote_ip="10.0.20.12"} > >> >> > Port br-tun > >> >> > Interface br-tun > >> >> > type: internal > >> >> > ovs_version: "2.4.0" > >> >> > > >> >> > Regards, > >> >> > Pedro Sousa > >> >> > > >> >> > > >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > >> >> > > >> >> > wrote: > >> >> >> > >> >> >> Hi everyone, > >> >> >> > >> >> >> I wrote a blog post about how to deploy a HA with network > isolation > >> >> >> overcloud on top of the virtual environment. I tried to provide > some > >> >> >> insights into what instack-virt-setup creates and how to use the > >> >> >> network isolation templates in the virtual environment. I hope you > >> >> >> find it useful. 
> >> >> >> > >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ > >> >> >> > >> >> >> Thanks, > >> >> >> Marius > >> >> >> > >> >> >> _______________________________________________ > >> >> >> Rdo-list mailing list > >> >> >> Rdo-list at redhat.com > >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list > >> >> >> > >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> > > >> >> > > >> > > >> > > > > >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sasha at redhat.com  Tue Oct 27 14:40:58 2015
From: sasha at redhat.com (Sasha Chuzhoy)
Date: Tue, 27 Oct 2015 10:40:58 -0400 (EDT)
Subject: [Rdo-list] HA with network isolation on virt howto
In-Reply-To: 
References: 
Message-ID: <181528904.65879741.1445956858269.JavaMail.zimbra@redhat.com>

Hi,
IIUC, this can work with a tagged vlan used for the internalapi.

The relevant yaml file snippet (in this example nic1 carries both the provisioning and the internalapi networks):

    type: ovs_bridge
    name: br-nic1
    use_dhcp: false
    dns_servers: {get_param: DnsServers}
    addresses:
      -
        ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
    routes:
      -
        ip_netmask: 169.254.169.254/32
        next_hop: {get_param: EC2MetadataIp}
      -
        default: true
        next_hop: {get_param: ControlPlaneDefaultRoute}
    members:
      -
        type: interface
        name: nic1
        primary: true
      -
        type: vlan
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          -
            ip_netmask: {get_param: InternalApiIpSubnet}

The InternalApiNetworkVlanID (if different from 20, which is the default) has to be set accordingly.

Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Pedro Sousa" 
> To: "Marius Cornea" 
> Cc: "rdo-list" 
> Sent: Tuesday, October 27, 2015 10:06:44 AM
> Subject: Re: [Rdo-list] HA with network isolation on virt howto
> 
> Hi Marius,
> 
> the reason is that I would like, for example, to use the internalapi network with
> the provisioning network on the same interface, and since provisioning doesn't
> use a bridge I wondered if this is possible.
> 
> As I said, I was actually able to deploy the overcloud with internalapi without
> the bridge, but I had to specify the physical interface "device: enp1s0f0" in
> my heat template.
> 
> Thanks
> Pedro Sousa
> 
> On Tue, Oct 27, 2015 at 1:06 PM, Marius Cornea < marius at remote-lab.net >
> wrote:
> 
> > Hi Pedro,
> 
> Afaik in order to use a vlan interface you need to set it as part of a
> bridge - it actually gets created as an internal port within the ovs
> bridge with the specified vlan tag. Is there any specific reason you
> don't want to use a bridge for this?
> 
> I believe your understanding relates to the Neutron configuration. In
> regards to the network isolation, the Tenant network relates to the
> network used for setting up the overlay network tunnels (which in
> turn will carry the tenant networks created after deployment).
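> 
> To illustrate the first point, here is a rough sketch (untested; br-nic1
> and vlan20 are just example names taken from the templates in this
> thread, and the output is trimmed): after deployment the vlan defined
> under the ovs_bridge shows up on the node as an internal port carrying
> the tag, e.g. in the output of:
> 
>     sudo ovs-vsctl show
> 
>     Bridge br-nic1
>         Port "vlan20"
>             tag: 20
>             Interface "vlan20"
>                 type: internal
> 
> This is the same pattern as the vlan20/vlan30/vlan40 ports in the
> ovs-vsctl output quoted earlier in this thread.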
> On Tue, Oct 27, 2015 at 12:06 PM, Pedro Sousa < pgsousa at gmail.com > wrote:
> > Hi Marius,
> >
> > I've tried to configure the InternalAPI VLAN on the first interface, which
> > doesn't use a bridge, however it only seems to work if I define the physical
> > device "enp1s0f0" like this:
> >
> > network_config:
> >   -
> >     type: interface
> >     name: nic1
> >     use_dhcp: false
> >     addresses:
> >       -
> >         ip_netmask:
> >           list_join:
> >             - '/'
> >             - - {get_param: ControlPlaneIp}
> >               - {get_param: ControlPlaneSubnetCidr}
> >     routes:
> >       -
> >         ip_netmask: 169.254.169.254/32
> >         next_hop: {get_param: EC2MetadataIp}
> >   -
> >     type: vlan
> >     device: enp1s0f0
> >     vlan_id: {get_param: InternalApiNetworkVlanID}
> >     addresses:
> >       -
> >         ip_netmask: {get_param: InternalApiIpSubnet}
> >
> > So my question is whether it's possible to create a VLAN attached to an
> > interface without using a bridge and without having to specify the
> > physical device?
> >
> > My understanding is that you only require bridges when you use Tenant or
> > Floating networks, or is it supposed to work that way?
> >
> > Thanks,
> > Pedro Sousa
> >
> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea < marius at remote-lab.net >
> > wrote:
> >>
> >> Here's an adjusted controller.yaml which disables DHCP on the first
> >> nic: enp1s0f0 so it doesn't get an IP address
> >> http://paste.openstack.org/show/476981/
> >>
> >> Please note that this assumes that your overcloud nodes are PXE
> >> booting on the 2nd NIC (basically disabling the 1st nic)
> >>
> >> Given your setup (I'm making some assumptions here so I might be wrong)
> >> I would use the 1st nic for PXE booting and the provisioning network and
> >> the 2nd nic for running the isolated networks with this kind of template:
> >> http://paste.openstack.org/show/476986/
> >>
> >> Let me know if it works for you.
> >>
> >> Thanks,
> >> Marius
> >>
> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa < pgsousa at gmail.com > wrote:
> >> > Hi,
> >> >
> >> > here you go.
> >> >
> >> > Regards,
> >> > Pedro Sousa
> >> >
> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea < marius at remote-lab.net >
> >> > wrote:
> >> >>
> >> >> Hi Pedro,
> >> >>
> >> >> One issue I can quickly see is that br-ex has been assigned the same IP
> >> >> address as enp1s0f0. Can you post the nic templates you used for
> >> >> deployment?
> >> >>
> >> >> 2: enp1s0f0: mtu 1500 qdisc mq state UP qlen 1000
> >> >>     link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
> >> >>     inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
> >> >> 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN
> >> >>     link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
> >> >>     inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
> >> >>
> >> >> Thanks,
> >> >> Marius
> >> >>
> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa < pgsousa at gmail.com >
> >> >> wrote:
> >> >> > Hi Marius,
> >> >> >
> >> >> > I've followed your howto and managed to get the overcloud deployed in HA,
> >> >> > thanks. However I cannot log in to it (via CLI or Horizon):
> >> >> >
> >> >> > ERROR (Unauthorized): The request you have made requires authentication.
> >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > >> >> > > >> >> > So I rebooted the controllers and now I cannot login through > >> >> > Provisioning > >> >> > network, seems some openvswitch bridge conf problem, heres my conf: > >> >> > > >> >> > # ip a > >> >> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >> >> > inet 127.0.0.1/8 scope host lo > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 ::1/128 scope host > >> >> > valid_lft forever preferred_lft forever > >> >> > 2: enp1s0f0: mtu 1500 qdisc mq > >> >> > state > >> >> > UP > >> >> > qlen 1000 > >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > >> >> > enp1s0f0 > >> >> > valid_lft 84562sec preferred_lft 84562sec > >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 3: enp1s0f1: mtu 1500 qdisc mq > >> >> > master > >> >> > ovs-system state UP qlen 1000 > >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 4: ovs-system: mtu 1500 qdisc noop state DOWN > >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff > >> >> > 5: br-tun: mtu 1500 qdisc noop state DOWN > >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff > >> >> > 6: vlan20: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 7: vlan40: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 8: vlan174: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 9: br-ex: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 10: vlan50: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff > >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link > >> >> > valid_lft forever preferred_lft forever > 
>> >> > 11: vlan30: mtu 1500 qdisc noqueue > >> >> > state > >> >> > UNKNOWN > >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff > >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 > >> >> > valid_lft forever preferred_lft forever > >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link > >> >> > valid_lft forever preferred_lft forever > >> >> > 12: br-int: mtu 1500 qdisc noop state DOWN > >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff > >> >> > > >> >> > > >> >> > # ovs-vsctl show > >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 > >> >> > Bridge br-ex > >> >> > Port br-ex > >> >> > Interface br-ex > >> >> > type: internal > >> >> > Port "enp1s0f1" > >> >> > Interface "enp1s0f1" > >> >> > Port "vlan40" > >> >> > tag: 40 > >> >> > Interface "vlan40" > >> >> > type: internal > >> >> > Port "vlan20" > >> >> > tag: 20 > >> >> > Interface "vlan20" > >> >> > type: internal > >> >> > Port phy-br-ex > >> >> > Interface phy-br-ex > >> >> > type: patch > >> >> > options: {peer=int-br-ex} > >> >> > Port "vlan50" > >> >> > tag: 50 > >> >> > Interface "vlan50" > >> >> > type: internal > >> >> > Port "vlan30" > >> >> > tag: 30 > >> >> > Interface "vlan30" > >> >> > type: internal > >> >> > Port "vlan174" > >> >> > tag: 174 > >> >> > Interface "vlan174" > >> >> > type: internal > >> >> > Bridge br-int > >> >> > fail_mode: secure > >> >> > Port br-int > >> >> > Interface br-int > >> >> > type: internal > >> >> > Port patch-tun > >> >> > Interface patch-tun > >> >> > type: patch > >> >> > options: {peer=patch-int} > >> >> > Port int-br-ex > >> >> > Interface int-br-ex > >> >> > type: patch > >> >> > options: {peer=phy-br-ex} > >> >> > Bridge br-tun > >> >> > fail_mode: secure > >> >> > Port "gre-0a00140b" > >> >> > Interface "gre-0a00140b" > >> >> > type: gre > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > >> >> > out_key=flow, remote_ip="10.0.20.11"} > >> >> > Port patch-int > >> >> > Interface patch-int > >> >> > type: patch > >> >> > options: {peer=patch-tun} > >> >> > Port "gre-0a00140d" > >> >> > Interface "gre-0a00140d" > >> >> > type: gre > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > >> >> > out_key=flow, remote_ip="10.0.20.13"} > >> >> > Port "gre-0a00140c" > >> >> > Interface "gre-0a00140c" > >> >> > type: gre > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > >> >> > out_key=flow, remote_ip="10.0.20.12"} > >> >> > Port br-tun > >> >> > Interface br-tun > >> >> > type: internal > >> >> > ovs_version: "2.4.0" > >> >> > > >> >> > Regards, > >> >> > Pedro Sousa > >> >> > > >> >> > > >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > >> >> > < marius at remote-lab.net > > >> >> > wrote: > >> >> >> > >> >> >> Hi everyone, > >> >> >> > >> >> >> I wrote a blog post about how to deploy a HA with network isolation > >> >> >> overcloud on top of the virtual environment. I tried to provide some > >> >> >> insights into what instack-virt-setup creates and how to use the > >> >> >> network isolation templates in the virtual environment. I hope you > >> >> >> find it useful. 
> >> >> >> > >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ > >> >> >> > >> >> >> Thanks, > >> >> >> Marius > >> >> >> > >> >> >> _______________________________________________ > >> >> >> Rdo-list mailing list > >> >> >> Rdo-list at redhat.com > >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list > >> >> >> > >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> >> > > >> >> > > >> > > >> > > > > > >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgsousa at gmail.com  Tue Oct 27 14:28:09 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Tue, 27 Oct 2015 14:28:09 +0000
Subject: [Rdo-list] Overcloud features [ Trove Sahara ]
In-Reply-To: 
References: <8A956485-6EC3-40EB-A00B-98F5B8D7556F@ltgfederal.com>
Message-ID: 

Hi Ali,

I guess you need to start here:
https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/extra_config.html

Regards,
Pedro Sousa

On Tue, Oct 27, 2015 at 12:46 PM, AliReza Taleghani wrote:

> em... it's still not clear to me how I should do that...
> Even on a fresh overcloud deployment, how can I add extra features like
> Sahara? Is there a guide for this?
>
>
> On Tue, Oct 27, 2015 at 3:57 PM Ignacio Bravo 
> wrote:
>
>> Ali,
>>
>> This is a great topic. I know that you can modify the heat templates so
>> that when the new controller nodes are installed, the first time around,
>> they do have this functionality.
>>
>> The question is how to do it in an environment that is already deployed.
>> I would think that you could modify the heat template, bring one controller
>> node down, and redeploy using the updated heat template. This might work
>> but I don't know if this is the proper way of doing it.
>> Ultimately you would want the new configurations to be part of the heat
>> template, so that if you need a new node of that type, you can use
>> ironic/heat to add nodes to the environment.
>>
>> Ideas?
>>
>>
>>
>> __
>> Ignacio Bravo
>> LTG Federal, Inc
>> www.ltgfederal.com
>> Office: (703) 951-7760
>>
>> On Oct 27, 2015, at 7:59 AM, AliReza Taleghani 
>> wrote:
>>
>> Hi guys;
>>
>> I want to know how I can add extra features on a deployed overcloud.
>> For example, I want to add Sahara and Trove on my Liberty deployment.
>>
>> --
>> Sincerely,
>> Ali R. Taleghani
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>>
>> --
> Sincerely,
> Ali R. Taleghani
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nuno.loureiro at itcenter.com.pt  Tue Oct 27 17:10:17 2015
From: nuno.loureiro at itcenter.com.pt (Nuno Loureiro)
Date: Tue, 27 Oct 2015 17:10:17 +0000
Subject: [Rdo-list] HA Overcloud deployment with network isolation in VLAN mode
Message-ID: 


Hi all!

I'm deploying an HA overcloud with 3 controller nodes and 3 compute nodes.
I'm able to successfully deploy the overcloud in GRE-tunnel mode by issuing the following command: openstack overcloud deploy --control-scale 3 --compute-scale 3 --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e ~/the-cloud/environments/puppet-pacemaker.yaml -e ~/the-cloud/environments/network-isolation.yaml -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e ~/the-cloud/environments/network-environment.yaml --control-flavor controller --compute-flavor compute Now I want to use VLAN in tenant networks, disabling the GRE-tunnels. I ran the following command to deploy the overcloud in VLAN mode: openstack overcloud deploy --control-scale 3 --compute-scale 3 --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e ~/the-cloud/environments/puppet-pacemaker.yaml -e ~/the-cloud/environments/network-isolation.yaml -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e ~/the-cloud/environments/network-environment.yaml --control-flavor controller --compute-flavor compute --neutron-network-type vlan --neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges datacentre:1000:10009 However I always get the following error: ERROR: openstack Neutron tunnel types must be specified when Neutron network type is specified I think this problem might be related to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1244893 Do you have any suggestions on how to solve this problem? Thank you very much! Regards, -- Nuno Loureiro Research & Development Phone: +351 256 370 980 Email: nuno.loureiro at itcenter.com.pt www.itcenter.com.pt [image: ITCENTER Store] [image: ITCENTER Helpdesk] [image: ITCENTER Facebook] [image: ITCENTER Linkedin] [image: ITCENTER Twitter] -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibravo at ltgfederal.com Tue Oct 27 17:34:43 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Tue, 27 Oct 2015 13:34:43 -0400 Subject: [Rdo-list] HA Overcloud deployment with network isolation in VLAN mode In-Reply-To: References: Message-ID: <8E0D529E-6023-41F7-B616-DE4A4C7A9A8B@ltgfederal.com> Try the following: --neutron-network-type vxlan --neutron-tunnel-types vxlan __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com > On Oct 27, 2015, at 1:10 PM, Nuno Loureiro wrote: > > > Hi all! > > I'm deploying an HA overcloud with 3 controller nodes and 3 compute nodes. > > I'm able to successfully deploy the overcloud in GRE-tunnel mode by issuing the following command: > > openstack overcloud deploy --control-scale 3 --compute-scale 3 --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e ~/the-cloud/environments/puppet-pacemaker.yaml -e ~/the-cloud/environments/network-isolation.yaml -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e ~/the-cloud/environments/network-environment.yaml --control-flavor controller --compute-flavor compute > > > Now I want to use VLAN in tenant networks, disabling the GRE-tunnels. 
> I ran the following command to deploy the overcloud in VLAN mode:
>
> openstack overcloud deploy --control-scale 3 --compute-scale 3 --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e ~/the-cloud/environments/puppet-pacemaker.yaml -e ~/the-cloud/environments/network-isolation.yaml -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e ~/the-cloud/environments/network-environment.yaml --control-flavor controller --compute-flavor compute --neutron-network-type vlan --neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges datacentre:1000:10009
>
> However I always get the following error:
> ERROR: openstack Neutron tunnel types must be specified when Neutron network type is specified
>
> I think this problem might be related to this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1244893
>
> Do you have any suggestions on how to solve this problem?
>
> Thank you very much!
> Regards,
> --
> Nuno Loureiro
> Research & Development
> Phone: +351 256 370 980
> Email: nuno.loureiro at itcenter.com.pt
>
> www.itcenter.com.pt
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From erming at ualberta.ca  Tue Oct 27 17:15:20 2015
From: erming at ualberta.ca (Erming Pei)
Date: Tue, 27 Oct 2015 11:15:20 -0600
Subject: [Rdo-list] [rdo-manager] No route to host issue and network isolation
In-Reply-To: <5620301D.7020209@redhat.com>
References: <561ED146.2070606@ualberta.ca> <561EE408.6030302@redhat.com> <5620068E.1020202@ualberta.ca> <5620301D.7020209@redhat.com>
Message-ID: <562FB128.8040402@ualberta.ca>

Hi Dan, et al,

   Thanks for all your warmhearted help. I followed Dan's suggestions as well as others' and tried again; now I see an explicit error in the overcloud: "Connection aborted.', error(113, 'No route to host')". (Below are the error messages and information.)

   I think it could be because of insufficient network configuration. It looks like the proof-of-concept use case doesn't work for me, as I changed some settings, such as the undercloud.conf (e.g., all the IPs, etc.). So next, I would try to do the network isolation. Is there a simple/concise example of the network configuration (network-environment.yaml)? I know that there is a guide for this, but I've lost myself in the ocean of information. It would be best to start from a basic configuration, if you could help.

Thanks,
Erming


$ openstack overcloud deploy --timeout 90 --ntp-server time1.srv.ualberta.ca -e /home/stack/storage-environment.yaml --neutron-public-interface enp1s0f1 --templates

Oct 27 16:16:58 overcloud-controller-0 os-collect-config[12030]: 2015-10-27 16:16:58.626 12030 WARNING os-collect-config [-] Source [ec2] Unavailable.
Oct 27 16:16:58 overcloud-controller-0 os-collect-config[12030]: 2015-10-27 16:16:58.626 12030 WARNING os_collect_config.ec2 [-] ('Connection aborted.', error(113, 'No route to host'))
Oct 27 16:16:25 overcloud-controller-0 os-collect-config[12030]: 2015-10-27 16:16:25.586 12030 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
Oct 27 16:16:25 overcloud-controller-0 os-collect-config[12030]: 2015-10-27 16:16:25.586 12030 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found.
Skipping Oct 27 16:16:25 overcloud-controller-0 os-collect-config[12030]: 2015-10-27 16:16:25.586 12030 WARNING os-collect-config [-] Source [request] Unavailable. Oct 27 16:16:25 overcloud-controller-0 os-collect-config[12030]: 2015-10-27 16:16:25.586 12030 WARNING os_collect_config.request [-] No metadata_url configured. Oct 27 16:16:25 overcloud-controller-0 os-collect-config[12030]: 2015-10-27 16:16:25.585 12030 WARNING os_collect_config.heat [-] No auth_url configured. [heat-admin at overcloud-controller-0 ~]$ ifconfig br-ex: flags=4163 mtu 1500 inet6 fe80::225:90ff:fe33:a63f prefixlen 64 scopeid 0x20 ether 00:25:90:33:a6:3f txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 75 bytes 23562 (23.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp1s0f0: flags=4163 mtu 1500 inet 10.0.6.77 netmask 255.255.0.0 broadcast 10.0.255.255 inet6 fe80::225:90ff:fe33:a63e prefixlen 64 scopeid 0x20 ether 00:25:90:33:a6:3e txqueuelen 1000 (Ethernet) RX packets 44760 bytes 16134789 (15.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 30109 bytes 3840484 (3.6 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device memory 0xfade0000-fadfffff enp1s0f1: flags=4099 mtu 1500 ether 00:25:90:33:a6:3f txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device memory 0xfad20000-fad3ffff ib0: flags=4163 mtu 2044 inet6 fe80::202:c902:24:4045 prefixlen 64 scopeid 0x20 Infiniband hardware address can be incorrect! Please read BUGS section in ifconfig(8). infiniband 80:00:04:04:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00 txqueuelen 256 (InfiniBand) RX packets 161572 bytes 9048032 (8.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 17 bytes 5380 (5.2 KiB) TX errors 0 dropped 7 overruns 0 carrier 0 collisions 0 lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 0 (Local Loopback) RX packets 134 bytes 11552 (11.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 134 bytes 11552 (11.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [heat-admin at overcloud-controller-0 ~]$ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.0.6.40 0.0.0.0 UG 0 0 0 enp1s0f0 10.0.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp1s0f0 169.254.169.254 10.0.6.40 255.255.255.255 UGH 0 0 0 enp1s0f0 [heat-admin at overcloud-controller-0 ~]$ sudo ovs-vsctl show 1cf8f4ac-fd60-43a3-84cd-6259387e44c9 Bridge br-ex Port "enp1s0f1" Interface "enp1s0f1" Port br-ex Interface br-ex type: internal ovs_version: "2.3.1" [heat-admin at overcloud-controller-0 ~]$ systemctl -l ... 
ceph.service loaded active exited LSB: Start Ceph distributed file system daemons at boot time chronyd.service loaded active running NTP client/server cloud-config.service loaded active exited Apply the settings specified in cloud-config cloud-final.service loaded failed failed Execute cloud user/final scripts cloud-init-local.service loaded active exited Initial cloud-init job (pre-networking) cloud-init.service loaded active exited Initial cloud-init job (metadata service crawler) crond.service loaded active running Command Scheduler dbus.service loaded active running D-Bus System Message Bus dhcp-interface at br-ex.service loaded failed failed DHCP interface br/ex dhcp-interface at enp1s0f0.service loaded active exited DHCP interface enp1s0f0 dhcp-interface at enp1s0f1.service loaded failed failed DHCP interface enp1s0f1 dhcp-interface at ib0.service loaded failed failed DHCP interface ib0 dhcp-interface at ovs-system.service loaded failed failed DHCP interface ovs/system getty at tty1.service loaded active running Getty on tty1 ipmievd.service loaded active running Ipmievd Daemon irqbalance.service loaded active running irqbalance daemon iscsi-shutdown.service loaded active exited Logout off all iSCSI sessions on shutdown kdump.service loaded active exited Crash recovery kernel arming kmod-static-nodes.service loaded active exited Create list of required static device nodes for the current kerne ksm.service loaded active exited Kernel Samepage Merging ksmtuned.service loaded active running Kernel Samepage Merging (KSM) Tuning Daemon libvirtd.service loaded active running Virtualization daemon lvm2-lvmetad.service loaded active running LVM2 metadata daemon lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or prog netcf-transaction.service loaded active exited Rollback uncommitted netcf network config change transactions network.service loaded active exited LSB: Bring up/down networking openvswitch-nonetwork.service loaded active exited Open vSwitch Internal Unit openvswitch.service loaded active exited Open vSwitch os-collect-config.service loaded active running Collect metadata and run hook commands. On 10/15/15, 5:00 PM, Dan Sneddon wrote: > If you are doing the most basic deployment (no network isolation) on > bare metal, you will want to specify which interface is your external > network interface. This will be the interface that gets attached to > br-ex (this defaults to your first interface, which may not be correct > in your case). In the basic deployment, this network will require a > DHCP server to give the controller an address, and if you want to use > floating IPs you will need a range of IPs that are free (won't be > assigned to other hosts by DHCP server). 
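> 
> (A rough way to spot that interface, sketch only: an address handed out
> by DHCP shows up flagged "dynamic" in the ip output, so something like
> 
>     ip -4 addr show | grep -B2 dynamic
> 
> will point at it.)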
> > So, if your external interface is 'enp11s0f0' (just guessing, since > that one actually has a DHCP address), then your command-line will be > at least: > > openstack overcloud deploy --templates \ > --neutron-public-interface enp11s0f0 > > But you should probably include a reference to an NTP server: > > openstack overcloud deploy --templates \ > --neutron-public-interface enp11s0f0 \ > --ntp-server pool.ntp.org > > Some other options to consider: > > --timeout > --debug > > Selecting tunnel type: > --neutron-network-type > --neutron-tunnel-types > -- --------------------------------------------- Erming Pei, Ph.D Senior System Analyst; Grid/Cloud Specialist Research Computing Group Information Services & Technology University of Alberta, Canada Tel: +1 7804929914 Fax: +1 7804921729 --------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Tue Oct 27 23:08:00 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 28 Oct 2015 08:08:00 +0900 Subject: [Rdo-list] Reminder: RDO Community Meetup today 12:45 in Tokyo Message-ID: <563003D0.7020704@redhat.com> A reminder that the RDO community meetup at OpenStack Summit will be in the Sakura tower at 12:45 during the lunch break. Get your lunch and bring it along. The agenda for today's meeting may be found at https://etherpad.openstack.org/p/rdo-tokyo Please add the items that you're interested in discussing. Thanks! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From marius at remote-lab.net Wed Oct 28 01:59:43 2015 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 28 Oct 2015 02:59:43 +0100 Subject: [Rdo-list] HA Overcloud deployment with network isolation in VLAN mode In-Reply-To: References: Message-ID: On Tue, Oct 27, 2015 at 6:10 PM, Nuno Loureiro < nuno.loureiro at itcenter.com.pt> wrote: > > Hi all! > > I'm deploying an HA overcloud with 3 controller nodes and 3 compute nodes. > > I'm able to successfully deploy the overcloud in GRE-tunnel mode by > issuing the following command: > > openstack overcloud deploy --control-scale 3 --compute-scale 3 > --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e > ~/the-cloud/environments/puppet-pacemaker.yaml -e > ~/the-cloud/environments/network-isolation.yaml -e > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > ~/the-cloud/environments/network-environment.yaml --control-flavor > controller --compute-flavor compute > > > Now I want to use VLAN in tenant networks, disabling the GRE-tunnels. 
> I ran the following command to deploy the overcloud in VLAN mode:
>
> openstack overcloud deploy --control-scale 3 --compute-scale 3
> --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e
> ~/the-cloud/environments/puppet-pacemaker.yaml -e
> ~/the-cloud/environments/network-isolation.yaml -e
> ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e
> ~/the-cloud/environments/network-environment.yaml --control-flavor
> controller --compute-flavor compute --neutron-network-type vlan
> --neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges
> datacentre:1000:10009
>
> However I always get the following error:
>
> ERROR: openstack Neutron tunnel types must be specified when Neutron network type is specified
>
> I think this problem might be related to this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1244893
>

Yes, the error relates to that bug: you should pass both
--neutron-network-type vlan and --neutron-tunnel-types vlan for the
command to go through.

In addition to this you should also make sure that the network templates
match your physical environment, that you provide a valid vlan range
(10009 is not a valid vlan tag), and that the switch configuration is in
place.

> Do you have any suggestions on how to solve this problem?
>
> Thank you very much!
> Regards,
> --
> Nuno Loureiro
> Research & Development
> Phone: +351 256 370 980
> Email: nuno.loureiro at itcenter.com.pt
> www.itcenter.com.pt
> [image: ITCENTER Store] [image: ITCENTER Helpdesk]
> [image: ITCENTER Facebook] [image: ITCENTER Linkedin] [image: ITCENTER Twitter]
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dsneddon at redhat.com  Wed Oct 28 03:03:23 2015
From: dsneddon at redhat.com (Dan Sneddon)
Date: Tue, 27 Oct 2015 23:03:23 -0400 (EDT)
Subject: [Rdo-list] HA Overcloud deployment with network isolation in VLAN mode
In-Reply-To: 
References: 
Message-ID: <1711148368.3269213.1446001403321.JavaMail.zimbra@redhat.com>
> > I ran the following command to deploy the overcloud in VLAN mode: > > openstack overcloud deploy --control-scale 3 --compute-scale 3 --libvirt-type > kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e > ~/the-cloud/environments/puppet-pacemaker.yaml -e > ~/the-cloud/environments/network-isolation.yaml -e > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > ~/the-cloud/environments/network-environment.yaml --control-flavor > controller --compute-flavor compute --neutron-network-type vlan > --neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges > datacentre:1000:10009 > > However I always get the following error: > ERROR: openstack Neutron tunnel types must be specified when Neutron network > type is specified > > I think this problem might be related to this bug: > https://bugzilla.redhat.com/show_bug.cgi?id=1244893 > > Yes, the error relates to that bug: you should pass both > --neutron-network-type vlan and --neutron-tunnel-types vlan to pass. > > In addition to this you should also make sure that the network templates > match your physical environment, provide valid vlan range( 10009 is not a > valid vlan tag), switch configuration is in place. > > > > > Do you have any suggestions on how to solve this problem? > > Thank you very much! > Regards, > -- > Nuno Loureiro > > Research & Development > > Phone: +351 256 370 980 > > Email: nuno.loureiro at itcenter.com.pt > > www.itcenter.com.pt > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dsneddon at redhat.com Wed Oct 28 03:15:17 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 27 Oct 2015 23:15:17 -0400 (EDT) Subject: [Rdo-list] HA with network isolation on virt howto In-Reply-To: <181528904.65879741.1445956858269.JavaMail.zimbra@redhat.com> References: <181528904.65879741.1445956858269.JavaMail.zimbra@redhat.com> Message-ID: <711400958.3274360.1446002117368.JavaMail.zimbra@redhat.com> That looks correct. You can see an example of templates that use the provisioning interface for both provisioning and as a VLAN trunk, checkout the files in /usr/local/openstack-tripleo-heat-templates/network/config/single-nic-with-vlans You will see that that config adds the provisioning interface to a bridge, along with the VLANs that will run on top. This is the same as the approach Sasha presents. -Dan Sneddon ----- Original Message ----- > Hi, > IIUC, this can work with tagged vlan used for the internalapi. 
> > The relevant yaml file snippet (nic1 is where the provision and the > internalapi are for example): > type: ovs_bridge > name: br-nic1 > use_dhcp: false > dns_servers: {get_param: DnsServers} > addresses: > - > ip_netmask: > list_join: > - '/' > - - {get_param: ControlPlaneIp} > - {get_param: ControlPlaneSubnetCidr} > routes: > - > ip_netmask: 169.254.169.254/32 > next_hop: {get_param: EC2MetadataIp} > - > default: true > next_hop: {get_param: ControlPlaneDefaultRoute} > members: > - > type: interface > name: nic1 > primary: true > - > type: vlan > vlan_id: {get_param: InternalApiNetworkVlanID:} > addresses: > - > ip_netmask: {get_param: InternalApiIpSubnet} > > > The InternalApiNetworkVlanID (if different than 20, which is the defaut) has > to be set respectively. > > > > Best regards, > Sasha Chuzhoy. > > ----- Original Message ----- > > From: "Pedro Sousa" > > To: "Marius Cornea" > > Cc: "rdo-list" > > Sent: Tuesday, October 27, 2015 10:06:44 AM > > Subject: Re: [Rdo-list] HA with network isolation on virt howto > > > > Hi Marius, > > > > the reason is that I would like for example to use internalapi network with > > provisioning network in the same interface, and since provisioning doesn't > > use bridge I wondered if this it's possible. > > > > As I said, actually I was able to deploy overcloud with internalapi without > > the bridge but I had to specify the physical interface "device: enps0f0" on > > my heat template. > > > > Thanks > > Pedro Sousa > > > > On Tue, Oct 27, 2015 at 1:06 PM, Marius Cornea < marius at remote-lab.net > > > wrote: > > > > > > Hi Pedro, > > > > Afaik in order to use a vlan interface you need to set it as part of a > > bridge - it actually gets created as an internal port within the ovs > > bridge with the specified vlan tag. Is there any specific reason you > > don't want to use a bridge for this? > > > > I believe your understanding relates to the Neutron configuration. In > > regards to the network isolation the Tenant network relates to the > > network used for setting up the overlay networks tunnels ( which in > > turn will run the tenant networks created after deployment ). > > > > On Tue, Oct 27, 2015 at 12:06 PM, Pedro Sousa < pgsousa at gmail.com > wrote: > > > Hi Marius, > > > > > > I've tried to configure InternalAPI VLAN on the first interface that > > > doesn't > > > use a bridge, however it only seems to work if I define the physical > > > device > > > "enp1s0f0" like this: > > > > > > network_config: > > > - > > > type: interface > > > name: nic1 > > > use_dhcp: false > > > addresses: > > > - > > > ip_netmask: > > > list_join: > > > - '/' > > > - - {get_param: ControlPlaneIp} > > > - {get_param: ControlPlaneSubnetCidr} > > > routes: > > > - > > > ip_netmask: 169.254.169.254/32 > > > next_hop: {get_param: EC2MetadataIp} > > > - > > > type: vlan > > > device: enp1s0f0 > > > vlan_id: {get_param: InternalApiNetworkVlanID} > > > addresses: > > > - > > > ip_netmask: {get_param: InternalApiIpSubnet} > > > > > > > > > So my question is if it's possible to create a VLAN attached to interface > > > without using a bridge and specifying the physical device? > > > > > > My understanding is that you only require bridges when you use Tenant or > > > Floating networks, or is it supposed to work that way? 
> > > > > > Thanks, > > > Pedro Sousa > > > > > > > > > > > > > > > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea < marius at remote-lab.net > > > > wrote: > > >> > > >> Here's an adjusted controller.yaml which disables DHCP on the first > > >> nic: enp1s0f0 so it doesn't get an IP address > > >> http://paste.openstack.org/show/476981/ > > >> > > >> Please note that this assumes that your overcloud nodes are PXE > > >> booting on the 2nd NIC(basically disabling the 1st nic) > > >> > > >> Given your setup(I'm doing some assumptions here so I might be wrong) > > >> I would use the 1st nic for PXE booting and provisioning network and > > >> 2nd nic for running the isolated networks with this kind of template: > > >> http://paste.openstack.org/show/476986/ > > >> > > >> Let me know if it works for you. > > >> > > >> Thanks, > > >> Marius > > >> > > >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa < pgsousa at gmail.com > > > >> wrote: > > >> > Hi, > > >> > > > >> > here you go. > > >> > > > >> > Regards, > > >> > Pedro Sousa > > >> > > > >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea < > > >> > marius at remote-lab.net > > >> > > > > >> > wrote: > > >> >> > > >> >> Hi Pedro, > > >> >> > > >> >> One issue I can quickly see is that br-ex has assigned the same IP > > >> >> address as enp1s0f0. Can you post the nic templates you used for > > >> >> deployment? > > >> >> > > >> >> 2: enp1s0f0: mtu 1500 qdisc mq > > >> >> state > > >> >> UP qlen 1000 > > >> >> link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > > >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > > >> >> enp1s0f0 > > >> >> 9: br-ex: mtu 1500 qdisc noqueue > > >> >> state > > >> >> UNKNOWN > > >> >> link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > > >> >> inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > > >> >> > > >> >> Thanks, > > >> >> Marius > > >> >> > > >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa < pgsousa at gmail.com > > > >> >> wrote: > > >> >> > Hi Marius, > > >> >> > > > >> >> > I've followed your howto and managed to get overcloud deployed in > > >> >> > HA, > > >> >> > thanks. However I cannot login to it (via CLI or Horizon) : > > >> >> > > > >> >> > ERROR (Unauthorized): The request you have made requires > > >> >> > authentication. 
> > >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1) > > >> >> > > > >> >> > So I rebooted the controllers and now I cannot login through > > >> >> > Provisioning > > >> >> > network, seems some openvswitch bridge conf problem, heres my conf: > > >> >> > > > >> >> > # ip a > > >> >> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > >> >> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > >> >> > inet 127.0.0.1/8 scope host lo > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet6 ::1/128 scope host > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 2: enp1s0f0: mtu 1500 qdisc mq > > >> >> > state > > >> >> > UP > > >> >> > qlen 1000 > > >> >> > link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff > > >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic > > >> >> > enp1s0f0 > > >> >> > valid_lft 84562sec preferred_lft 84562sec > > >> >> > inet6 fe80::7ea2:3eff:fefb:2555/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 3: enp1s0f1: mtu 1500 qdisc mq > > >> >> > master > > >> >> > ovs-system state UP qlen 1000 > > >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > > >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 4: ovs-system: mtu 1500 qdisc noop state DOWN > > >> >> > link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff > > >> >> > 5: br-tun: mtu 1500 qdisc noop state DOWN > > >> >> > link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff > > >> >> > 6: vlan20: mtu 1500 qdisc noqueue > > >> >> > state > > >> >> > UNKNOWN > > >> >> > link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff > > >> >> > inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet6 fe80::e479:56ff:fe5d:7f2/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 7: vlan40: mtu 1500 qdisc noqueue > > >> >> > state > > >> >> > UNKNOWN > > >> >> > link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff > > >> >> > inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet6 fe80::e843:69ff:fec3:bfa2/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 8: vlan174: mtu 1500 qdisc > > >> >> > noqueue > > >> >> > state > > >> >> > UNKNOWN > > >> >> > link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff > > >> >> > inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 9: br-ex: mtu 1500 qdisc noqueue > > >> >> > state > > >> >> > UNKNOWN > > >> >> > link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff > > >> >> > inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet6 fe80::7ea2:3eff:fefb:2556/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 10: vlan50: mtu 1500 qdisc > > >> >> > noqueue > > >> >> > state > > >> >> > UNKNOWN > > >> >> > link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff > > >> >> > inet 10.0.20.10/24 brd 10.0.20.255 scope 
global vlan50 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet6 fe80::d815:7fff:feb9:724b/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 11: vlan30: mtu 1500 qdisc > > >> >> > noqueue > > >> >> > state > > >> >> > UNKNOWN > > >> >> > link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff > > >> >> > inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30 > > >> >> > valid_lft forever preferred_lft forever > > >> >> > inet6 fe80::78b3:4dff:fead:f172/64 scope link > > >> >> > valid_lft forever preferred_lft forever > > >> >> > 12: br-int: mtu 1500 qdisc noop state DOWN > > >> >> > link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff > > >> >> > > > >> >> > > > >> >> > # ovs-vsctl show > > >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101 > > >> >> > Bridge br-ex > > >> >> > Port br-ex > > >> >> > Interface br-ex > > >> >> > type: internal > > >> >> > Port "enp1s0f1" > > >> >> > Interface "enp1s0f1" > > >> >> > Port "vlan40" > > >> >> > tag: 40 > > >> >> > Interface "vlan40" > > >> >> > type: internal > > >> >> > Port "vlan20" > > >> >> > tag: 20 > > >> >> > Interface "vlan20" > > >> >> > type: internal > > >> >> > Port phy-br-ex > > >> >> > Interface phy-br-ex > > >> >> > type: patch > > >> >> > options: {peer=int-br-ex} > > >> >> > Port "vlan50" > > >> >> > tag: 50 > > >> >> > Interface "vlan50" > > >> >> > type: internal > > >> >> > Port "vlan30" > > >> >> > tag: 30 > > >> >> > Interface "vlan30" > > >> >> > type: internal > > >> >> > Port "vlan174" > > >> >> > tag: 174 > > >> >> > Interface "vlan174" > > >> >> > type: internal > > >> >> > Bridge br-int > > >> >> > fail_mode: secure > > >> >> > Port br-int > > >> >> > Interface br-int > > >> >> > type: internal > > >> >> > Port patch-tun > > >> >> > Interface patch-tun > > >> >> > type: patch > > >> >> > options: {peer=patch-int} > > >> >> > Port int-br-ex > > >> >> > Interface int-br-ex > > >> >> > type: patch > > >> >> > options: {peer=phy-br-ex} > > >> >> > Bridge br-tun > > >> >> > fail_mode: secure > > >> >> > Port "gre-0a00140b" > > >> >> > Interface "gre-0a00140b" > > >> >> > type: gre > > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > > >> >> > out_key=flow, remote_ip="10.0.20.11"} > > >> >> > Port patch-int > > >> >> > Interface patch-int > > >> >> > type: patch > > >> >> > options: {peer=patch-tun} > > >> >> > Port "gre-0a00140d" > > >> >> > Interface "gre-0a00140d" > > >> >> > type: gre > > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > > >> >> > out_key=flow, remote_ip="10.0.20.13"} > > >> >> > Port "gre-0a00140c" > > >> >> > Interface "gre-0a00140c" > > >> >> > type: gre > > >> >> > options: {df_default="true", in_key=flow, local_ip="10.0.20.10", > > >> >> > out_key=flow, remote_ip="10.0.20.12"} > > >> >> > Port br-tun > > >> >> > Interface br-tun > > >> >> > type: internal > > >> >> > ovs_version: "2.4.0" > > >> >> > > > >> >> > Regards, > > >> >> > Pedro Sousa > > >> >> > > > >> >> > > > >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea > > >> >> > < marius at remote-lab.net > > > >> >> > wrote: > > >> >> >> > > >> >> >> Hi everyone, > > >> >> >> > > >> >> >> I wrote a blog post about how to deploy a HA with network > > >> >> >> isolation > > >> >> >> overcloud on top of the virtual environment. 
I tried to provide > > >> >> >> some > > >> >> >> insights into what instack-virt-setup creates and how to use the > > >> >> >> network isolation templates in the virtual environment. I hope you > > >> >> >> find it useful. > > >> >> >> > > >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/ > > >> >> >> > > >> >> >> Thanks, > > >> >> >> Marius > > >> >> >> > > >> >> >> _______________________________________________ > > >> >> >> Rdo-list mailing list > > >> >> >> Rdo-list at redhat.com > > >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list > > >> >> >> > > >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > >> >> > > > >> >> > > > >> > > > >> > > > > > > > >
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From nuno.loureiro at itcenter.com.pt  Wed Oct 28 11:45:28 2015
From: nuno.loureiro at itcenter.com.pt (Nuno Loureiro)
Date: Wed, 28 Oct 2015 11:45:28 +0000
Subject: [Rdo-list] HA Overcloud deployment with network isolation in VLAN mode
In-Reply-To: <1711148368.3269213.1446001403321.JavaMail.zimbra@redhat.com>
References: <1711148368.3269213.1446001403321.JavaMail.zimbra@redhat.com>
Message-ID: 

Hi all!

Thank you for your replies.

The VLAN range I posted in my email was a typo. The command I used had the
correct range 1000:1009. The network templates were configured according to
my physical network.

I followed Marius's suggestion and used the following command to deploy the
overcloud stack:

openstack overcloud deploy --control-scale 3 --compute-scale 3
--libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e
~/the-cloud/environments/puppet-pacemaker.yaml -e
~/the-cloud/environments/network-isolation.yaml -e
~/the-cloud/environments/net-single-nic-with-vlans.yaml -e
~/the-cloud/environments/network-environment.yaml --control-flavor
controller --compute-flavor compute --neutron-network-type vlan
--neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges
datacentre:1000:1009 --neutron-tunnel-types vlan

The overcloud stack was deployed in VLAN mode without the GRE-tunnels.

However, it doesn't work correctly because the neutron-openvswitch-agent
is unable to start, as it doesn't recognize "vlan" as a tunnel mode.

I captured the following error in /var/log/neutron/openvswitch-agent.log

2015-10-28 11:35:53.057 160431 WARNING oslo_config.cfg [-] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".
2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Agent failed to create agent config map 2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last): 2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1894, in main 2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent agent_config = create_agent_config_map(cfg.CONF) 2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1863, in create_agent_config_map 2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent raise ValueError(msg) 2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ValueError: (u'Invalid tunnel type specified: %s', 'vlan') 2015-10-28 11:35:53.058 160431 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 2015-10-28 11:36:19.023 161519 WARNING oslo_config.cfg [-] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency". 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Agent failed to create agent config map 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last): 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1894, in main 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent agent_config = create_agent_config_map(cfg.CONF) 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1863, in create_agent_config_map 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent raise ValueError(msg) 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ValueError: (u'Invalid tunnel type specified: %s', 'vlan') 2015-10-28 11:36:19.024 161519 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent So the --neutron-tunnel-types vlan command allows the deployment to pass the CLI bug but creates a problem in neutron configuration. Do you have any other suggestions? Thank you very much! Regards, On Wed, Oct 28, 2015 at 3:03 AM, Dan Sneddon wrote: > This isn't entirely correct. The --neutron-tunnel-types parameter only > takes [vxlan|gre], and should not be > required when --neutron-network-type is 'vlan'. However, a bug is making > the CLI require this parameter in error. 
> > Nuno, there is a bug on this behavior, feel free to add your comments or > more information: > https://bugzilla.redhat.com/show_bug.cgi?id=1244893 > > -Dan Sneddon > > ----- Original Message ----- > > > > On Tue, Oct 27, 2015 at 6:10 PM, Nuno Loureiro < > > nuno.loureiro at itcenter.com.pt > wrote: > > > > > > > > > > Hi all! > > > > I'm deploying an HA overcloud with 3 controller nodes and 3 compute > nodes. > > > > I'm able to successfully deploy the overcloud in GRE-tunnel mode by > issuing > > the following command: > > > > openstack overcloud deploy --control-scale 3 --compute-scale 3 > --libvirt-type > > kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e > > ~/the-cloud/environments/puppet-pacemaker.yaml -e > > ~/the-cloud/environments/network-isolation.yaml -e > > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > > ~/the-cloud/environments/network-environment.yaml --control-flavor > > controller --compute-flavor compute > > > > > > Now I want to use VLAN in tenant networks, disabling the GRE-tunnels. > > > > I ran the following command to deploy the overcloud in VLAN mode: > > > > openstack overcloud deploy --control-scale 3 --compute-scale 3 > --libvirt-type > > kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e > > ~/the-cloud/environments/puppet-pacemaker.yaml -e > > ~/the-cloud/environments/network-isolation.yaml -e > > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > > ~/the-cloud/environments/network-environment.yaml --control-flavor > > controller --compute-flavor compute --neutron-network-type vlan > > --neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges > > datacentre:1000:10009 > > > > However I always get the following error: > > ERROR: openstack Neutron tunnel types must be specified when Neutron > network > > type is specified > > > > I think this problem might be related to this bug: > > https://bugzilla.redhat.com/show_bug.cgi?id=1244893 > > > > Yes, the error relates to that bug: you should pass both > > --neutron-network-type vlan and --neutron-tunnel-types vlan to pass. > > > > In addition to this you should also make sure that the network templates > > match your physical environment, provide valid vlan range( 10009 is not a > > valid vlan tag), switch configuration is in place. > > > > > > > > > > Do you have any suggestions on how to solve this problem? > > > > Thank you very much! > > Regards, > > -- > > Nuno Loureiro > > > > Research & Development > > > > Phone: +351 256 370 980 > > > > Email: nuno.loureiro at itcenter.com.pt > > > > www.itcenter.com.pt > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Nuno Loureiro Research & Development Phone: +351 256 370 980 Email: nuno.loureiro at itcenter.com.pt www.itcenter.com.pt [image: ITCENTER Store] [image: ITCENTER Helpdesk] [image: ITCENTER Facebook] [image: ITCENTER Linkedin] [image: ITCENTER Twitter] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marius at remote-lab.net Wed Oct 28 15:08:53 2015 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 28 Oct 2015 16:08:53 +0100 Subject: [Rdo-list] HA Overcloud deployment with network isolation in VLAN mode In-Reply-To: References: <1711148368.3269213.1446001403321.JavaMail.zimbra@redhat.com> Message-ID: I believe this is exactly what Dan was referring to (valid tunnel types are only gre or vxlan). Could you try setting tunnel_types in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to gre or vxlan and run pcs resource restart neutron-openvswitch-agent? This should get the ovs agent started, but please note that on a subsequent run of the deploy command the tunnel_types will get overwritten. On Wed, Oct 28, 2015 at 12:45 PM, Nuno Loureiro < nuno.loureiro at itcenter.com.pt> wrote: > Hi all! > > Thank you for your replies. > > The VLAN range I posted on my email was a typo. The command I used had the > correct range 1000:1009. The network templates were configured according to > my physical network. > > > I followed Marius's suggestion and used the following command to deploy the > overcloud stack: > > openstack overcloud deploy --control-scale 3 --compute-scale 3 > --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e > ~/the-cloud/environments/puppet-pacemaker.yaml -e > ~/the-cloud/environments/network-isolation.yaml -e > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e > ~/the-cloud/environments/network-environment.yaml --control-flavor > controller --compute-flavor compute --neutron-network-type vlan > --neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges > datacentre:1000:1009 --neutron-tunnel-types vlan > > > The overcloud stack was deployed in VLAN mode without the GRE-tunnels. > > However, it doesn't work correctly because the neutron-openvswitch-agent > is unable to start as it doesn't recognize "vlan" as a tunnel mode. > > I captured the following error in /var/log/neutron/openvswitch-agent.log > > 2015-10-28 11:35:53.057 160431 WARNING oslo_config.cfg [-] Option > "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from > group "oslo_concurrency".
> 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Agent > failed to create agent config map > 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback > (most recent call last): > 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File > "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", > line 1894, in main > 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent > agent_config = create_agent_config_map(cfg.CONF) > 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File > "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", > line 1863, in create_agent_config_map > 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent raise > ValueError(msg) > 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ValueError: > (u'Invalid tunnel type specified: %s', 'vlan') > 2015-10-28 11:35:53.058 160431 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent > 2015-10-28 11:36:19.023 161519 WARNING oslo_config.cfg [-] Option > "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from > group "oslo_concurrency". > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Agent > failed to create agent config map > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback > (most recent call last): > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File > "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", > line 1894, in main > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent > agent_config = create_agent_config_map(cfg.CONF) > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File > "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", > line 1863, in create_agent_config_map > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent raise > ValueError(msg) > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ValueError: > (u'Invalid tunnel type specified: %s', 'vlan') > 2015-10-28 11:36:19.024 161519 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent > > > So the --neutron-tunnel-types vlan command allows the deployment to pass > the CLI bug but creates a problem in neutron configuration. > > Do you have any other suggestions? > > Thank you very much! > Regards, > > > On Wed, Oct 28, 2015 at 3:03 AM, Dan Sneddon wrote: > >> This isn't entirely correct. The --neutron-tunnel-types parameter only >> takes [vxlan|gre], and should not be >> required when --neutron-network-type is 'vlan'. However, a bug is making >> the CLI require this parameter in error. 
>> >> Nuno, there is a bug on this behavior, feel free to add your comments or >> more information: >> https://bugzilla.redhat.com/show_bug.cgi?id=1244893 >> >> -Dan Sneddon >> >> ----- Original Message ----- >> > >> > On Tue, Oct 27, 2015 at 6:10 PM, Nuno Loureiro < >> > nuno.loureiro at itcenter.com.pt > wrote: >> > >> > >> > >> > >> > Hi all! >> > >> > I'm deploying an HA overcloud with 3 controller nodes and 3 compute >> nodes. >> > >> > I'm able to successfully deploy the overcloud in GRE-tunnel mode by >> issuing >> > the following command: >> > >> > openstack overcloud deploy --control-scale 3 --compute-scale 3 >> --libvirt-type >> > kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e >> > ~/the-cloud/environments/puppet-pacemaker.yaml -e >> > ~/the-cloud/environments/network-isolation.yaml -e >> > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e >> > ~/the-cloud/environments/network-environment.yaml --control-flavor >> > controller --compute-flavor compute >> > >> > >> > Now I want to use VLAN in tenant networks, disabling the GRE-tunnels. >> > >> > I ran the following command to deploy the overcloud in VLAN mode: >> > >> > openstack overcloud deploy --control-scale 3 --compute-scale 3 >> --libvirt-type >> > kvm --ntp-server pool.ntp.org --templates ~/the-cloud/ -e >> > ~/the-cloud/environments/puppet-pacemaker.yaml -e >> > ~/the-cloud/environments/network-isolation.yaml -e >> > ~/the-cloud/environments/net-single-nic-with-vlans.yaml -e >> > ~/the-cloud/environments/network-environment.yaml --control-flavor >> > controller --compute-flavor compute --neutron-network-type vlan >> > --neutron-bridge-mappings datacentre:br-ex --neutron-network-vlan-ranges >> > datacentre:1000:10009 >> > >> > However I always get the following error: >> > ERROR: openstack Neutron tunnel types must be specified when Neutron >> network >> > type is specified >> > >> > I think this problem might be related to this bug: >> > https://bugzilla.redhat.com/show_bug.cgi?id=1244893 >> > >> > Yes, the error relates to that bug: you should pass both >> > --neutron-network-type vlan and --neutron-tunnel-types vlan to pass. >> > >> > In addition to this you should also make sure that the network templates >> > match your physical environment, provide valid vlan range( 10009 is not >> a >> > valid vlan tag), switch configuration is in place. >> > >> > >> > >> > >> > Do you have any suggestions on how to solve this problem? >> > >> > Thank you very much! >> > Regards, >> > -- >> > Nuno Loureiro >> > >> > Research & Development >> > >> > Phone: +351 256 370 980 >> > >> > Email: nuno.loureiro at itcenter.com.pt >> > >> > www.itcenter.com.pt >> > >> > >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > -- > Nuno Loureiro > Research & Development > Phone: +351 256 370 980 > Email: nuno.loureiro at itcenter.com.pt > www.itcenter.com.pt > [image: ITCENTER Store] [image: ITCENTER > Helpdesk] > > [image: ITCENTER Facebook] [image: > ITCENTER Linkedin] [image: > ITCENTER Twitter] > > -------------- next part -------------- An HTML attachment was scrubbed... 
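(A minimal sketch of Marius's workaround from this thread, assuming the pacemaker resource is really named neutron-openvswitch-agent and using vxlan as the placeholder type; the sed one-liner is hypothetical, any editor does the same job:

sudo sed -i 's/^tunnel_types *=.*/tunnel_types = vxlan/' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
sudo pcs resource restart neutron-openvswitch-agent

Two caveats: once a tunnel type is set, the agent will also expect a usable local_ip on that host, and, as Marius notes, the next overcloud deploy run rewrites tunnel_types again, so this only keeps the agent alive until the real fix lands.)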
URL: From tom at buskey.name Wed Oct 28 17:17:16 2015 From: tom at buskey.name (Tom Buskey) Date: Wed, 28 Oct 2015 13:17:16 -0400 Subject: [Rdo-list] Kilo Horizon session timeout and cookie Message-ID: If you stay logged into the Horizon dashboard, it'll timeout. You cannot login until you delete the cookies in your browser for the Horizon server. For CentOS 7 (and probably RHEL 7 as well), python-django-openstack-auth-1.2.0-4.el7.noarch.rpm is the installed rpm from the RDO repo. Supposedly there is an updated rpm ( https://bugzilla.redhat.com/show_bug.cgi?id=1218894) which is 1.2.0-6. Are there any plans to put an updated rpm on the repo? Do I have to spin or patch my own rpm to get Kilo working? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Oct 28 21:00:30 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 29 Oct 2015 06:00:30 +0900 Subject: [Rdo-list] Fwd: Call for Proposals: Virtualization & IaaS Devroom at FOSDEM 2016 In-Reply-To: <5631162B.7070900@redhat.com> References: <5631162B.7070900@redhat.com> Message-ID: <5631376E.1070505@redhat.com> for those considering coming to FOSDEM, please also consider submitting talks to the IaaS devroom. -------- Forwarded Message -------- Subject: Call for Proposals: Virtualization & IaaS Devroom at FOSDEM 2016 Date: Wed, 28 Oct 2015 19:38:35 +0100 From: Mikey Ariel Organization: Red Hat, Inc. To: rhev-tech at redhat.com, fosdem-planning at redhat.com, osas at redhat.com, rhev-devel at redhat.com I'm happy to announce that the call for proposals is now open for the virtualization and Infrastructure-as-a-Service devroom at FOSDEM 2016. See the full text in this blog post: http://community.redhat.com/blog/2015/10/call-for-proposals-fosdem16-virtualization-iaas-devroom/ Please don't hesitate to ping me with any questions, as well as share this post with people or projects that you think might be interested. Cheers, Mikey -- Mikey Ariel Community Lead, oVirt www.ovirt.org "To be is to do" (Socrates) "To do is to be" (Jean-Paul Sartre) "Do be do be do" (Frank Sinatra) Mobile: +420-702-131-141 IRC: mariel / thatdocslady Twitter: @ThatDocsLady From apevec at gmail.com Wed Oct 28 21:44:54 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 28 Oct 2015 22:44:54 +0100 Subject: [Rdo-list] Kilo Horizon session timeout and cookie In-Reply-To: References: Message-ID: > there is an updated rpm > (https://bugzilla.redhat.com/show_bug.cgi?id=1218894) which is 1.2.0-6. > > Are there any plans to put an updated rpm on the repo? Do I have to spin or > patch my own rpm to get Kilo working? You can grab the update from the testing repo http://cbs.centos.org/repos/cloud7-openstack-kilo-testing/x86_64/os/Packages/python-django-openstack-auth-1.2.0-6.el7.noarch.rpm Testing repo also has 2015.1.2 updates, they all should be signed and pushed in a batch update tomorrow. Cheers, Alan From mrunge at redhat.com Wed Oct 28 23:38:03 2015 From: mrunge at redhat.com (Matthias Runge) Date: Thu, 29 Oct 2015 00:38:03 +0100 Subject: [Rdo-list] Kilo Horizon session timeout and cookie In-Reply-To: References: Message-ID: <56315C5B.9000605@redhat.com> On 28/10/15 18:17, Tom Buskey wrote: > If you stay logged into the Horizon dashboard, it'll timeout. > > You cannot login until you delete the cookies in your browser for the > Horizon server. > > For CentOS 7 (and probably RHEL 7 as > well), python-django-openstack-auth-1.2.0-4.el7.noarch.rpm > is > the installed rpm from the RDO repo. 
Supposedly there is an updated rpm > (https://bugzilla.redhat.com/show_bug.cgi?id=1218894) which is 1.2.0-6. > > Are there any plans to put an updated rpm on the repo? Do I have to > spin or patch my own rpm to get Kilo working? > Alan mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1218894#c22 both required builds are in RDO kilo testing repo. Matthias From mohammed.arafa at gmail.com Thu Oct 29 01:21:40 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 28 Oct 2015 21:21:40 -0400 Subject: [Rdo-list] [rdo-manager] liberty rdo rpm Message-ID: the documentation states to use rdo-release-liberty.rpm thats missing from the website. so you cant even install and upgrade what course of action does this require? restore the rpm file or update the docs to use rdo-release-liberty-2.rpm? -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Thu Oct 29 01:41:14 2015 From: marius at remote-lab.net (Marius Cornea) Date: Thu, 29 Oct 2015 02:41:14 +0100 Subject: [Rdo-list] [rdo-manager] liberty rdo rpm In-Reply-To: References: Message-ID: Hi Mohammed, The rdo-release-liberty.rpm URL appears to redirect to rdo-release-liberty-2.noarch.rpm so the URL mentioned in the docs should work fine: curl -I -L http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm HTTP/1.1 302 Found Date: Thu, 29 Oct 2015 01:33:54 GMT Server: Apache Location: https://www.rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm Content-Type: text/html; charset=iso-8859-1 HTTP/1.1 301 Moved Permanently Date: Thu, 29 Oct 2015 01:33:54 GMT Server: Apache Location: https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty.rpm Content-Type: text/html; charset=iso-8859-1 HTTP/1.1 307 Temporary Redirect Date: Thu, 29 Oct 2015 01:33:56 GMT Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips Strict-Transport-Security: max-age=15768000; includeSubDomains; preload Location: http://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm Content-Type: text/html; charset=iso-8859-1 HTTP/1.1 302 Found Date: Thu, 29 Oct 2015 01:33:56 GMT Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips Location: https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm Content-Type: text/html; charset=iso-8859-1 HTTP/1.1 200 OK Date: Thu, 29 Oct 2015 01:33:56 GMT Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips Strict-Transport-Security: max-age=15768000; includeSubDomains; preload Last-Modified: Wed, 21 Oct 2015 17:51:35 GMT ETag: "1494-522a1078837c0" Accept-Ranges: bytes Content-Length: 5268 AppTime: D=294 AppServer: people01.fedoraproject.org X-GitProject: (null) Content-Type: application/x-rpm On Thu, Oct 29, 2015 at 2:21 AM, Mohammed Arafa wrote: > the documentation states to use rdo-release-liberty.rpm > thats missing from the website. so you cant even install and upgrade > > what course of action does this require? restore the rpm file or update > the docs to use rdo-release-liberty-2.rpm? 
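(Back on Tom's Horizon cookie question above: yum can install straight from a package URL, so pulling the fixed build from the CBS testing repo that Alan points at should be a one-liner — an untested sketch, and the matching horizon build Matthias mentions would go in the same way:

sudo yum install http://cbs.centos.org/repos/cloud7-openstack-kilo-testing/x86_64/os/Packages/python-django-openstack-auth-1.2.0-6.el7.noarch.rpm
sudo systemctl restart httpd

Restarting httpd makes the Horizon WSGI processes pick up the new code; the stale cookies still need to be cleared in the browser once after the upgrade.)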
> > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kambiz at redhat.com Thu Oct 29 01:56:51 2015 From: kambiz at redhat.com (Kambiz Aghaiepour) Date: Wed, 28 Oct 2015 21:56:51 -0400 Subject: [Rdo-list] TryStack Outage Report 2015-10-28 Message-ID: <20151029015651.GF30187@redhat.com> TryStack Outage Wednesday Oct 28, 2015 Impact - Earlier Wednesday morning, TryStack ( http://x86.trystack.org/ ) experienced an outage for several hours beginning in the early hours of the day. The outage impacted all tenants and appears to have been caused due to exhaustion of services related to tenant networks building up over the course of several months. In order to return services to normal, resources (networks, router ports, etc) for tenants without any running VMs were manually deleted freeing up system resources on our neutron host and returning TryStack back to normal operations. Per Tenant Fix - If you have occasion to use TryStack as a sandbox environment, you may need to delete and recreate your router in your tenant if you find your launched guests are not acquiring a DHCP address correctly or able to be connected with over an associated floating IP address. Ongoing Resource Management - In order to prevent exhaustion of system resources, we have been automatically deleting VMs 24 hours after they are created. Additionally, we clear router gateways as well as floating IP allocations 12 hours after they are set (the public subnet is a /24 network and anyone with an account can use the public subnet free of charge, hence the need for aggressively culling resources) Until today we had not been purging other resources, and over the course of the last three to four months, the tenant/project count has grown to just over 1300 tenants. Many users login a few times, create their networks and routers, and launch some test VMs and may not revisit TryStack for some time. As such the qrouter and qdhcp network namespaces are created, and ports created in OVS, along with associated dnsmasq processes for each subnet the tenant creates. We are adding management and culling of these additional resource types using the ospurge utility ( see: https://github.com/openstack/ospurge ) IRC Alerting - We have also added IRC bots that can announce alerts in the #trystack channel in Freenode. Alerts are sent to the IRC bot via a nagios instance monitoring the environment. Grafana / Graphite - We are currently working on building dashboards using grafana, using a graphite backend, and collectd agents sending data to graphite. Will Foster has built an initial dashboard to see resource utilization and trending at a glance (Thanks Will!). The dashboard(s) are not yet ready for public consumption, but we plan on making a read-only grafana interface available in the near future. For a sample of what the dashboard will look like, see : http://ibin.co/2Kf8i9WxsWIl (The image is only depicting part of the dashboard as it is only a screenshot). -- Red Hat, Inc. 100 East Davie Street Raleigh, NC 27601 "All tyranny needs to gain a foothold is for people of good conscience to remain silent." 
--Thomas Jefferson From mohammed.arafa at gmail.com Thu Oct 29 02:04:49 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 28 Oct 2015 22:04:49 -0400 Subject: [Rdo-list] [rdo-manager] liberty rdo rpm In-Reply-To: References: Message-ID: i discovered this issue by copy and pasting from the documentation. i dont think that yum cannot follow links. it was telling me the rpm did not exist On Wed, Oct 28, 2015 at 9:41 PM, Marius Cornea wrote: > Hi Mohammed, > > The rdo-release-liberty.rpm URL appears to redirect > to rdo-release-liberty-2.noarch.rpm so the URL mentioned in the docs should > work fine: > > curl -I -L > http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm > HTTP/1.1 302 Found > Date: Thu, 29 Oct 2015 01:33:54 GMT > Server: Apache > Location: > https://www.rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm > Content-Type: text/html; charset=iso-8859-1 > > HTTP/1.1 301 Moved Permanently > Date: Thu, 29 Oct 2015 01:33:54 GMT > Server: Apache > Location: > https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty.rpm > Content-Type: text/html; charset=iso-8859-1 > > HTTP/1.1 307 Temporary Redirect > Date: Thu, 29 Oct 2015 01:33:56 GMT > Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips > Strict-Transport-Security: max-age=15768000; includeSubDomains; preload > Location: > http://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm > Content-Type: text/html; charset=iso-8859-1 > > HTTP/1.1 302 Found > Date: Thu, 29 Oct 2015 01:33:56 GMT > Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips > Location: > https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm > Content-Type: text/html; charset=iso-8859-1 > > HTTP/1.1 200 OK > Date: Thu, 29 Oct 2015 01:33:56 GMT > Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips > Strict-Transport-Security: max-age=15768000; includeSubDomains; preload > Last-Modified: Wed, 21 Oct 2015 17:51:35 GMT > ETag: "1494-522a1078837c0" > Accept-Ranges: bytes > Content-Length: 5268 > AppTime: D=294 > AppServer: people01.fedoraproject.org > X-GitProject: (null) > Content-Type: application/x-rpm > > > On Thu, Oct 29, 2015 at 2:21 AM, Mohammed Arafa > wrote: > >> the documentation states to use rdo-release-liberty.rpm >> thats missing from the website. so you cant even install and upgrade >> >> what course of action does this require? restore the rpm file or update >> the docs to use rdo-release-liberty-2.rpm? >> >> -- >> >> >> >> >> *805010942448935* >> >> >> *GR750055912MA* >> >> >> *Link to me on LinkedIn * >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Thu Oct 29 02:46:32 2015 From: marius at remote-lab.net (Marius Cornea) Date: Thu, 29 Oct 2015 11:46:32 +0900 Subject: [Rdo-list] [rdo-manager] liberty rdo rpm In-Reply-To: References: Message-ID: <5B8AF18B-F444-4BC9-B37C-51981AC2365C@remote-lab.net> Can you see if yum localinstall does the job? > On 29 Oct 2015, at 11:04, Mohammed Arafa wrote: > > i discovered this issue by copy and pasting from the documentation. i dont think that yum cannot follow links. 
it was telling me the rpm did not exist > >> On Wed, Oct 28, 2015 at 9:41 PM, Marius Cornea wrote: >> Hi Mohammed, >> >> The rdo-release-liberty.rpm URL appears to redirect to rdo-release-liberty-2.noarch.rpm so the URL mentioned in the docs should work fine: >> >> curl -I -L http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm >> HTTP/1.1 302 Found >> Date: Thu, 29 Oct 2015 01:33:54 GMT >> Server: Apache >> Location: https://www.rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm >> Content-Type: text/html; charset=iso-8859-1 >> >> HTTP/1.1 301 Moved Permanently >> Date: Thu, 29 Oct 2015 01:33:54 GMT >> Server: Apache >> Location: https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty.rpm >> Content-Type: text/html; charset=iso-8859-1 >> >> HTTP/1.1 307 Temporary Redirect >> Date: Thu, 29 Oct 2015 01:33:56 GMT >> Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips >> Strict-Transport-Security: max-age=15768000; includeSubDomains; preload >> Location: http://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm >> Content-Type: text/html; charset=iso-8859-1 >> >> HTTP/1.1 302 Found >> Date: Thu, 29 Oct 2015 01:33:56 GMT >> Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips >> Location: https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm >> Content-Type: text/html; charset=iso-8859-1 >> >> HTTP/1.1 200 OK >> Date: Thu, 29 Oct 2015 01:33:56 GMT >> Server: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.1e-fips >> Strict-Transport-Security: max-age=15768000; includeSubDomains; preload >> Last-Modified: Wed, 21 Oct 2015 17:51:35 GMT >> ETag: "1494-522a1078837c0" >> Accept-Ranges: bytes >> Content-Length: 5268 >> AppTime: D=294 >> AppServer: people01.fedoraproject.org >> X-GitProject: (null) >> Content-Type: application/x-rpm >> >> >>> On Thu, Oct 29, 2015 at 2:21 AM, Mohammed Arafa wrote: >>> the documentation states to use rdo-release-liberty.rpm >>> thats missing from the website. so you cant even install and upgrade >>> >>> what course of action does this require? restore the rpm file or update the docs to use rdo-release-liberty-2.rpm? >>> >>> -- >>> >>> >>> >>> >>> >>> >>> 805010942448935 >>> >>> GR750055912MA >>> >>> Link to me on LinkedIn >>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > -- > > > > > > > 805010942448935 > > GR750055912MA > > Link to me on LinkedIn -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Thu Oct 29 07:58:05 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 29 Oct 2015 08:58:05 +0100 Subject: [Rdo-list] [rdo-manager] liberty rdo rpm In-Reply-To: References: Message-ID: > yum cannot follow links. it was telling me the rpm did not exist Yum uses libcurl so it should follow redirects. Please give me yum debug log and also curl command shown by Marius. Cheers, Alan -------------- next part -------------- An HTML attachment was scrubbed... 
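(Putting the rdo-release thread together so far: the file is not gone, the URL is a redirect chain ending at rdo-release-liberty-2.noarch.rpm, and yum should normally follow it since it uses libcurl. If a proxy or mirror mangles the redirect, the manual route Marius suggests looks roughly like this — a sketch, untested:

curl -L -o rdo-release-liberty.rpm http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
sudo yum localinstall rdo-release-liberty.rpm

curl -L follows the same 302/301/307 hops shown in the trace above and saves whatever the chain ends at.)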
URL: From mohammed.arafa at gmail.com Thu Oct 29 10:13:44 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 29 Oct 2015 06:13:44 -0400 Subject: [Rdo-list] [rdo-manager] liberty rdo rpm In-Reply-To: References: Message-ID: I had worked around by finding the full path to liberty-2 in the browser and used that. I can say that rerunning the command from docs says the original rpm does not upgrade anything . I am afk On Oct 29, 2015 3:58 AM, "Alan Pevec" wrote: > > yum cannot follow links. it was telling me the rpm did not exist > > Yum uses libcurl so it should follow redirects. Please give me yum debug > log and also curl command shown by Marius. > > Cheers, > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Thu Oct 29 14:41:46 2015 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 29 Oct 2015 15:41:46 +0100 Subject: [Rdo-list] Overcloud features [ Trove Sahara ] In-Reply-To: References: Message-ID: <4346662.FXkXcpaVbU@whitebase.usersys.redhat.com> On Tuesday 27 of October 2015 14:28:09 Pedro Sousa wrote: > Hi Ali, > > I guess you need to start here: > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/ad > vanced_deployment/extra_config.html Apart from this method, there is a set of work-in-progress patches to add proper Sahara and Trove support: https://blueprints.launchpad.net/tripleo/+spec/sahara-integration In the Sahara case, it should be possible to also manually deploy the service, at least on one of the controllers; of course it disappear if you recreate your overcloud. Last time I checked, few months ago, rdo-manager did not set the firewall so the manual deployment was working. Ciao -- Luigi From ramkumar.gowrishankar at nuagenetworks.net Thu Oct 29 16:27:39 2015 From: ramkumar.gowrishankar at nuagenetworks.net (Ramkumar GOWRISHANKAR) Date: Thu, 29 Oct 2015 12:27:39 -0400 Subject: [Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs Message-ID: Hi, My virtual test bed deployment with just one controller and no computes is failing at ControllerNodesPostDeployment. The debug steps when a deployment fails tells to run the following command: "heat resource-show overcloud ControllerNodesPostDeployment". When I run the command, I see 3 URL starting with http://192.0.2.1:8004. How do I access these URLs? When I try a wget on these URLs or when I create a ssh tunnel from the base machine and try to access the URLs I get permission denied message. When I try to access just the base URL ( http://192.0.2.1:8004 mapped to http://localhost:8005) via a tunnel, I get the following message: {"versions": [{"status":"CURRENT", "id": "v1.0", "links": [{"href":" http://localhost:8005/v1/","rel":"self"}]}]} I have looked through the /var/log/heat/ folder for any error messages but I cannot find any more detailed error message other than deployment failed at step 1 LoadBalancer. Any pointers on how to debug a deployment? Thanks, Ramkumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Thu Oct 29 10:32:34 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Thu, 29 Oct 2015 16:02:34 +0530 Subject: [Rdo-list] RDO Bug Statistics [2015-10-29] Message-ID: # RDO Bugs on 2015-10-29 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . 
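(Stepping back to Ramkumar's ControllerNodesPostDeployment question a few messages up: the 192.0.2.1:8004 links are Heat API resource URLs, so they only answer authenticated API calls — a bare wget gets permission denied, and the root URL returns just the version document, which matches what he saw. The usual way in on the undercloud, sketched from memory and worth double-checking against the installed heatclient:

source ~/stackrc
heat resource-list --nested-depth 5 overcloud | grep FAILED
heat deployment-show <deployment-id-of-the-failed-resource>

deployment-show prints the deploy_stdout/deploy_stderr captured from the node, which is usually far more telling than the stack status alone.)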
To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 332 - Fixed (MODIFIED, POST, ON_QA): 190 ## Number of open bugs by component diskimage-builder [ 4] ++ distribution [ 14] ++++++++++ dnsmasq [ 1] Documentation [ 4] ++ instack [ 4] ++ instack-undercloud [ 28] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 11] ++++++++ openstack-cinder [ 14] ++++++++++ openstack-foreman-inst... [ 2] + openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 2] + openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] +++++ openstack-manila [ 8] +++++ openstack-neutron [ 10] +++++++ openstack-nova [ 18] +++++++++++++ openstack-packstack [ 55] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] ++++++++ openstack-selinux [ 13] +++++++++ openstack-swift [ 3] ++ openstack-tripleo [ 24] +++++++++++++++++ openstack-tripleo-heat... [ 5] +++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 4] ++ openvswitch [ 1] Package Review [ 3] ++ python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] +++ python-oslo-config [ 1] rdo-manager [ 48] ++++++++++++++++++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (332 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (14 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-10-07 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-10-07 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1271169 ] http://bugzilla.redhat.com/1271169 (NEW) Component: distribution Last change: 2015-10-13 Summary: [doc] virtual environment setup [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: 
distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1275608 ] http://bugzilla.redhat.com/1275608 (NEW) Component: distribution Last change: 2015-10-27 Summary: EOL'ed rpm file URL not up to date [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### Documentation (4 bugs) [1272111 ] http://bugzilla.redhat.com/1272111 (NEW) Component: Documentation Last change: 2015-10-15 Summary: RFE : document how to access horizon in RDO manager VIRT setup [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2015-10-15 Summary: [DOC] External network should be documents in RDO manager installation [1271793 ] http://bugzilla.redhat.com/1271793 (NEW) Component: Documentation Last change: 2015-10-14 Summary: rdo-manager doc has incomplete /etc/hosts configuration [1271888 ] http://bugzilla.redhat.com/1271888 (NEW) Component: Documentation Last change: 2015-10-15 Summary: step required to build images for overcloud ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (28 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-30 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2015-10-20 Summary: Overcloud images contain Kilo repos [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not 
started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1270585 ] http://bugzilla.redhat.com/1270585 (NEW) Component: instack-undercloud Last change: 2015-10-19 Summary: instack isntallation fails with parse error: Invalid string liberty on CentOS [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. 
[1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (11 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-10-26 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1265721 ] http://bugzilla.redhat.com/1265721 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (NEW) Component: openstack-ceilometer Last change: 2015-09-23 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 
2015-05-07 Summary: Wrong alarms order on 'severity' field [1265818 ] http://bugzilla.redhat.com/1265818 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-28 Summary: ceilometer polling agent does not start ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1268182 ] http://bugzilla.redhat.com/1268182 (NEW) Component: openstack-cinder Last change: 2015-10-02 Summary: cinder spontaneously sets instance root device to 'available' [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-foreman-installer (2 bugs) [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing 
dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (2 bugs) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable [1275656 ] http://bugzilla.redhat.com/1275656 (NEW) Component: openstack-horizon Last change: 2015-10-28 Summary: FontAwesome lib bad path ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-10-26 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-manila (8 bugs) [1272957 ] http://bugzilla.redhat.com/1272957 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver: same volumes are re-used with vol mapped layout after restarting manila services [1271138 ] http://bugzilla.redhat.com/1271138 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: puppet module for manila should include service type - shareV2 [1272960 ] http://bugzilla.redhat.com/1272960 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Glusterfs NFS-Ganesha share's export location should be uniform for both nfsv3 & 
nfsv4 protocols [1272962 ] http://bugzilla.redhat.com/1272962 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_driver: Attempt to create share fails ungracefully when backend gluster volumes aren't exported [1272970 ] http://bugzilla.redhat.com/1272970 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs_native: cannot connect via SSH using password authentication to multiple gluster clusters with different passwords [1272968 ] http://bugzilla.redhat.com/1272968 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterfs vol based layout: Deleting a share created from snapshot should also delete its backend gluster volume [1272954 ] http://bugzilla.redhat.com/1272954 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: glusterFS_native_driver: snapshot delete doesn't delete snapshot entries that are in error state [1272958 ] http://bugzilla.redhat.com/1272958 (NEW) Component: openstack-manila Last change: 2015-10-19 Summary: gluster driver - vol based layout: share size may be misleading ### openstack-neutron (10 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1269610 ] http://bugzilla.redhat.com/1269610 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-20 Summary: Overcloud deployment fails - openvswitch agent is not running and nova instances end up in error state [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1272289 ] http://bugzilla.redhat.com/1272289 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-19 Summary: rdo-manager tempest smoke test failing on "floating ip pool not found' [1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2015-10-23 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1271838 ] http://bugzilla.redhat.com/1271838 (ASSIGNED) Component: openstack-neutron Last change: 2015-10-20 Summary: Baremetal basic non-HA deployment fails due to failing module import by neutron [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (18 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: horizon console uses http when horizon is set to use ssl
[1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly
[1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova] hw:numa_nodes=0 causes divide by zero
[1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: novnc init script doesn't write to log
[1271033 ] http://bugzilla.redhat.com/1271033 (NEW) Component: openstack-nova Last change: 2015-10-19 Summary: nova.conf.sample is out of date
[1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given)
[1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova - db connection string present on compute nodes
[1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova object store allow get object after date expires
[1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: nova: fail to edit project quota with DataError from nova
[1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-10-17 Summary: Ensure translations are installed correctly and picked up at runtime
[1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-10-17 Summary: Nova AVC messages
[1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid openstack logging to deleted files

### openstack-packstack (55 bugs)

[1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server]
[1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant
[1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail
[1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel
[1271246 ] http://bugzilla.redhat.com/1271246 (NEW) Component: openstack-packstack Last change: 2015-10-13 Summary: packstack failed to start nova.api
[1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest
[1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server
[1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver
[1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm)
[982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way
[1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt
[1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq won't start if ssl is required
[1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network-scripts/ifcfg-br-ex
[1275803 ] http://bugzilla.redhat.com/1275803 (NEW) Component: openstack-packstack Last change: 2015-10-27 Summary: packstack --allinone fails on Fedora 22-3 during _keystone.pp
[1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack
[1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails
[1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service
[1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver
[953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon
[1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation
[1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start.
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors installing kilo on fedora21
[1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config
[1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands'
[1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error
[1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail
[1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can
[1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start
[1269535 ] http://bugzilla.redhat.com/1269535 (NEW) Component: openstack-packstack Last change: 2015-10-07 Summary: packstack script does not test to see if the rc files *were* created.
[1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-10-08 Summary: Packstack should use pymysql database driver since Liberty
[1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance
[1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module"
[1270770 ] http://bugzilla.redhat.com/1270770 (NEW) Component: openstack-packstack Last change: 2015-10-12 Summary: Packstack generated CONFIG_MANILA_SERVICE_IMAGE_LOCATION points to a dropbox link
[1269255 ] http://bugzilla.redhat.com/1269255 (NEW) Component: openstack-packstack Last change: 2015-10-06 Summary: Failed to start RabbitMQ broker.
[1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest
[1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred during setup of Ironic via packstack. Invalid parameter rabbit_user
[1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd]
[1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service
[1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp
[1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tmp/packstack/b77f37620d9f4794b6f38730442962b6/manifests/xxx.xxx.xxx.xxx_swift.pp:90
[1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found
[1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image
[1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting
[1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend.
[1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced
[1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image
[1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services have all admin permission instead of service
[1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service"
[1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list
[1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack to deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50, an error is encountered: ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp
[1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on
[1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters
[1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node
[1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd

### openstack-puppet-modules (11 bugs)

[1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start
[1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering
[1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm
[1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API.
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled
[1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication
[1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size
[1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm
[1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd from writing to /var/log/horizon/horizon.log
[1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing

### openstack-selinux (13 bugs)

[1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional
[1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all process raised avc denied
[1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail
[1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-10-20 Summary: Glance over nfs fails due to selinux
[1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value)
[1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux
[1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages
[1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000
[1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal;
[1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC
[1268124 ] http://bugzilla.redhat.com/1268124 (NEW) Component: openstack-selinux Last change: 2015-10-02 Summary: Nova rootwrap-daemon requires a selinux exception
[1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fails to start for HA router because of SELinux issues
[1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux

### openstack-swift (3 bugs)

[1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files
[1274308 ] http://bugzilla.redhat.com/1274308 (NEW) Component: openstack-swift Last change: 2015-10-22 Summary: Consistently occurring swift related failures in RDO with a HA deployment
[1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations

### openstack-tripleo (24 bugs)

[1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA
[1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf
[1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos
[1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever
[1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic into TripleO
[1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever
[1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix
[1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted
[1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-10-09 Summary: Node registration fails silently if instackenv.json is badly formatted
[1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful
[1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect
[1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes
[1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created
[1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User cannot log in to the overcloud horizon using the proper credentials
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack-build-images
[1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter
[1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6
[1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints
[1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack"

### openstack-tripleo-heat-templates (5 bugs)

[1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-08 Summary: TripleO should use pymysql database driver since Liberty
[1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1
[1271411 ] http://bugzilla.redhat.com/1271411 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-10-13 Summary: Unable to deploy internal api endpoint for keystone on a different network to admin api
[1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template

### openstack-tripleo-image-elements (2 bugs)

[1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux
[1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist

### openstack-trove (1 bug)

[1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info

### openstack-tuskar (3 bugs)

[1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails
[1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates
[1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state

### openstack-utils (4 bugs)

[1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service
[1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable
[1270615 ] http://bugzilla.redhat.com/1270615 (NEW) Component: openstack-utils Last change: 2015-10-11 Summary: openstack status still checking mysql not mariadb
[1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs-cleanup.service

### openvswitch (1 bug)

[1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity

### Package Review (3 bugs)

[1272524 ] http://bugzilla.redhat.com/1272524 (NEW) Component: Package Review Last change: 2015-10-16 Summary: Review Request: Mistral - workflow Service for OpenStack cloud
[1268372 ] http://bugzilla.redhat.com/1268372 (ASSIGNED) Component: Package Review Last change: 2015-10-29 Summary: Review Request: openstack-app-catalog-ui - openstack horizon plugin for the openstack app-catalog
[1272513 ] http://bugzilla.redhat.com/1272513 (NEW) Component: Package Review Last change: 2015-10-16 Summary: Review Request: Murano - is an application catalog for OpenStack

### python-glanceclient (2 bugs)

[1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-10-21 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py
[1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0

### python-keystonemiddleware (1 bug)

[1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-10-26 Summary: Rebase python-keystonemiddleware to version 1.3

### python-neutronclient (2 bugs)

[1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatible
[1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long

### python-novaclient (1 bug)

[1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six

### python-openstackclient (5 bugs)

[1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal
[1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input
[1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp
[1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user
[1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement

### python-oslo-config (1 bug)

[1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config

### rdo-manager (48 bugs)

[1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment
[1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable
[1269657 ] http://bugzilla.redhat.com/1269657 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support configuration of default subnet pools
[1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud
[1273574 ] http://bugzilla.redhat.com/1273574 (ASSIGNED) Component: rdo-manager Last change: 2015-10-22 Summary: rdo-manager liberty, delete node is failing
[1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built
[1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS.
[1274060 ] http://bugzilla.redhat.com/1274060 (NEW) Component: rdo-manager Last change: 2015-10-23 Summary: [SELinux][RHEL7] openstack-ironic-inspector-dnsmasq.service fails to start with SELinux enabled
[1269655 ] http://bugzilla.redhat.com/1269655 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Support deploying VPNaaS
[1271336 ] http://bugzilla.redhat.com/1271336 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Enable configuration of OVS ARP Responder
[1269890 ] http://bugzilla.redhat.com/1269890 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Support IPv6
[1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles
[1270818 ] http://bugzilla.redhat.com/1270818 (NEW) Component: rdo-manager Last change: 2015-10-20 Summary: Two ironic-inspector processes are running on the undercloud, breaking the introspection
[1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot log in to Overcloud Horizon through Virtual IP (VIP)
[1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment
[1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:]
[1269002 ] http://bugzilla.redhat.com/1269002 (ASSIGNED) Component: rdo-manager Last change: 2015-10-14 Summary: instack-undercloud: overcloud HA deployment fails - the rabbitmq doesn't run on the controllers.
[1271232 ] http://bugzilla.redhat.com/1271232 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: tempest_lib.exceptions.Conflict: An object with that identifier already exists
[1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: Glance client returning 'Expected endpoint'
[1271335 ] http://bugzilla.redhat.com/1271335 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] Support explicit configuration of L2 population
[1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start
[1271317 ] http://bugzilla.redhat.com/1271317 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: instack-virt-setup fails: error Running install-packages install
[1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone
[1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: Duplicate nova hypervisors after rebooting compute nodes
[1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection
[1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. There are not enough hosts available., Code: 500"
[1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD
[1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc
[1273121 ] http://bugzilla.redhat.com/1273121 (NEW) Component: rdo-manager Last change: 2015-10-19 Summary: openstack help returns errors
[1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2015-10-15 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates
[1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree"
[1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon
[1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack
[1272167 ] http://bugzilla.redhat.com/1272167 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: [RFE] Support enabling the port security extension
[1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment.
[1269622 ] http://bugzilla.redhat.com/1269622 (NEW) Component: rdo-manager Last change: 2015-10-13 Summary: [RFE] support override of API and RPC worker counts
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: overcloud-novacompute stuck in spawning state
[1269894 ] http://bugzilla.redhat.com/1269894 (NEW) Component: rdo-manager Last change: 2015-10-08 Summary: [RFE] Add creation of demo tenant, network and installation of demo images
[1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure
[1269661 ] http://bugzilla.redhat.com/1269661 (NEW) Component: rdo-manager Last change: 2015-10-07 Summary: [RFE] Supporting SR-IOV enabled deployments
[1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url"
[1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images
[1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2015-10-21 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.)
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2015-10-15 Summary: 1 of the overcloud VMs (nova) is stuck in spawning state
[1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device-mapper*
[1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci
[1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2015-10-21 Summary: HA overcloud with network isolation deployment fails
[1276097 ] http://bugzilla.redhat.com/1276097 (NEW) Component: rdo-manager Last change: 2015-10-28 Summary: dnsmasq-dhcp: DHCPDISCOVER no address available

### rdo-manager-cli (6 bugs)

[1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step
[1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the output of openstack management plan show --long command is not readable
[1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value
[1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL)
[1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command
[1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling

### rdopkg (1 bug)

[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (3 bugs)

[1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot
[1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool
[1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root partition

### tempest (1 bug)

[1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is one in state MODIFIED, POST, or ON_QA: a fix exists but has not yet been verified. You can help out by testing the fix to make sure it works as intended.

(190 bugs)
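If you want to pick one of these to verify, the same set of bugs can be pulled straight from Bugzilla rather than waiting for the next digest. Below is a minimal sketch using the python-bugzilla bindings; the product name "RDO" and the exact field selection are assumptions to adapt, not the exact query this report is generated from:

    #!/usr/bin/env python
    # Hedged sketch: list RDO bugs whose fix is awaiting testing
    # (states MODIFIED, POST, ON_QA). Requires the python-bugzilla
    # package; anonymous read-only queries work against
    # bugzilla.redhat.com.
    import bugzilla

    bz = bugzilla.Bugzilla("bugzilla.redhat.com")
    # build_query() turns keyword arguments into a Bugzilla search;
    # "RDO" as the product name is an assumption here.
    query = bz.build_query(product="RDO",
                           status=["MODIFIED", "POST", "ON_QA"])
    for bug in sorted(bz.query(query), key=lambda b: b.component):
        print("[%s ] http://bugzilla.redhat.com/%s (%s)"
              % (bug.id, bug.id, bug.status))
        print("Component: %s Last change: %s Summary: %s"
              % (bug.component, bug.last_change_time, bug.summary))

The same query works per component (add component="openstack-nova" or similar) if you only care about one package.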
### diskimage-builder (1 bug)

[1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks image building

### distribution (6 bugs)

[1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack-neutron-*aas
[1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10
[1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance
[1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api
[1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO

### instack-undercloud (2 bugs)

[1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six"
[1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined

### openstack-ceilometer (2 bugs)

[1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency
[1271002 ] http://bugzilla.redhat.com/1271002 (MODIFIED) Component: openstack-ceilometer Last change: 2015-10-23 Summary: Ceilometer dbsync failing during HA deployment

### openstack-cinder (5 bugs)

[1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0]
[1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
[1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO)
[994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo]
[1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume)

### openstack-glance (4 bugs)

[1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests
[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue
[1268146 ] http://bugzilla.redhat.com/1268146 (ON_QA) Component: openstack-glance Last change: 2015-10-02 Summary: openstack-glance-registry will not start: missing systemd dependency
[1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files

### openstack-heat (3 bugs)

[1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build
[1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack-heat
[1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listing stacks with status DELETE_COMPLETE

### openstack-horizon (1 bug)

[1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing

### openstack-ironic-discoverd (1 bug)

[1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery

### openstack-keystone (1 bug)

[1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3

### openstack-neutron (14 bugs)

[1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network
[1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron-plugin-vmware
[1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2015-10-26 Summary: neutron should not specify signing_dir in neutron-dist.conf
[1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron
[1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials
[1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore
[1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini
[1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd
[1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev
[1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack-neutron-openvswitch installed
[1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service
[1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due to unknown database column 'id'
[1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini
[1270325 ] http://bugzilla.redhat.com/1270325 (MODIFIED) Component: openstack-neutron Last change: 2015-10-19 Summary: neutron-ovs-cleanup fails to start with bad path to ovs plugin configuration

### openstack-nova (5 bugs)

[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail
[1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all
[1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options
[1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python-psutil is missing after installing with packstack
[958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence.

### openstack-packstack (60 bugs)

[1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db.
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error
[1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/manifests/192.168.122.82_api_nova.pp:41:3
[976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-10-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local)
[1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed
[1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled
[964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user
[1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules
[1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7
[1269158 ] http://bugzilla.redhat.com/1269158 (POST) Component: openstack-packstack Last change: 2015-10-19 Summary: Sahara configuration should be affected by heat availability (broken by default right now)
[1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher
[1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email
[1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils
[1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes
[1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error
[958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails
[1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails
[1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway.
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack
[1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allinone answerfile will fail with qpidd user logged in
[1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning
[979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules
[1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo
[1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non-localhost connections to 'guest' user
[1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: packstack: mysql fails to restart on CentOS6.5
[957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios
[995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests
[1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1
[990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts
[1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required
[1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"?
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
[1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed
[1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5
[1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl
[1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp
[1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value
[1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer
[1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available
[1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive
[1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip
[1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/Nova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required
[1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError: CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
[1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time
[956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond
[1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance
[1265661 ] http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty)
[1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-10-23 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7
[974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL
[1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default
[1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent
[1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface
[1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL
[1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32
[991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted
[1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7)
[1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels
[1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution

### openstack-puppet-modules (19 bugs)

[1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed
[1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables-services
[1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure"
[1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support
[1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect
[1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation
[1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI
[1270957 ] http://bugzilla.redhat.com/1270957 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-10-13 Summary: Undercloud install fails on Error: Could not find class ::ironic::inspector for instack on node instack
[1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external
[1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon'
[1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator
[1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno
[1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6
[1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error
[1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance
[1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance
[1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig/network-scripts/ifcfg-br-{int,tun}
[1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation

### openstack-sahara (1 bug)

[1268235 ] http://bugzilla.redhat.com/1268235 (MODIFIED) Component: openstack-sahara Last change: 2015-10-02 Summary: rootwrap filter not included in Sahara RPM

### openstack-selinux (12 bugs)

[1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7)
[1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicOps fails to launch instance w/ selinux enforcing
[1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications
[1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors
[1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp
[1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances
[1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service
[1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?"
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent
[1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts with openstack-selinux installed later
[1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access
[1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors

### openstack-swift (1 bug)

[997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages

### openstack-tripleo-heat-templates (2 bugs)

[1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account
[1272572 ] http://bugzilla.redhat.com/1272572 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-10-28 Summary: Error: Unable to retrieve volume limit information when accessing System Defaults in Horizon

### openstack-trove (1 bug)

[1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies

### openstack-tuskar (1 bug)

[1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template

### openstack-tuskar-ui (3 bugs)

[1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails
[1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py
[1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path

### openstack-utils (2 bugs)

[1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager
[1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances

### Package Review (1 bug)

[1243550 ] http://bugzilla.redhat.com/1243550 (ON_QA) Component: Package Review Last change: 2015-10-09 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming

### python-cinderclient (1 bug)

[1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run

### python-django-horizon (3 bugs)

[1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack-dashboard/static/dashboard/
[1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content
[1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collapse one

### python-django-openstack-auth (1 bug)

[1218894 ] http://bugzilla.redhat.com/1218894 (ON_QA) Component: python-django-openstack-auth Last change: 2015-10-28 Summary: Horizon: Re-login failed after timeout

### python-glanceclient (2 bugs)

[1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock
[1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch

### python-heatclient (3 bugs)

[1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr
[1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO
[1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed

### python-keystoneclient (3 bugs)

[973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs
[1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion
[971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO]

### python-neutronclient (3 bugs)

[1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request
[1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info()
[1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient

### python-novaclient (1 bug)

[947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError

### python-openstackclient (1 bug)

[1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0

### python-oslo-config (1 bug)

[1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage

### python-pecan (1 bug)

[1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-10-05 Summary: Neutron missing pecan dependency

### python-swiftclient (1 bug)

[1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation

### python-tuskarclient (2 bugs)

[1209395 ]
http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (9 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: rdo-manager Last change: 2015-10-14 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1271433 ] http://bugzilla.redhat.com/1271433 (MODIFIED) Component: rdo-manager Last change: 2015-10-20 Summary: Horizon fails to load [1272180 ] http://bugzilla.redhat.com/1272180 (MODIFIED) Component: rdo-manager Last change: 2015-10-19 Summary: Horizon doesn't load when deploying without pacemaker [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2015-10-07 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (9 bugs) [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2015-10-20 Summary: VXLAN should be default neutron network type [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. 
[1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values
[1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment

### rdopkg (1 bug)

[1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository

Thanks,
Chandan Kumar

From mohammed.arafa at gmail.com  Thu Oct 29 18:05:14 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Thu, 29 Oct 2015 14:05:14 -0400
Subject: [Rdo-list] [rdo-manager] liberty undercloud fails
Message-ID:

I did a (second or third!) fresh install, and I have this error now when I do an undercloud install:

+ '[' -n '' ']'
+ echo 'No metadata IP found. Skipping.'
No metadata IP found. Skipping.
dib-run-parts Thu Oct 22 16:20:39 CAT 2015 20-os-net-config completed
dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles
[2015/10/22 04:20:39 PM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json
dib-run-parts Thu Oct 22 16:20:39 CAT 2015 40-hiera-datafiles completed
dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config
+ set -o pipefail
+ set +e
+ puppet apply --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp
Error: (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36 at /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms
Wrapped exception:
(): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36
Error: (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36 at /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms
+ rc=1
+ set -e
+ echo 'puppet apply exited with exit code 1'
puppet apply exited with exit code 1
+ '[' 1 '!=' 2 -a 1 '!=' 0 ']'
+ exit 1
[2015-10-22 16:21:08,216] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]

[2015-10-22 16:21:08,217] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install
    _run_orc(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc
    _run_live_command(args, instack_env, 'os-refresh-config')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1

My hosts file:
127.0.0.1   rdo.egit.vms rdo localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

My undercloud.conf file: http://paste.fedoraproject.org/282489/24678144

I am using the liberty repo and am not using the delorean repo.

--
*805010942448935*
*GR750055912MA*
*Link to me on LinkedIn *

From ibravo at ltgfederal.com  Thu Oct 29 20:28:29 2015
From: ibravo at ltgfederal.com (Ignacio Bravo)
Date: Thu, 29 Oct 2015 16:28:29 -0400
Subject: [Rdo-list] [rdo-manager] liberty undercloud fails
In-Reply-To:
References:
Message-ID: <5632816D.1090201@ltgfederal.com>

Mohammed,

Tuskar is currently not working, so you need to disable it in undercloud.conf.

Also, I believe there were some issues when you didn't select the default IPs in the file. If this is your first install, I would do it with the default 192.0.2.0 range and then move to a custom IP configuration.

Ignacio Bravo
LTG Federal Inc

On 10/29/2015 02:05 PM, Mohammed Arafa wrote:
> I did a (second or third!) fresh install, and I have this error now
> when I do an undercloud install:
>
> + '[' -n '' ']'
> + echo 'No metadata IP found. Skipping.'
> No metadata IP found. Skipping.
> dib-run-parts Thu Oct 22 16:20:39 CAT 2015 20-os-net-config completed
> dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles
> [2015/10/22 04:20:39 PM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json
> dib-run-parts Thu Oct 22 16:20:39 CAT 2015 40-hiera-datafiles completed
> dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config
> + set -o pipefail
> + set +e
> + puppet apply --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp
> Error: (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36 at /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms
> Wrapped exception:
> (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36
> Error: (): did not find expected alphabetic or numeric character while scanning an anchor at line 81 column 36 at /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms
> + rc=1
> + set -e
> + echo 'puppet apply exited with exit code 1'
> puppet apply exited with exit code 1
> + '[' 1 '!=' 2 -a 1 '!=' 0 ']'
> + exit 1
> [2015-10-22 16:21:08,216] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
>
> [2015-10-22 16:21:08,217] (os-refresh-config) [ERROR] Aborting...
> Command 'instack-install-undercloud' returned non-zero exit status 1 > > > my hosts file: > 127.0.0.1 rdo.egit.vms rdo localhost localhost.localdomain > localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > > my undercloud.conf file > http://paste.fedoraproject.org/282489/24678144 > > i am using the liberty repo and am not using the delorean repo > > -- > > > > > > > > *805010942448935* > ** > > > > > *GR750055912MA* > > > > > *Link to me on LinkedIn * > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Thu Oct 29 18:54:41 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 29 Oct 2015 14:54:41 -0400 Subject: [Rdo-list] [rdo-manager] liberty undercloud fails In-Reply-To: References: Message-ID: i just did another run but this time instead of the liberty repo i used delorean and i got the same error output where is this line 81 column 36? its not my undercloud.conf as that is a blank line On Thu, Oct 29, 2015 at 2:05 PM, Mohammed Arafa wrote: > i did a (second or third!) fresh install. and i have this error now when i > do an undercloud install > > + '[' -n '' ']' > + echo 'No metadata IP found. Skipping.' > No metadata IP found. Skipping. > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 20-os-net-config completed > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running > /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles > [2015/10/22 04:20:39 PM] [WARNING] DEPRECATED: falling back to > /var/run/os-collect-config/os_config_files.json > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 40-hiera-datafiles completed > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running > /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config > + set -o pipefail > + set +e > + puppet apply --detailed-exitcodes > /etc/puppet/manifests/puppet-stack-config.pp > Error: (): did not find expected alphabetic or numeric > character while scanning an anchor at line 81 column 36 at > /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms > Wrapped exception: > (): did not find expected alphabetic or numeric character > while scanning an anchor at line 81 column 36 > Error: (): did not find expected alphabetic or numeric > character while scanning an anchor at line 81 column 36 at > /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms > + rc=1 > + set -e > + echo 'puppet apply exited with exit code 1' > puppet apply exited with exit code 1 > + '[' 1 '!=' 2 -a 1 '!=' 0 ']' > + exit 1 > [2015-10-22 16:21:08,216] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > > [2015-10-22 16:21:08,217] (os-refresh-config) [ERROR] Aborting... > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 562, in install > _run_orc(instack_env) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 494, in _run_orc > _run_live_command(args, instack_env, 'os-refresh-config') > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 325, in _run_live_command > raise RuntimeError('%s failed. See log for details.' 
% name) > RuntimeError: os-refresh-config failed. See log for details. > Command 'instack-install-undercloud' returned non-zero exit status 1 > > > my hosts file: > 127.0.0.1 rdo.egit.vms rdo localhost localhost.localdomain > localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > > my undercloud.conf file > http://paste.fedoraproject.org/282489/24678144 > > i am using the liberty repo and am not using the delorean repo > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Thu Oct 29 23:33:41 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 29 Oct 2015 19:33:41 -0400 Subject: [Rdo-list] [rdo-manager] liberty undercloud fails In-Reply-To: <5632816D.1090201@ltgfederal.com> References: <5632816D.1090201@ltgfederal.com> Message-ID: Ignacio I put enable_tuskar = false and it still gave me the same error do i have to totally comment it out? i can try that tomorrow On Thu, Oct 29, 2015 at 4:28 PM, Ignacio Bravo wrote: > Mohammed, > > Tuskar is currently not working, so you need to disable it in > undercloud.conf > > Also, I believe there were some issues when you didn't select the default > IPs in the file. If this is your first install, I would do it with the > default 192.02.0 range and then move to a custom IP configuration. > > Ignacio Bravo > LTG Federal Inc > > > On 10/29/2015 02:05 PM, Mohammed Arafa wrote: > > i did a (second or third!) fresh install. and i have this error now when i > do an undercloud install > > + '[' -n '' ']' > + echo 'No metadata IP found. Skipping.' > No metadata IP found. Skipping. > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 20-os-net-config completed > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running > /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles > [2015/10/22 04:20:39 PM] [WARNING] DEPRECATED: falling back to > /var/run/os-collect-config/os_config_files.json > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 40-hiera-datafiles completed > dib-run-parts Thu Oct 22 16:20:39 CAT 2015 Running > /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config > + set -o pipefail > + set +e > + puppet apply --detailed-exitcodes > /etc/puppet/manifests/puppet-stack-config.pp > Error: (): did not find expected alphabetic or numeric > character while scanning an anchor at line 81 column 36 at > /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms > Wrapped exception: > (): did not find expected alphabetic or numeric character > while scanning an anchor at line 81 column 36 > Error: (): did not find expected alphabetic or numeric > character while scanning an anchor at line 81 column 36 at > /etc/puppet/manifests/puppet-stack-config.pp:16 on node rdo.egit.vms > + rc=1 > + set -e > + echo 'puppet apply exited with exit code 1' > puppet apply exited with exit code 1 > + '[' 1 '!=' 2 -a 1 '!=' 0 ']' > + exit 1 > [2015-10-22 16:21:08,216] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > > [2015-10-22 16:21:08,217] (os-refresh-config) [ERROR] Aborting... 
> Traceback (most recent call last):
>   File "<string>", line 1, in <module>
>   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install
>     _run_orc(instack_env)
>   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc
>     _run_live_command(args, instack_env, 'os-refresh-config')
>   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command
>     raise RuntimeError('%s failed. See log for details.' % name)
> RuntimeError: os-refresh-config failed. See log for details.
> Command 'instack-install-undercloud' returned non-zero exit status 1
>
> My hosts file:
> 127.0.0.1   rdo.egit.vms rdo localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
>
> My undercloud.conf file: http://paste.fedoraproject.org/282489/24678144
>
> I am using the liberty repo and am not using the delorean repo.
>
> --
> *805010942448935*
> *GR750055912MA*
> *Link to me on LinkedIn *
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
*805010942448935*
*GR750055912MA*
*Link to me on LinkedIn *

From alessandro at namecheap.com  Fri Oct 30 06:39:46 2015
From: alessandro at namecheap.com (Alessandro Vozza)
Date: Fri, 30 Oct 2015 07:39:46 +0100
Subject: [Rdo-list] Discovery password
Message-ID: <579A84E6-DE97-43EA-8F75-54F7AF0A80FD@namecheap.com>

Hi,

I'm discovering my bare metals, but I run into network problems: some of them (Dell blades) are discovered correctly, while others can't send their results back to the undercloud (but they do boot the discovery image correctly). Is there a way to drop into a shell at the console of the nodes that are failing and check the network configuration there? In the good ol' days of staypuft we could pass the rootpw= parameter to pxe; is that an option now as well?

thanks
alessandro

From marius at remote-lab.net  Fri Oct 30 08:12:18 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Fri, 30 Oct 2015 09:12:18 +0100
Subject: [Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs
In-Reply-To:
References:
Message-ID:

Hi,

You should try at least 1 controller + 1 compute; I don't think a deployment with just one controller is supposed to work.

Thanks,
Marius

On Thu, Oct 29, 2015 at 5:27 PM, Ramkumar GOWRISHANKAR wrote:
> Hi,
>
> My virtual test bed deployment with just one controller and no computes is
> failing at ControllerNodesPostDeployment. The debug steps for when a
> deployment fails tell me to run the following command: "heat resource-show
> overcloud ControllerNodesPostDeployment". When I run the command, I see 3
> URLs starting with http://192.0.2.1:8004.
> How do I access these URLs? When I try a wget on these URLs, or when I
> create an ssh tunnel from the base machine and try to access the URLs, I
> get a permission denied message.
> When I try to access just the base URL
> (http://192.0.2.1:8004 mapped to http://localhost:8005) via a tunnel, I get
> the following message:
> {"versions": [{"status":"CURRENT", "id": "v1.0", "links": [{"href":"http://localhost:8005/v1/","rel":"self"}]}]}
>
> I have looked through the /var/log/heat/ folder for any error messages, but
> I cannot find any more detailed error message other than deployment failed
> at step 1 LoadBalancer.
>
> Any pointers on how to debug a deployment?
>
> Thanks,
>
> Ramkumar
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mangelajo at redhat.com  Fri Oct 30 09:23:12 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Fri, 30 Oct 2015 10:23:12 +0100
Subject: [Rdo-list] TryStack Outage Report 2015-10-28
In-Reply-To: <20151029015651.GF30187@redhat.com>
References: <20151029015651.GF30187@redhat.com>
Message-ID: <56333700.6030806@redhat.com>

Interesting report. I was discussing this with Joe Talerico a few days ago: starting with Liberty, you could also leverage the neutron QoS service to cap tenant traffic.

https://www.openstack.org/summit/tokyo-2015/videos/presentation/qos-a-neutron-n00bie
http://www.ajo.es/post/126667247769/neutron-qos-service-plugin

It's still in its early stages, and will only let you limit egress traffic from ports, via network attachment to a policy or port attachment to a policy.

You can't set up a default policy yet, but for example you could periodically list newly created networks and attach those networks to a policy limiting egress to 500kbps or less.

Another strategy could be watching for router creation and creating limits on the internal ports of the routers (effectively limiting the tenant ingress from the public network).

That would prevent the "chatty neighbor" cases, or the abuse of trystack, but of course that would have to be noted somewhere, otherwise people could think openstack performs horribly on the network ;)

Cheers,

Kambiz Aghaiepour wrote:
> TryStack Outage
> Wednesday Oct 28, 2015
>
> Impact -
>
> Earlier Wednesday morning, TryStack ( http://x86.trystack.org/ )
> experienced an outage for several hours beginning in the early hours of
> the day. The outage impacted all tenants and appears to have been
> caused by exhaustion of services related to tenant networks building
> up over the course of several months. In order to return services to
> normal, resources (networks, router ports, etc) for tenants without any
> running VMs were manually deleted, freeing up system resources on our
> neutron host and returning TryStack back to normal operations.
>
> Per Tenant Fix -
>
> If you have occasion to use TryStack as a sandbox environment, you may
> need to delete and recreate your router in your tenant if you find your
> launched guests are not acquiring a DHCP address correctly or able to be
> connected with over an associated floating IP address.
>
> Ongoing Resource Management -
>
> In order to prevent exhaustion of system resources, we have been
> automatically deleting VMs 24 hours after they are created.
> Additionally, we clear router gateways as well as floating IP
> allocations 12 hours after they are set (the public subnet is a /24
> network and anyone with an account can use the public subnet free of
> charge, hence the need for aggressively culling resources).
>
> Until today we had not been purging other resources, and over the course
> of the last three to four months, the tenant/project count has grown to
> just over 1300 tenants. Many users login a few times, create their
> networks and routers, and launch some test VMs, and may not revisit
> TryStack for some time. As such, the qrouter and qdhcp network
> namespaces are created, and ports created in OVS, along with associated
> dnsmasq processes for each subnet the tenant creates. We are adding
> management and culling of these additional resource types using the
> ospurge utility ( see: https://github.com/openstack/ospurge )
>
> IRC Alerting -
>
> We have also added IRC bots that can announce alerts in the #trystack
> channel in Freenode. Alerts are sent to the IRC bot via a nagios
> instance monitoring the environment.
>
> Grafana / Graphite -
>
> We are currently working on building dashboards using grafana, using a
> graphite backend, and collectd agents sending data to graphite. Will
> Foster has built an initial dashboard to see resource utilization and
> trending at a glance (Thanks Will!). The dashboard(s) are not yet ready
> for public consumption, but we plan on making a read-only grafana
> interface available in the near future. For a sample of what the
> dashboard will look like, see:
>
> http://ibin.co/2Kf8i9WxsWIl
>
> (The image is only depicting part of the dashboard as it is only a
> screenshot).

From rasca at redhat.com  Fri Oct 30 09:44:47 2015
From: rasca at redhat.com (Raoul Scarazzini)
Date: Fri, 30 Oct 2015 10:44:47 +0100
Subject: [Rdo-list] Network Isolation setup check
Message-ID: <56333C0F.5000504@redhat.com>

Hi everybody,
I'm trying to deploy a tripleo environment with network isolation, using a pool of 8 machines: 3 controller, 2 compute and 3 storage. Each of those machines has two network interfaces, the first one (em1) connected to the LAN, the second one (em2) used for the undercloud provisioning.

The ultimate goal of the setup is to have the ExternalNet on em1 (so as to be able to put instances with floating IPs in the LAN) and all the other networks (InternalApi, Storage and StorageMgmt) on em2.
To produce what is described above, I created this network-environment.yaml configuration:

resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/nic-configs/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/nic-configs/ceph-storage.yaml

parameter_defaults:
  # Customize the IP subnets to match the local environment
  InternalApiNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 10.1.240.0/24
  ControlPlaneSubnetCidr: '24'
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  ExternalAllocationPools: [{'start': '10.1.240.10', 'end': '10.1.240.200'}]
  # Specify the gateway on the external network.
  ExternalInterfaceDefaultRoute: 10.1.240.254
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.2.1
  # Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  DnsServers: ["10.1.241.2"]
  InternalApiNetworkVlanID: 2201
  StorageNetworkVlanID: 2203
  StorageMgmtNetworkVlanID: 2204
  TenantNetworkVlanID: 2202
  # This won't actually be used since external is on native VLAN, just here for reference
  #ExternalNetworkVlanID: 38
  # Floating IP networks do not have to use br-ex, they can use any bridge as long as the NeutronExternalNetworkBridge is set to "''".
  NeutronExternalNetworkBridge: "''"

And I modified the controller.yaml file in this way (default parts are omitted; nic1 == em1 and nic2 == em2):

...
...
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: ovs_bridge
              name: {get_input: bridge_name}
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
              members:
                -
                  type: interface
                  name: nic1
                  addresses:
                    -
                      ip_netmask: {get_param: ExternalIpSubnet}
                  routes:
                    -
                      ip_netmask: 0.0.0.0/0
                      next_hop: {get_param: ExternalInterfaceDefaultRoute}
                -
                  type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
                -
                  type: vlan
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageIpSubnet}
                -
                  type: vlan
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: StorageMgmtIpSubnet}
                -
                  type: vlan
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: TenantIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

The deploy of the overcloud was invoked with this command:

openstack overcloud deploy --templates --libvirt-type=kvm --ntp-server 10.5.26.10 \
  --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
  --block-storage-scale 0 --swift-storage-scale 0 \
  --control-flavor baremetal --compute-flavor baremetal \
  --ceph-storage-flavor baremetal --block-storage-flavor baremetal \
  --swift-storage-flavor baremetal --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml

Now the point is that I need to know if my configurations are formally correct, since I got some network problems once the post-deployment steps were done. I still don't know (we're investigating) if those problems are related to the switch configuration (so, hardware side), but for some reason everything exploded. What I saw while the machines were reachable was what I was expecting: the external address assigned to em1 and the VLANs correctly assigned to em2, with all the external IP addresses pingable from one to another. But I was not able to do further tests.

From your point of view, am I missing something?

Many thanks,

--
Raoul Scarazzini
rasca at redhat.com

From mpavlase at redhat.com  Fri Oct 30 09:59:27 2015
From: mpavlase at redhat.com (Martin Pavlásek)
Date: Fri, 30 Oct 2015 10:59:27 +0100
Subject: [Rdo-list] Kilo Horizon session timeout and cookie
In-Reply-To: <56315C5B.9000605@redhat.com>
References: <56315C5B.9000605@redhat.com>
Message-ID: <56333F7F.5020205@redhat.com>

The exact same behaviour reminds me of this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1270213, which actually is: https://bugzilla.redhat.com/show_bug.cgi?id=1255369

Martin

On 29/10/15 00:38, Matthias Runge wrote:
> On 28/10/15 18:17, Tom Buskey wrote:
>> If you stay logged into the Horizon dashboard, it'll timeout.
>>
>> You cannot login until you delete the cookies in your browser for the
>> Horizon server.
>>
>> For CentOS 7 (and probably RHEL 7 as well),
>> python-django-openstack-auth-1.2.0-4.el7.noarch.rpm is the installed rpm
>> from the RDO repo. Supposedly there is an updated rpm
>> (https://bugzilla.redhat.com/show_bug.cgi?id=1218894) which is 1.2.0-6.
>>
>> Are there any plans to put an updated rpm on the repo? Do I have to
>> spin or patch my own rpm to get Kilo working?
>>
> Alan mentioned in
> https://bugzilla.redhat.com/show_bug.cgi?id=1218894#c22
> that both required builds are in the RDO kilo testing repo.
>
> Matthias
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ramkumar.gowrishankar at nuagenetworks.net  Fri Oct 30 11:39:28 2015
From: ramkumar.gowrishankar at nuagenetworks.net (Ramkumar GOWRISHANKAR)
Date: Fri, 30 Oct 2015 07:39:28 -0400
Subject: [Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs
In-Reply-To:
References:
Message-ID:

Marius,

Ok, I will try that out again. I reduced to just a controller since controller + compute was failing for me. I am making changes to the templates and images, so this is not a stock deployment.
But my question still stands for debugging: in a previous try with a controller, the nodes come up but the deployment fails in the ControllerNodesPostDeployment step, I am not able to ssh into the controller, and heat-engine.log is not giving any useful information.

Ramkumar

On Fri, Oct 30, 2015 at 4:12 AM, Marius Cornea wrote:
> Hi,
>
> You should try at least 1 controller + 1 compute; I don't think a
> deployment with just one controller is supposed to work.
>
> Thanks,
> Marius
>
> On Thu, Oct 29, 2015 at 5:27 PM, Ramkumar GOWRISHANKAR wrote:
> > Hi,
> >
> > My virtual test bed deployment with just one controller and no computes is
> > failing at ControllerNodesPostDeployment. The debug steps for when a
> > deployment fails tell me to run the following command: "heat resource-show
> > overcloud ControllerNodesPostDeployment". When I run the command, I see 3
> > URLs starting with http://192.0.2.1:8004.
> > How do I access these URLs? When I try a wget on these URLs, or when I
> > create an ssh tunnel from the base machine and try to access the URLs, I
> > get a permission denied message. When I try to access just the base URL
> > (http://192.0.2.1:8004 mapped to http://localhost:8005) via a tunnel, I get
> > the following message:
> > {"versions": [{"status":"CURRENT", "id": "v1.0", "links": [{"href":"http://localhost:8005/v1/","rel":"self"}]}]}
> >
> > I have looked through the /var/log/heat/ folder for any error messages, but
> > I cannot find any more detailed error message other than deployment failed
> > at step 1 LoadBalancer.
> >
> > Any pointers on how to debug a deployment?
> >
> > Thanks,
> >
> > Ramkumar
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com

From pgsousa at gmail.com  Fri Oct 30 11:47:15 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 30 Oct 2015 11:47:15 +0000
Subject: [Rdo-list] RDO-Manager HA Pacemaker in Compute Nodes
Message-ID:

Hi all,

I would like to be able to automatically recover the VMs when a compute node dies, as described here:
http://blog.clusterlabs.org/blog/2015/openstack-ha-compute/

I've checked that I have pacemaker_remote.service and the NovaCompute/NovaEvacuate pacemaker resources on my compute nodes, but they don't seem to be configured/running:

[root at overcloud-novacompute-0 openstack]# systemctl list-unit-files | grep pacemaker
pacemaker.service                             disabled
pacemaker_remote.service                      disabled

[root at overcloud-novacompute-0 openstack]# pcs status
Error: cluster is not currently running on this node

Is there a way to activate this on stack deployment? Or do I have to customize it?

Thanks,
Pedro Sousa
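For reference, the liberty templates do not appear to wire this up out of the box, so it has to be added by hand (or through a post-deploy extraconfig hook). A rough sketch of the manual wiring from the clusterlabs post above; the node name, auth URL and credentials are purely illustrative and must match the actual overcloud:

    # on each compute node: install the cluster authkey (same file as
    # /etc/pacemaker/authkey on the controllers), then start the remote agent
    sudo mkdir -p /etc/pacemaker
    sudo systemctl enable pacemaker_remote
    sudo systemctl start pacemaker_remote

    # on one controller: register the compute as a pacemaker remote node...
    sudo pcs resource create overcloud-novacompute-0 ocf:pacemaker:remote

    # ...and add the evacuation resource; parameters must match your overcloudrc
    sudo pcs resource create nova-evacuate ocf:openstack:NovaEvacuate \
        auth_url=http://192.0.2.6:5000/v2.0 username=admin \
        password=secret tenant_name=admin

This is only an outline of the pattern described in the post, not a tested recipe; the full setup there also covers fencing, which is required before evacuation is safe.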
From marius at remote-lab.net  Fri Oct 30 12:20:17 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Fri, 30 Oct 2015 13:20:17 +0100
Subject: [Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs
In-Reply-To:
References:
Message-ID:

Some good Heat template debugging tips can be found at:
http://hardysteven.blogspot.co.uk/2015/04/debugging-tripleo-heat-templates.html

If you're not able to ssh to the controller, I'd check first if the nova instances end up in the active state, and watch whether at any point you're able to reach them via the ctlplane IP address; maybe the template changes break the connectivity. If so, you could enable root login via a firstboot config[1] and access the nodes via console to see what happened, or do this in the image with virt-customize[2].

[1] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/extra_config.html#firstboot-extra-configuration
[2] virt-customize -a overcloud-full.qcow2 --root-password password:rootpass

On Fri, Oct 30, 2015 at 12:39 PM, Ramkumar GOWRISHANKAR wrote:
> Marius,
>
> Ok, I will try that out again. I reduced to just a controller since
> controller + compute was failing for me. I am making changes to the
> templates and images, so this is not a stock deployment. But my question
> still stands for debugging: in a previous try with a controller, the nodes
> come up but the deployment fails in the ControllerNodesPostDeployment step,
> I am not able to ssh into the controller, and heat-engine.log is not giving
> any useful information.
>
> Ramkumar
>
> On Fri, Oct 30, 2015 at 4:12 AM, Marius Cornea wrote:
>>
>> Hi,
>>
>> You should try at least 1 controller + 1 compute; I don't think a
>> deployment with just one controller is supposed to work.
>>
>> Thanks,
>> Marius
>>
>> On Thu, Oct 29, 2015 at 5:27 PM, Ramkumar GOWRISHANKAR wrote:
>> > Hi,
>> >
>> > My virtual test bed deployment with just one controller and no computes is
>> > failing at ControllerNodesPostDeployment. The debug steps for when a
>> > deployment fails tell me to run the following command: "heat resource-show
>> > overcloud ControllerNodesPostDeployment". When I run the command, I see 3
>> > URLs starting with http://192.0.2.1:8004.
>> > How do I access these URLs? When I try a wget on these URLs, or when I
>> > create an ssh tunnel from the base machine and try to access the URLs, I
>> > get a permission denied message. When I try to access just the base URL
>> > (http://192.0.2.1:8004 mapped to http://localhost:8005) via a tunnel, I get
>> > the following message:
>> > {"versions": [{"status":"CURRENT", "id": "v1.0", "links": [{"href":"http://localhost:8005/v1/","rel":"self"}]}]}
>> >
>> > I have looked through the /var/log/heat/ folder for any error messages, but
>> > I cannot find any more detailed error message other than deployment failed
>> > at step 1 LoadBalancer.
>> >
>> > Any pointers on how to debug a deployment?
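For reference, those :8004 links are the undercloud heat-api's resource URLs; they expect an authenticated request carrying a keystone token (the heat CLI adds it for you), which is why a bare wget is refused. The drill-down described in the blog post linked above boils down to roughly the following, run as the stack user on the undercloud; the deployment id placeholder comes from the failed resource's physical_resource_id:

    $ source ~/stackrc
    $ heat resource-list --nested-depth 5 overcloud | grep -i failed
    $ heat resource-show overcloud ControllerNodesPostDeployment
    $ heat deployment-show <deployment-id>   # shows deploy_stdout/deploy_stderr of the failed step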
>> >
>> > Thanks,
>> >
>> > Ramkumar
>> >
>> > _______________________________________________
>> > Rdo-list mailing list
>> > Rdo-list at redhat.com
>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >
>> > To unsubscribe: rdo-list-unsubscribe at redhat.com

From tom at buskey.name  Fri Oct 30 13:18:28 2015
From: tom at buskey.name (Tom Buskey)
Date: Fri, 30 Oct 2015 09:18:28 -0400
Subject: [Rdo-list] Kilo Horizon session timeout and cookie
In-Reply-To: <56333F7F.5020205@redhat.com>
References: <56315C5B.9000605@redhat.com> <56333F7F.5020205@redhat.com>
Message-ID:

The bug reports say you need to add AUTH_USER_MODE and SESSION_ENGINE to /etc/openstack-dashboard/local_settings, but neither rpm does. The only way to know about it is to read the bug reports.

On Fri, Oct 30, 2015 at 5:59 AM, Martin Pavlásek wrote:

> The exact same behaviour reminds me of this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1270213, which actually is:
> https://bugzilla.redhat.com/show_bug.cgi?id=1255369
>
> Martin
>
> On 29/10/15 00:38, Matthias Runge wrote:
> > On 28/10/15 18:17, Tom Buskey wrote:
> >> If you stay logged into the Horizon dashboard, it'll timeout.
> >>
> >> You cannot login until you delete the cookies in your browser for the
> >> Horizon server.
> >>
> >> For CentOS 7 (and probably RHEL 7 as well),
> >> python-django-openstack-auth-1.2.0-4.el7.noarch.rpm is the installed rpm
> >> from the RDO repo. Supposedly there is an updated rpm
> >> (https://bugzilla.redhat.com/show_bug.cgi?id=1218894) which is 1.2.0-6.
> >>
> >> Are there any plans to put an updated rpm on the repo? Do I have to
> >> spin or patch my own rpm to get Kilo working?
> >>
> > Alan mentioned in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1218894#c22
> > that both required builds are in the RDO kilo testing repo.
> >
> > Matthias
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From alessandro at namecheap.com  Fri Oct 30 14:10:01 2015
From: alessandro at namecheap.com (Alessandro Vozza)
Date: Fri, 30 Oct 2015 15:10:01 +0100
Subject: [Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs
In-Reply-To:
References:
Message-ID: <455CDB35-44D6-4215-9771-40CA1CEEC081@namecheap.com>

Hi Marius,

Is that trick in [2] also working for the discovery image? I'd love to inspect a failed discovery; can I use the same virt-customize on the ironic-python-agent.vmlinuz image, do you think?

thanks

> On 30 Oct 2015, at 13:20, Marius Cornea wrote:
>
> Some good Heat template debugging tips can be found at:
> http://hardysteven.blogspot.co.uk/2015/04/debugging-tripleo-heat-templates.html
>
> If you're not able to ssh to the controller, I'd check first if the
> nova instances end up in the active state, and watch whether at any point
> you're able to reach them via the ctlplane IP address; maybe the template
> changes break the connectivity.
> If so, you could enable root login via
> a firstboot config[1] and access the nodes via console to see what
> happened, or do this in the image with virt-customize[2].
>
> [1] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/extra_config.html#firstboot-extra-configuration
> [2] virt-customize -a overcloud-full.qcow2 --root-password password:rootpass
>
> On Fri, Oct 30, 2015 at 12:39 PM, Ramkumar GOWRISHANKAR wrote:
>> Marius,
>>
>> Ok, I will try that out again. I reduced to just a controller since
>> controller + compute was failing for me. I am making changes to the
>> templates and images, so this is not a stock deployment. But my question
>> still stands for debugging: in a previous try with a controller, the nodes
>> come up but the deployment fails in the ControllerNodesPostDeployment step,
>> I am not able to ssh into the controller, and heat-engine.log is not giving
>> any useful information.
>>
>> Ramkumar
>>
>> On Fri, Oct 30, 2015 at 4:12 AM, Marius Cornea wrote:
>>>
>>> Hi,
>>>
>>> You should try at least 1 controller + 1 compute; I don't think a
>>> deployment with just one controller is supposed to work.
>>>
>>> Thanks,
>>> Marius
>>>
>>> On Thu, Oct 29, 2015 at 5:27 PM, Ramkumar GOWRISHANKAR wrote:
>>>> Hi,
>>>>
>>>> My virtual test bed deployment with just one controller and no computes is
>>>> failing at ControllerNodesPostDeployment. The debug steps for when a
>>>> deployment fails tell me to run the following command: "heat resource-show
>>>> overcloud ControllerNodesPostDeployment". When I run the command, I see 3
>>>> URLs starting with http://192.0.2.1:8004.
>>>> How do I access these URLs? When I try a wget on these URLs, or when I
>>>> create an ssh tunnel from the base machine and try to access the URLs, I
>>>> get a permission denied message. When I try to access just the base URL
>>>> (http://192.0.2.1:8004 mapped to http://localhost:8005) via a tunnel, I get
>>>> the following message:
>>>> {"versions": [{"status":"CURRENT", "id": "v1.0", "links": [{"href":"http://localhost:8005/v1/","rel":"self"}]}]}
>>>>
>>>> I have looked through the /var/log/heat/ folder for any error messages, but
>>>> I cannot find any more detailed error message other than deployment failed
>>>> at step 1 LoadBalancer.
>>>>
>>>> Any pointers on how to debug a deployment?
>>>>
>>>> Thanks,
>>>>
>>>> Ramkumar
>>>>
>>>> _______________________________________________
>>>> Rdo-list mailing list
>>>> Rdo-list at redhat.com
>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>
>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From marius at remote-lab.net  Sat Oct 31 13:26:55 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sat, 31 Oct 2015 14:26:55 +0100
Subject: [Rdo-list] Network Isolation setup check
In-Reply-To: <56333C0F.5000504@redhat.com>
References: <56333C0F.5000504@redhat.com>
Message-ID:

Hi Raoul,

A couple of notes for the controller.yaml template. You're adding both interfaces to the br-ex bridge and you're assigning the IP addresses on top of the nics themselves. While this might work in terms of connectivity, when adding the physical nics to the ovs bridge you should leverage the ovs internal ports for IP address assignment.
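Illustratively, the shape of such a bridge as an os-net-config snippet; the address and nic name are made up, this is only a sketch of the pattern, and --noop should just print the generated ifcfg files without applying them:

    cat > /tmp/bridge-sketch.yaml <<'EOF'
    network_config:
      - type: ovs_bridge
        name: br-ctlplane
        use_dhcp: false
        addresses:
          - ip_netmask: 192.0.2.10/24   # the IP lives on the bridge's OVS internal port
        members:
          - type: interface
            name: nic2
            primary: true               # bridge takes this NIC's MAC; no IP on the NIC itself
    EOF
    sudo os-net-config -c /tmp/bridge-sketch.yaml --noop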
Also be careful when bridging 2 unbonded physical nics, as you might create loops in the network.

Here's my approach: create 2 bridges, br-ex containing nic1 with the external network IP set on the untagged port, and br-ctlplane containing nic2 with the ctlplane network IP set on untagged and the other vlans on tagged ports:
http://paste.openstack.org/show/477729/

On Fri, Oct 30, 2015 at 10:44 AM, Raoul Scarazzini wrote:
> Hi everybody,
> I'm trying to deploy a tripleo environment with network isolation, using a
> pool of 8 machines: 3 controller, 2 compute and 3 storage. Each of those
> machines has two network interfaces, the first one (em1) connected to the
> LAN, the second one (em2) used for the undercloud provisioning.
>
> The ultimate goal of the setup is to have the ExternalNet on em1 (so as to
> be able to put instances with floating IPs in the LAN) and all the other
> networks (InternalApi, Storage and StorageMgmt) on em2.
>
> To produce what is described above, I created this network-environment.yaml
> configuration:
>
> resource_registry:
>   OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/nic-configs/cinder-storage.yaml
>   OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/nic-configs/compute.yaml
>   OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/nic-configs/controller.yaml
>   OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/nic-configs/swift-storage.yaml
>   OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/nic-configs/ceph-storage.yaml
>
> parameter_defaults:
>   # Customize the IP subnets to match the local environment
>   InternalApiNetCidr: 172.17.0.0/24
>   StorageNetCidr: 172.18.0.0/24
>   StorageMgmtNetCidr: 172.19.0.0/24
>   TenantNetCidr: 172.16.0.0/24
>   ExternalNetCidr: 10.1.240.0/24
>   ControlPlaneSubnetCidr: '24'
>   InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
>   StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
>   StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
>   TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
>   ExternalAllocationPools: [{'start': '10.1.240.10', 'end': '10.1.240.200'}]
>   # Specify the gateway on the external network.
>   ExternalInterfaceDefaultRoute: 10.1.240.254
>   # Gateway router for the provisioning network (or Undercloud IP)
>   ControlPlaneDefaultRoute: 192.0.2.1
>   # Generally the IP of the Undercloud
>   EC2MetadataIp: 192.0.2.1
>   DnsServers: ["10.1.241.2"]
>   InternalApiNetworkVlanID: 2201
>   StorageNetworkVlanID: 2203
>   StorageMgmtNetworkVlanID: 2204
>   TenantNetworkVlanID: 2202
>   # This won't actually be used since external is on native VLAN, just here for reference
>   #ExternalNetworkVlanID: 38
>   # Floating IP networks do not have to use br-ex, they can use any bridge as long as the NeutronExternalNetworkBridge is set to "''".
>   NeutronExternalNetworkBridge: "''"
>
> And I modified the controller.yaml file in this way (default parts are
> omitted; nic1 == em1 and nic2 == em2):
>
> ...
> ...
> resources:
>   OsNetConfigImpl:
>     type: OS::Heat::StructuredConfig
>     properties:
>       group: os-apply-config
>       config:
>         os_net_config:
>           network_config:
>             -
>               type: ovs_bridge
>               name: {get_input: bridge_name}
>               use_dhcp: false
>               dns_servers: {get_param: DnsServers}
>               addresses:
>                 -
>                   ip_netmask:
>                     list_join:
>                       - '/'
>                       - - {get_param: ControlPlaneIp}
>                         - {get_param: ControlPlaneSubnetCidr}
>               routes:
>                 -
>                   ip_netmask: 169.254.169.254/32
>                   next_hop: {get_param: EC2MetadataIp}
>               members:
>                 -
>                   type: interface
>                   name: nic1
>                   addresses:
>                     -
>                       ip_netmask: {get_param: ExternalIpSubnet}
>                   routes:
>                     -
>                       ip_netmask: 0.0.0.0/0
>                       next_hop: {get_param: ExternalInterfaceDefaultRoute}
>                 -
>                   type: interface
>                   name: nic2
>                   # force the MAC address of the bridge to this interface
>                   primary: true
>                 -
>                   type: vlan
>                   vlan_id: {get_param: InternalApiNetworkVlanID}
>                   addresses:
>                     -
>                       ip_netmask: {get_param: InternalApiIpSubnet}
>                 -
>                   type: vlan
>                   vlan_id: {get_param: StorageNetworkVlanID}
>                   addresses:
>                     -
>                       ip_netmask: {get_param: StorageIpSubnet}
>                 -
>                   type: vlan
>                   vlan_id: {get_param: StorageMgmtNetworkVlanID}
>                   addresses:
>                     -
>                       ip_netmask: {get_param: StorageMgmtIpSubnet}
>                 -
>                   type: vlan
>                   vlan_id: {get_param: TenantNetworkVlanID}
>                   addresses:
>                     -
>                       ip_netmask: {get_param: TenantIpSubnet}
>
> outputs:
>   OS::stack_id:
>     description: The OsNetConfigImpl resource.
>     value: {get_resource: OsNetConfigImpl}
>
> The deploy of the overcloud was invoked with this command:
>
> openstack overcloud deploy --templates --libvirt-type=kvm --ntp-server 10.5.26.10 --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 --block-storage-scale 0 --swift-storage-scale 0 --control-flavor baremetal --compute-flavor baremetal --ceph-storage-flavor baremetal --block-storage-flavor baremetal --swift-storage-flavor baremetal --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network-environment.yaml
>
> Now the point is that I need to know if my configurations are formally
> correct, since I got some network problems once the post-deployment steps
> were done.
> I still don't know (we're investigating) if those problems are related
> to the switch configuration (so, hardware side), but for some reason
> everything exploded.
> What I saw while the machines were reachable was what I was expecting:
> the external address assigned to em1 and the VLANs correctly assigned to
> em2, with all the external IP addresses pingable from one to another.
> But I was not able to do further tests.
>
> From your point of view, am I missing something?
>
> Many thanks,
>
> --
> Raoul Scarazzini
> rasca at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From marius at remote-lab.net  Sat Oct 31 12:46:33 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Sat, 31 Oct 2015 13:46:33 +0100
Subject: [Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs
In-Reply-To: <455CDB35-44D6-4215-9771-40CA1CEEC081@namecheap.com>
References: <455CDB35-44D6-4215-9771-40CA1CEEC081@namecheap.com>
Message-ID:

Hi Alessandro,

No, I'm afraid it won't work with the ramdisk.

On Fri, Oct 30, 2015 at 3:10 PM, Alessandro Vozza wrote:
> Hi Marius,
>
> Is that trick in [2] also working for the discovery image? I'd love to
> inspect a failed discovery; can I use the same virt-customize on the
> ironic-python-agent.vmlinuz image, do you think?
>
> thanks
>
>> On 30 Oct 2015, at 13:20, Marius Cornea wrote:
>>
>> Some good Heat template debugging tips can be found at:
>> http://hardysteven.blogspot.co.uk/2015/04/debugging-tripleo-heat-templates.html
>>
>> If you're not able to ssh to the controller, I'd check first if the
>> nova instances end up in the active state, and watch whether at any point
>> you're able to reach them via the ctlplane IP address; maybe the template
>> changes break the connectivity. If so, you could enable root login via
>> a firstboot config[1] and access the nodes via console to see what
>> happened, or do this in the image with virt-customize[2].
>>
>> [1] https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/advanced_deployment/extra_config.html#firstboot-extra-configuration
>> [2] virt-customize -a overcloud-full.qcow2 --root-password password:rootpass
>>
>> On Fri, Oct 30, 2015 at 12:39 PM, Ramkumar GOWRISHANKAR wrote:
>>> Marius,
>>>
>>> Ok, I will try that out again. I reduced to just a controller since
>>> controller + compute was failing for me. I am making changes to the
>>> templates and images, so this is not a stock deployment. But my question
>>> still stands for debugging: in a previous try with a controller, the nodes
>>> come up but the deployment fails in the ControllerNodesPostDeployment step,
>>> I am not able to ssh into the controller, and heat-engine.log is not giving
>>> any useful information.
>>>
>>> Ramkumar
>>>
>>> On Fri, Oct 30, 2015 at 4:12 AM, Marius Cornea wrote:
>>>>
>>>> Hi,
>>>>
>>>> You should try at least 1 controller + 1 compute; I don't think a
>>>> deployment with just one controller is supposed to work.
>>>>
>>>> Thanks,
>>>> Marius
>>>>
>>>> On Thu, Oct 29, 2015 at 5:27 PM, Ramkumar GOWRISHANKAR wrote:
>>>>> Hi,
>>>>>
>>>>> My virtual test bed deployment with just one controller and no computes is
>>>>> failing at ControllerNodesPostDeployment. The debug steps for when a
>>>>> deployment fails tell me to run the following command: "heat resource-show
>>>>> overcloud ControllerNodesPostDeployment". When I run the command, I see 3
>>>>> URLs starting with http://192.0.2.1:8004.
>>>>> How do I access these URLs? When I try a wget on these URLs, or when I
>>>>> create an ssh tunnel from the base machine and try to access the URLs, I
>>>>> get a permission denied message. When I try to access just the base URL
>>>>> (http://192.0.2.1:8004 mapped to http://localhost:8005) via a tunnel, I get
>>>>> the following message:
>>>>> {"versions": [{"status":"CURRENT", "id": "v1.0", "links": [{"href":"http://localhost:8005/v1/","rel":"self"}]}]}
>>>>>
>>>>> I have looked through the /var/log/heat/ folder for any error messages, but
>>>>> I cannot find any more detailed error message other than deployment failed
>>>>> at step 1 LoadBalancer.
>>>>>
>>>>> Any pointers on how to debug a deployment?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Ramkumar
>>>>>
>>>>> _______________________________________________
>>>>> Rdo-list mailing list
>>>>> Rdo-list at redhat.com
>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>
>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
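For anyone hitting the same wall: the agent ramdisk can usually still be edited by hand, since an initramfs is just a gzipped cpio archive. A rough sketch, with the /httpboot path assumed from a default liberty undercloud; adjust to wherever your agent ramdisk actually lives, and treat this as an outline rather than a tested recipe:

    mkdir ipa && cd ipa
    zcat /httpboot/agent.ramdisk | sudo cpio -idm     # unpack (path assumed)
    # drop an ssh key into root/.ssh/authorized_keys or set a hash in etc/shadow, then repack:
    sudo find . | sudo cpio -o -H newc | gzip -9 > /tmp/agent.ramdisk.new
    sudo cp /tmp/agent.ramdisk.new /httpboot/agent.ramdisk   # so the next PXE boot picks it up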