From gilles at redhat.com Mon Feb 4 00:59:17 2013
From: gilles at redhat.com (Gilles Dubreuil)
Date: Mon, 04 Feb 2013 11:59:17 +1100
Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster
In-Reply-To: <421EE192CD0C6C49A23B97027914202B03E13B@marchand>
References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon>
	<5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com>
	<421EE192CD0C6C49A23B97027914202B03B17C@marathon>
	<421EE192CD0C6C49A23B97027914202B03B233@marathon>
	<421EE192CD0C6C49A23B97027914202B03E13B@marchand>
Message-ID: <1359939557.2145.10.camel@gil.surfgate.org>

On Mon, 2013-01-28 at 12:47 +0000, Derrick H. Karimi wrote:
> On 01/25/2013 05:32 PM, Derrick H. Karimi wrote:
> > On 01/25/2013 04:04 PM, Derrick H. Karimi wrote:
> >> On 01/25/2013 03:57 PM, Perry Myers wrote:
> >>> On 01/25/2013 03:43 PM, Paul Robert Marino wrote:
> >>>
> >>> The Essex packages are in a different channel than the Folsom packages.
> >>> Unfortunately I'm not sure if that channel is still available, which
> >>> may explain why you don't see them.
> >>> The RHOS Preview should still provide access to both the Essex and
> >>> Folsom channels.
> >> OK, I am trying to figure out how to get the Preview into the Satellite
> >> server, or whether the Satellite server does for sure have Essex on it.
> > yum --showduplicates openstack-nova-common* reveals that 2012.1.3-1
> > should be available. I am going to try to force yum to install that
> > somehow.
> I got one of my team members, Kodiak, involved. He maintains the
> satellite server, and was able to figure out, among other things, that
> we were seeing OpenStack packages from our mirrored EPEL. We are not
> sure if we still need the "preview" but he signed up for it anyway.
>
> I was able to make yum install the Essex packages by specifying complete
> package names. I went to the projects' websites and tried to determine
> which versions matched the state of the Essex release.
>
> yum install openstack-nova-2012.1.3-1.el6.noarch
>     openstack-glance-2012.1-5.el6.noarch
>     openstack-keystone-2012.1.3-1.el6.noarch
>     openstack-swift-1.4.8-1.el6.noarch
>
> Some weird stuff started happening with the dependencies of keystone
> 2012.1.3-1. Once yum processed it, it told me it wanted to install the
> 2012.2 version of the same library! I think I finally got around this
> by grabbing RPMs directly from our satellite's RPM search page on the
> web interface, and then:
>
> rpm -i python-keystone-auth-token-2012.1.3-1.el6.noarch.rpm
> rpm -i python-keystone-2012.1.3-1.el6.noarch.rpm
>
> And then I yum in the Essex versions of the other OpenStack projects.
> In the end I have OpenStack up, and am trying to configure it now. I
> can launch instances, but for some reason I can't delete them.
>
Hi Derrick,

Do you have any specific requirements for Essex instead of Folsom?

Since you're tapping into EPEL6, at least for now, I wonder which EPEL6
repo version you're using, because EPEL6 has had only Folsom packages
since its release last December, superseding Essex.
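
[Editor's note] Derrick's workaround above (installing the exact Essex NVRs with yum, then side-loading the two python-keystone RPMs) can be collapsed into one pinned transaction. The sketch below is a hedged suggestion, not a tested recipe: the package NVRs are the ones quoted in the thread, `build_essex_cmd` is a hypothetical helper, and the `--exclude` glob (a standard yum option) is an assumption intended to keep the depsolver from pulling the Folsom 2012.2 builds mid-transaction.

```shell
# Sketch of the Essex pin-down described in the thread. build_essex_cmd is a
# hypothetical helper that only *prints* the yum transaction, so it can be
# reviewed before being piped to sh. The --exclude glob blacklists the
# Folsom (2012.2) builds that EPEL6 also carries, which is what dragged the
# wrong python-keystone into Derrick's transaction.
build_essex_cmd() {
    printf 'yum install -y --exclude=%s %s %s %s %s\n' \
        "'*-2012.2*'" \
        "openstack-nova-2012.1.3-1.el6.noarch" \
        "openstack-glance-2012.1-5.el6.noarch" \
        "openstack-keystone-2012.1.3-1.el6.noarch" \
        "openstack-swift-1.4.8-1.el6.noarch"
}

build_essex_cmd    # review the output, then run: build_essex_cmd | sh
```

If the depsolver still insists on 2012.2, the two `rpm -i` side-loads from the thread remain the fallback.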
Regards,

Gilles

From ksf at sei.cmu.edu Mon Feb 4 13:29:43 2013
From: ksf at sei.cmu.edu (Kodiak Firesmith)
Date: Mon, 4 Feb 2013 13:29:43 +0000
Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster
In-Reply-To: <1359939557.2145.10.camel@gil.surfgate.org>
References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon>
	<5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com>
	<421EE192CD0C6C49A23B97027914202B03B17C@marathon>
	<421EE192CD0C6C49A23B97027914202B03B233@marathon>
	<421EE192CD0C6C49A23B97027914202B03E13B@marchand>
	<1359939557.2145.10.camel@gil.surfgate.org>
Message-ID: <6A8340D9D5097144961EEF98758D7089130D5D@marathon>

Hello All,
I can't speak to the Essex-over-Folsom requirements part of your
question, but I can cautiously assert that Essex is still available for
RHEL6 via the EPEL repos, based on a table of versions here:

http://docs.openstack.org/folsom/openstack-compute/install/yum/content/version.html#d6e297

...and the output of a yum query this morning from my EL6 workstation:

# yum --showduplicates search openstack-nova-compute
...
N/S Matched: openstack-nova-compute
openstack-nova-compute-2012.1.1-15.el6.noarch : OpenStack Nova Virtual Machine control service
openstack-nova-compute-2012.1.3-1.el6.noarch : OpenStack Nova Virtual Machine control service <- [Essex]
openstack-nova-compute-2012.2-2.el6.noarch : OpenStack Nova Virtual Machine control service <- [Folsom]

# yum info openstack-nova-compute-2012.1.3-1.el6.noarch
Name    : openstack-nova-compute
Arch    : noarch
Version : 2012.1.3
Release : 1.el6
Repo    : epel-x86_64-server-6

- Kodiak Firesmith
Linux System Administrator
Software Engineering Institute | CMU
Office: 412.268.8771
Email: ksf at cert.org | ksf at sei.cmu.edu

-----Original Message-----
From: Gilles Dubreuil [mailto:gilles at redhat.com]
Sent: Sunday, February 03, 2013 7:59 PM
To: Derrick H. Karimi
Cc: Perry Myers; Eric B. Werner; Cliff Perry; rhos-list at redhat.com; Kodiak Firesmith; Todd Sanders
Subject: Re: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster

On Mon, 2013-01-28 at 12:47 +0000, Derrick H. Karimi wrote:
> On 01/25/2013 05:32 PM, Derrick H. Karimi wrote:
> > On 01/25/2013 04:04 PM, Derrick H. Karimi wrote:
> >> On 01/25/2013 03:57 PM, Perry Myers wrote:
> >>> On 01/25/2013 03:43 PM, Paul Robert Marino wrote:
> >>>
> >>> The Essex packages are in a different channel than the Folsom packages.
> >>> Unfortunately I'm not sure if that channel is still available, which
> >>> may explain why you don't see them.
> >>> The RHOS Preview should still provide access to both the Essex and
> >>> Folsom channels.
> >> OK, I am trying to figure out how to get the Preview into the Satellite
> >> server, or whether the Satellite server does for sure have Essex on it.
> > yum --showduplicates openstack-nova-common* reveals that 2012.1.3-1
> > should be available. I am going to try to force yum to install that
> > somehow.
> I got one of my team members, Kodiak, involved. He maintains the
> satellite server, and was able to figure out, among other things, that
> we were seeing OpenStack packages from our mirrored EPEL. We are not
> sure if we still need the "preview" but he signed up for it anyway.
>
> I was able to make yum install the Essex packages by specifying complete
> package names. I went to the projects' websites and tried to determine
> which versions matched the state of the Essex release.
>
> yum install openstack-nova-2012.1.3-1.el6.noarch
>     openstack-glance-2012.1-5.el6.noarch
>     openstack-keystone-2012.1.3-1.el6.noarch
>     openstack-swift-1.4.8-1.el6.noarch
>
> Some weird stuff started happening with the dependencies of keystone
> 2012.1.3-1. Once yum processed it, it told me it wanted to install the
> 2012.2 version of the same library! I think I finally got around this
> by grabbing RPMs directly from our satellite's RPM search page on the
> web interface, and then:
>
> rpm -i python-keystone-auth-token-2012.1.3-1.el6.noarch.rpm
> rpm -i python-keystone-2012.1.3-1.el6.noarch.rpm
>
> And then I yum in the Essex versions of the other OpenStack projects.
> In the end I have OpenStack up, and am trying to configure it now. I
> can launch instances, but for some reason I can't delete them.
>
Hi Derrick,

Do you have any specific requirements for Essex instead of Folsom?

Since you're tapping into EPEL6, at least for now, I wonder which EPEL6
repo version you're using, because EPEL6 has had only Folsom packages
since its release last December, superseding Essex.

Regards,

Gilles
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 6254 bytes
Desc: not available
URL: 

From jlabocki at redhat.com Mon Feb 4 16:16:44 2013
From: jlabocki at redhat.com (James Labocki)
Date: Mon, 4 Feb 2013 11:16:44 -0500 (EST)
Subject: [rhos-list] Packstack Interactive Error
Message-ID: <1189358418.10188188.1359994604797.JavaMail.root@redhat.com>

I ran into the following exception when installing via packstack
interactively. I'm not sure if anyone can help debug it or figure out
what I am doing incorrect.
-James


# packstack
Welcome to Installer setup utility
Should Packstack install Glance ['y'| 'n'] [y] : y
Should Packstack install Cinder ['y'| 'n'] [y] : y
Should Packstack install Nova ['y'| 'n'] [y] : y
Should Packstack install Horizon ['y'| 'n'] [y] : y
Should Packstack install Swift ['y'| 'n'] [n] : y
Should Packstack install openstack client tools ['y'| 'n'] [y] : y
Enter the path to your ssh Public key to install on servers [/root/.ssh/id_rsa.pub] :
Enter the IP address of the MySQL server [10.16.46.104] :
Enter the password for the MySQL admin user :
Enter the IP address of the QPID service [10.16.46.104] :
Enter the IP address of the Keystone server [10.16.46.104] :
Enter the IP address of the Glance server [10.16.46.104] :
Enter the IP address of the Cinder server [10.16.46.104] :
Enter the IP address of the Nova API service [10.16.46.104] :
Enter the IP address of the Nova Cert service [10.16.46.104] :
Enter the IP address of the Nova VNC proxy [10.16.46.104] :
Enter a comma separated list of IP addresses on which to install the Nova Compute services [10.16.46.104] : 10.16.46.104,10.16.46.106
Enter the Private interface for Flat DHCP on the Nova compute servers [eth1] :
Enter the IP address of the Nova Network service [10.16.46.104] :
Enter the Public interface on the Nova network server [eth0] :
Enter the Private interface for Flat DHCP on the Nova network server [eth1] :
Enter the IP Range for Flat DHCP ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] :
Enter the IP Range for Floating IP's ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] :
Enter the IP address of the Nova Scheduler service [10.16.46.104] :
Enter the IP address of the client server [10.16.46.104] :
Enter the IP address of the Horizon server [10.16.46.104] :
Enter the IP address of the Swift proxy service [10.16.46.104] :
Enter the Swift Storage servers e.g. host/dev,host/dev [10.16.46.104] :
Enter the number of swift storage zones, MUST be no bigger than the number of storage devices configured [1] :
Enter the number of swift storage replicas, MUST be no bigger than the number of storage zones configured [1] :
Enter FileSystem type for storage nodes ['xfs'| 'ext4'] [ext4] :
Should packstack install EPEL on each server ['y'| 'n'] [n] : y
Enter a comma separated list of URLs to any additional yum repositories to install:
To subscribe each server to Red Hat enter a username here: james.labocki
To subscribe each server to Red Hat enter your password here :

Installer will be installed using the following configuration:
==============================================================
os-glance-install: y
os-cinder-install: y
os-nova-install: y
os-horizon-install: y
os-swift-install: y
os-client-install: y
ssh-public-key: /root/.ssh/id_rsa.pub
mysql-host: 10.16.46.104
mysql-pw: ********
qpid-host: 10.16.46.104
keystone-host: 10.16.46.104
glance-host: 10.16.46.104
cinder-host: 10.16.46.104
novaapi-host: 10.16.46.104
novacert-host: 10.16.46.104
novavncproxy-hosts: 10.16.46.104
novacompute-hosts: 10.16.46.104,10.16.46.106
novacompute-privif: eth1
novanetwork-host: 10.16.46.104
novanetwork-pubif: eth0
novanetwork-privif: eth1
novanetwork-fixed-range: 192.168.32.0/22
novanetwork-floating-range: 10.3.4.0/22
novasched-host: 10.16.46.104
osclient-host: 10.16.46.104
os-horizon-host: 10.16.46.104
os-swift-proxy: 10.16.46.104
os-swift-storage: 10.16.46.104
os-swift-storage-zones: 1
os-swift-storage-replicas: 1
os-swift-storage-fstype: ext4
use-epel: y
additional-repo:
rh-username: james.labocki
rh-password: ********
Proceed with the configuration listed above? (yes|no): yes

Installing:
Clean Up... [ DONE ]
Running Pre install scripts... [ DONE ]
Setting Up ssh keys...root at 10.16.46.104's password:
root at 10.16.46.104's password:
[ DONE ]
Create MySQL Manifest... [ DONE ]
Creating QPID Manifest... [ DONE ]
Creating Keystone Manifest... [ DONE ]
Adding Glance Keystone Manifest entries... [ DONE ]
Creating Galnce Manifest... [ DONE ]
Adding Cinder Keystone Manifest entries... [ DONE ]
Checking if the Cinder server has a cinder-volumes vg... [ DONE ]
Creating Cinder Manifest... [ DONE ]
Adding Nova API Manifest entries... [ DONE ]
Adding Nova Keystone Manifest entries... [ DONE ]
Adding Nova Cert Manifest entries... [ DONE ]
Adding Nova Compute Manifest entries... [ DONE ]
Adding Nova Network Manifest entries... [ DONE ]
Adding Nova Scheduler Manifest entries... [ DONE ]
Adding Nova VNC Proxy Manifest entries... [ DONE ]
Adding Nova Common Manifest entries... [ DONE ]
Creating OS Client Manifest... [ DONE ]
Creating OS Horizon Manifest... [ DONE ]
Adding Swift Keystone Manifest entries... [ DONE ]
Creating OS Swift builder Manifests... [ DONE ]
Creating OS Swift proxy Manifests... [ DONE ]
Creating OS Swift storage Manifests... [ DONE ]
Creating OS Swift Common Manifests... [ DONE ]
Preparing Servers...ERROR:root:============= STDERR ==========
ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to the list of known hosts.
+ trap t ERR
+++ uname -i
++ '[' x86_64 = x86_64 ']'
++ echo x86_64
+ export EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
+ EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
+ grep 'Red Hat Enterprise Linux' /etc/redhat-release
+ rpm -q epel-release
+ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
+ mkdir -p /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests
+ rpm -q epel-release
+ yum install -y yum-plugin-priorities
Unable to read consumer identity
Warning: RPMDB altered outside of yum.
+ rpm -q epel-release
+ openstack-config --set /etc/yum.repos.d/redhat.repo rhel-server-ost-6-folsom-rpms priority 1
Traceback (most recent call last):
  File "/usr/bin/openstack-config", line 49, in
    conf.readfp(open(cfgfile))
IOError: [Errno 2] No such file or directory: '/etc/yum.repos.d/redhat.repo'
+ true
+ subscription-manager register --username=james.labocki '--password=********' --autosubscribe
+ subscription-manager list --consumed
+ grep -i openstack
++ subscription-manager list --available
++ grep -e 'Red Hat OpenStack' -m 1 -A 2
++ grep 'Pool Id'
++ awk '{print $3}'
+ subscription-manager subscribe --pool
Usage: subscription-manager subscribe [OPTIONS]
++ t
++ exit 2
[ ERROR ]
ERROR:root:Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 795, in main
    _main(confFile)
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 591, in _main
    runSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 567, in runSequences
    controller.runAllSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 57, in runAllSequences
    sequence.run()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 154, in run
    step.run()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 60, in run
    function()
  File "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", line 139, in serverprep
    server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()])
  File "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", line 399, in execute
    raise ScriptRuntimeError("Error running remote script")
ScriptRuntimeError: Error running remote script

Error running remote script
Please check log file /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log for more information

[root at rhc-05 ~]# cat /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log
2013-02-01 15:22:45::ERROR::common_utils::394::root:: ============= STDERR ==========
2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: Permanently added '10.16.46.104' (RSA) to the list of known hosts.
+ trap t ERR
+++ uname -i
++ '[' x86_64 = x86_64 ']'
++ echo x86_64
+ export EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
+ EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
+ grep 'Red Hat Enterprise Linux' /etc/redhat-release
+ rpm -q epel-release
+ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
+ mkdir -p /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests
+ rpm -q epel-release
+ yum install -y yum-plugin-priorities
Unable to read consumer identity
Warning: RPMDB altered outside of yum.
+ rpm -q epel-release
+ openstack-config --set /etc/yum.repos.d/redhat.repo rhel-server-ost-6-folsom-rpms priority 1
Traceback (most recent call last):
  File "/usr/bin/openstack-config", line 49, in
    conf.readfp(open(cfgfile))
IOError: [Errno 2] No such file or directory: '/etc/yum.repos.d/redhat.repo'
+ true
+ subscription-manager register --username=james.labocki '--password=********' --autosubscribe
+ subscription-manager list --consumed
+ grep -i openstack
++ subscription-manager list --available
++ grep -e 'Red Hat OpenStack' -m 1 -A 2
++ grep 'Pool Id'
++ awk '{print $3}'
+ subscription-manager subscribe --pool
Usage: subscription-manager subscribe [OPTIONS]
++ t
++ exit 2
2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 795, in main
    _main(confFile)
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 591, in _main
    runSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 567, in runSequences
    controller.runAllSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 57, in runAllSequences
    sequence.run()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 154, in run
    step.run()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 60, in run
    function()
  File "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", line 139, in serverprep
    server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()])
  File "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", line 399, in execute
    raise ScriptRuntimeError("Error running remote script")
ScriptRuntimeError: Error running remote script

From dhkarimi at sei.cmu.edu Mon Feb 4 16:22:39 2013
From: dhkarimi at sei.cmu.edu (Derrick H. Karimi)
Date: Mon, 4 Feb 2013 16:22:39 +0000
Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster
In-Reply-To: <1359939557.2145.10.camel@gil.surfgate.org>
References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon>
	<5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com>
	<421EE192CD0C6C49A23B97027914202B03B17C@marathon>
	<421EE192CD0C6C49A23B97027914202B03B233@marathon>
	<421EE192CD0C6C49A23B97027914202B03E13B@marchand>
	<1359939557.2145.10.camel@gil.surfgate.org>
Message-ID: <421EE192CD0C6C49A23B97027914202B01921B7C@marathon>

On 02/03/2013 07:59 PM, Gilles Dubreuil wrote:
> On Mon, 2013-01-28 at 12:47 +0000, Derrick H. Karimi wrote:
>> On 01/25/2013 05:32 PM, Derrick H. Karimi wrote:
>>> On 01/25/2013 04:04 PM, Derrick H. Karimi wrote:
>>>> On 01/25/2013 03:57 PM, Perry Myers wrote:
>>>>> On 01/25/2013 03:43 PM, Paul Robert Marino wrote:
>>>>>
>>>>> The Essex packages are in a different channel than the Folsom packages.
>>>>> Unfortunately I'm not sure if that channel is still available, which
>>>>> may explain why you don't see them.
>>>>> The RHOS Preview should still provide access to both the Essex and
>>>>> Folsom channels.
>>>> OK, I am trying to figure out how to get the Preview into the Satellite
>>>> server, or whether the Satellite server does for sure have Essex on it.
>>> yum --showduplicates openstack-nova-common* reveals that 2012.1.3-1
>>> should be available. I am going to try to force yum to install that
>>> somehow.
>> I got one of my team members, Kodiak, involved. He maintains the
>> satellite server, and was able to figure out, among other things, that
>> we were seeing OpenStack packages from our mirrored EPEL. We are not
>> sure if we still need the "preview" but he signed up for it anyway.
>>
>> I was able to make yum install the Essex packages by specifying complete
>> package names. I went to the projects' websites and tried to determine
>> which versions matched the state of the Essex release.
>>
>> yum install openstack-nova-2012.1.3-1.el6.noarch
>>     openstack-glance-2012.1-5.el6.noarch
>>     openstack-keystone-2012.1.3-1.el6.noarch
>>     openstack-swift-1.4.8-1.el6.noarch
>>
>> Some weird stuff started happening with the dependencies of keystone
>> 2012.1.3-1. Once yum processed it, it told me it wanted to install the
>> 2012.2 version of the same library! I think I finally got around this
>> by grabbing RPMs directly from our satellite's RPM search page on the
>> web interface, and then:
>>
>> rpm -i python-keystone-auth-token-2012.1.3-1.el6.noarch.rpm
>> rpm -i python-keystone-2012.1.3-1.el6.noarch.rpm
>>
>> And then I yum in the Essex versions of the other OpenStack projects.
>> In the end I have OpenStack up, and am trying to configure it now. I
>> can launch instances, but for some reason I can't delete them.
>>
> Hi Derrick,
>
> Do you have any specific requirements for Essex instead of Folsom?

I did. Now, luckily, they have gone away and I am using Folsom.

> Since you're tapping into EPEL6, at least for now, I wonder which EPEL6
> repo version you're using, because EPEL6 has had only Folsom packages
> since its release last December, superseding Essex.

To me it looked like they were available; see Kodiak's email for more
detail.

> Regards,
>
> Gilles

-- 
--Derrick H. Karimi
--Software Developer, SEI Innovation Center
--Carnegie Mellon University

From ykaul at redhat.com Mon Feb 4 16:27:34 2013
From: ykaul at redhat.com (Yaniv Kaul)
Date: Mon, 04 Feb 2013 18:27:34 +0200
Subject: [rhos-list] Packstack Interactive Error
In-Reply-To: <1189358418.10188188.1359994604797.JavaMail.root@redhat.com>
References: <1189358418.10188188.1359994604797.JavaMail.root@redhat.com>
Message-ID: <510FE176.8030304@redhat.com>

On 04/02/13 18:16, James Labocki wrote:
> I ran into the following exception when installing via packstack
> interactively. I'm not sure if anyone can help debug it or figure out
> what I am doing incorrect.
> > -James > > > # packstack > Welcome to Installer setup utility > Should Packstack install Glance ['y'| 'n'] [y] : y > Should Packstack install Cinder ['y'| 'n'] [y] : y > Should Packstack install Nova ['y'| 'n'] [y] : y > Should Packstack install Horizon ['y'| 'n'] [y] : y > Should Packstack install Swift ['y'| 'n'] [n] : y > Should Packstack install openstack client tools ['y'| 'n'] [y] : y > Enter the path to your ssh Public key to install on servers [/root/.ssh/id_rsa.pub] : > Enter the IP address of the MySQL server [10.16.46.104] : > Enter the password for the MySQL admin user : > Enter the IP address of the QPID service [10.16.46.104] : > Enter the IP address of the Keystone server [10.16.46.104] : > Enter the IP address of the Glance server [10.16.46.104] : > Enter the IP address of the Cinder server [10.16.46.104] : > Enter the IP address of the Nova API service [10.16.46.104] : > Enter the IP address of the Nova Cert service [10.16.46.104] : > Enter the IP address of the Nova VNC proxy [10.16.46.104] : > Enter a comma separated list of IP addresses on which to install the Nova Compute services [10.16.46.104] : 10.16.46.104,10.16.46.106 > Enter the Private interface for Flat DHCP on the Nova compute servers [eth1] : > Enter the IP address of the Nova Network service [10.16.46.104] : > Enter the Public interface on the Nova network server [eth0] : > Enter the Private interface for Flat DHCP on the Nova network server [eth1] : > Enter the IP Range for Flat DHCP ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] : > Enter the IP Range for Floating IP's ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : > Enter the IP address of the Nova Scheduler service [10.16.46.104] : > Enter the IP address of the client server [10.16.46.104] : > Enter the IP address of the Horizon server [10.16.46.104] : > Enter the IP address of the Swift proxy service [10.16.46.104] : > Enter the Swift Storage servers e.g. 
host/dev,host/dev [10.16.46.104] : > Enter the number of swift storage zones, MUST be no bigger than the number of storage devices configured [1] : > Enter the number of swift storage replicas, MUST be no bigger than the number of storage zones configured [1] : > Enter FileSystem type for storage nodes ['xfs'| 'ext4'] [ext4] : > Should packstack install EPEL on each server ['y'| 'n'] [n] : y > Enter a comma separated list of URLs to any additional yum repositories to install: > To subscribe each server to Red Hat enter a username here: james.labocki > To subscribe each server to Red Hat enter your password here : > > Installer will be installed using the following configuration: > ============================================================== > os-glance-install: y > os-cinder-install: y > os-nova-install: y > os-horizon-install: y > os-swift-install: y > os-client-install: y > ssh-public-key: /root/.ssh/id_rsa.pub > mysql-host: 10.16.46.104 > mysql-pw: ******** > qpid-host: 10.16.46.104 > keystone-host: 10.16.46.104 > glance-host: 10.16.46.104 > cinder-host: 10.16.46.104 > novaapi-host: 10.16.46.104 > novacert-host: 10.16.46.104 > novavncproxy-hosts: 10.16.46.104 > novacompute-hosts: 10.16.46.104,10.16.46.106 > novacompute-privif: eth1 > novanetwork-host: 10.16.46.104 > novanetwork-pubif: eth0 > novanetwork-privif: eth1 > novanetwork-fixed-range: 192.168.32.0/22 > novanetwork-floating-range: 10.3.4.0/22 > novasched-host: 10.16.46.104 > osclient-host: 10.16.46.104 > os-horizon-host: 10.16.46.104 > os-swift-proxy: 10.16.46.104 > os-swift-storage: 10.16.46.104 > os-swift-storage-zones: 1 > os-swift-storage-replicas: 1 > os-swift-storage-fstype: ext4 > use-epel: y > additional-repo: > rh-username: james.labocki > rh-password: ******** > Proceed with the configuration listed above? (yes|no): yes > > Installing: > Clean Up... [ DONE ] > Running Pre install scripts... 
[ DONE ] > Setting Up ssh keys...root at 10.16.46.104's password: > root at 10.16.46.104's password: > [ DONE ] > Create MySQL Manifest... [ DONE ] > Creating QPID Manifest... [ DONE ] > Creating Keystone Manifest... [ DONE ] > Adding Glance Keystone Manifest entries... [ DONE ] > Creating Galnce Manifest... [ DONE ] > Adding Cinder Keystone Manifest entries... [ DONE ] > Checking if the Cinder server has a cinder-volumes vg... [ DONE ] > Creating Cinder Manifest... [ DONE ] > Adding Nova API Manifest entries... [ DONE ] > Adding Nova Keystone Manifest entries... [ DONE ] > Adding Nova Cert Manifest entries... [ DONE ] > Adding Nova Compute Manifest entries... [ DONE ] > Adding Nova Network Manifest entries... [ DONE ] > Adding Nova Scheduler Manifest entries... [ DONE ] > Adding Nova VNC Proxy Manifest entries... [ DONE ] > Adding Nova Common Manifest entries... [ DONE ] > Creating OS Client Manifest... [ DONE ] > Creating OS Horizon Manifest... [ DONE ] > Adding Swift Keystone Manifest entries... [ DONE ] > Creating OS Swift builder Manifests... [ DONE ] > Creating OS Swift proxy Manifests... [ DONE ] > Creating OS Swift storage Manifests... [ DONE ] > Creating OS Swift Common Manifests... [ DONE ] > Preparing Servers...ERROR:root:============= STDERR ========== > ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to the list of known hosts. > + trap t ERR > +++ uname -i > ++ '[' x86_64 = x86_64 ']' > ++ echo x86_64 > + export EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm Are you running upstream or downstream? Why do you need stuff from EPEL? Y. 
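
[Editor's note] The step that actually fails in the trace is the entitlement lookup: packstack pipes `subscription-manager list --available` through `grep -e 'Red Hat OpenStack' -m 1 -A 2 | grep 'Pool Id' | awk '{print $3}'`, gets an empty string back (no matching pool, which also fits `/etc/yum.repos.d/redhat.repo` never having been generated), and then runs `subscription-manager subscribe --pool` with no argument, producing the usage error and `exit 2`. Below is a minimal sketch of that extraction with a guard so the real cause surfaces; `extract_pool_id` is a hypothetical name, but the pipeline inside it is copied from the trace.

```shell
# extract_pool_id reads `subscription-manager list --available` output on
# stdin and prints the first "Red Hat OpenStack" Pool Id, or nothing when no
# such pool exists. The pipeline is the one packstack runs in the trace.
extract_pool_id() {
    grep -e 'Red Hat OpenStack' -m 1 -A 2 | grep 'Pool Id' | awk '{print $3}'
}

# Guarded use (needs a registered host, so it is shown but commented out):
# pool_id="$(subscription-manager list --available | extract_pool_id)"
# if [ -z "$pool_id" ]; then
#     echo "no 'Red Hat OpenStack' pool available to this account" >&2
#     exit 2
# fi
# subscription-manager subscribe --pool "$pool_id"
```

With the guard in place the run would stop with a clear message instead of the bare `Usage:` line, pointing at a missing OpenStack entitlement on the account rather than at packstack itself.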
> + EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > + rpm -q epel-release > + rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY > + mkdir -p /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > + rpm -q epel-release > + yum install -y yum-plugin-priorities > Unable to read consumer identity > Warning: RPMDB altered outside of yum. > + rpm -q epel-release > + openstack-config --set /etc/yum.repos.d/redhat.repo rhel-server-ost-6-folsom-rpms priority 1 > Traceback (most recent call last): > File "/usr/bin/openstack-config", line 49, in > conf.readfp(open(cfgfile)) > IOError: [Errno 2] No such file or directory: '/etc/yum.repos.d/redhat.repo' > + true > + subscription-manager register --username=james.labocki '--password=********' --autosubscribe > + subscription-manager list --consumed > + grep -i openstack > ++ subscription-manager list --available > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > ++ grep 'Pool Id' > ++ awk '{print $3}' > + subscription-manager subscribe --pool > Usage: subscription-manager subscribe [OPTIONS] > > ++ t > ++ exit 2 > > [ ERROR ] > ERROR:root:Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 795, in main > _main(confFile) > File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 591, in _main > runSequences() > File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 567, in runSequences > controller.runAllSequences() > File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 57, in runAllSequences > sequence.run() > File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 154, in run > step.run() > File 
"/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 60, in run > function() > File "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", line 139, in serverprep > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > File "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", line 399, in execute > raise ScriptRuntimeError("Error running remote script") > ScriptRuntimeError: Error running remote script > > Error running remote script > Please check log file /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log for more information > [root at rhc-05 ~]# cat /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > 2013-02-01 15:22:45::ERROR::common_utils::394::root:: ============= STDERR ========== > 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: Permanently added '10.16.46.104' (RSA) to the list of known hosts. > + trap t ERR > +++ uname -i > ++ '[' x86_64 = x86_64 ']' > ++ echo x86_64 > + export EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > + rpm -q epel-release > + rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY > + mkdir -p /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > + rpm -q epel-release > + yum install -y yum-plugin-priorities > Unable to read consumer identity > Warning: RPMDB altered outside of yum. 
> + rpm -q epel-release > + openstack-config --set /etc/yum.repos.d/redhat.repo rhel-server-ost-6-folsom-rpms priority 1 > Traceback (most recent call last): > File "/usr/bin/openstack-config", line 49, in > conf.readfp(open(cfgfile)) > IOError: [Errno 2] No such file or directory: '/etc/yum.repos.d/redhat.repo' > + true > + subscription-manager register --username=james.labocki '--password=********' --autosubscribe > + subscription-manager list --consumed > + grep -i openstack > ++ subscription-manager list --available > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > ++ grep 'Pool Id' > ++ awk '{print $3}' > + subscription-manager subscribe --pool > Usage: subscription-manager subscribe [OPTIONS] > > ++ t > ++ exit 2 > > 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 795, in main > _main(confFile) > File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 591, in _main > runSequences() > File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 567, in runSequences > controller.runAllSequences() > File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 57, in runAllSequences > sequence.run() > File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 154, in run > step.run() > File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 60, in run > function() > File "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", line 139, in serverprep > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > File "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", line 399, in execute > raise ScriptRuntimeError("Error running remote script") > ScriptRuntimeError: Error running remote script > > > > _______________________________________________ > rhos-list mailing list > 
rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From jlabocki at redhat.com Mon Feb 4 16:36:23 2013 From: jlabocki at redhat.com (James Labocki) Date: Mon, 4 Feb 2013 11:36:23 -0500 (EST) Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <510FE176.8030304@redhat.com> Message-ID: <681932348.10196008.1359995783668.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Yaniv Kaul" > To: "James Labocki" > Cc: "rhos-list" > Sent: Monday, February 4, 2013 11:27:34 AM > Subject: Re: [rhos-list] Packstack Interactive Error > > On 04/02/13 18:16, James Labocki wrote: > > I ran into the following exception when installing via packstack > > interactively. I'm not sure if anyone can help debug it or figure > > out what I am doing incorrect. > > > > -James > > > > > > # packstack > > Welcome to Installer setup utility > > Should Packstack install Glance ['y'| 'n'] [y] : y > > Should Packstack install Cinder ['y'| 'n'] [y] : y > > Should Packstack install Nova ['y'| 'n'] [y] : y > > Should Packstack install Horizon ['y'| 'n'] [y] : y > > Should Packstack install Swift ['y'| 'n'] [n] : y > > Should Packstack install openstack client tools ['y'| 'n'] > > [y] : y > > Enter the path to your ssh Public key to install on servers > > [/root/.ssh/id_rsa.pub] : > > Enter the IP address of the MySQL server [10.16.46.104] : > > Enter the password for the MySQL admin user : > > Enter the IP address of the QPID service [10.16.46.104] : > > Enter the IP address of the Keystone server [10.16.46.104] : > > Enter the IP address of the Glance server [10.16.46.104] : > > Enter the IP address of the Cinder server [10.16.46.104] : > > Enter the IP address of the Nova API service [10.16.46.104] : > > Enter the IP address of the Nova Cert service [10.16.46.104] > > : > > Enter the IP address of the Nova VNC proxy [10.16.46.104] : > > Enter a comma separated list of IP addresses on which to > > install the Nova Compute services [10.16.46.104] : > > 
10.16.46.104,10.16.46.106 > > Enter the Private interface for Flat DHCP on the Nova compute > > servers [eth1] : > > Enter the IP address of the Nova Network service > > [10.16.46.104] : > > Enter the Public interface on the Nova network server [eth0] > > : > > Enter the Private interface for Flat DHCP on the Nova network > > server [eth1] : > > Enter the IP Range for Flat DHCP > > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] > > : > > Enter the IP Range for Floating IP's > > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : > > Enter the IP address of the Nova Scheduler service > > [10.16.46.104] : > > Enter the IP address of the client server [10.16.46.104] : > > Enter the IP address of the Horizon server [10.16.46.104] : > > Enter the IP address of the Swift proxy service > > [10.16.46.104] : > > Enter the Swift Storage servers e.g. host/dev,host/dev > > [10.16.46.104] : > > Enter the number of swift storage zones, MUST be no bigger > > than the number of storage devices configured [1] : > > Enter the number of swift storage replicas, MUST be no bigger > > than the number of storage zones configured [1] : > > Enter FileSystem type for storage nodes ['xfs'| 'ext4'] > > [ext4] : > > Should packstack install EPEL on each server ['y'| 'n'] [n] : > > y > > Enter a comma separated list of URLs to any additional yum > > repositories to install: > > To subscribe each server to Red Hat enter a username here: > > james.labocki > > To subscribe each server to Red Hat enter your password here : > > > > Installer will be installed using the following configuration: > > ============================================================== > > os-glance-install: y > > os-cinder-install: y > > os-nova-install: y > > os-horizon-install: y > > os-swift-install: y > > os-client-install: y > > ssh-public-key: /root/.ssh/id_rsa.pub > > mysql-host: 10.16.46.104 > > mysql-pw: ******** > > qpid-host: 10.16.46.104 > > keystone-host: 10.16.46.104 > > glance-host: 
10.16.46.104 > > cinder-host: 10.16.46.104 > > novaapi-host: 10.16.46.104 > > novacert-host: 10.16.46.104 > > novavncproxy-hosts: 10.16.46.104 > > novacompute-hosts: 10.16.46.104,10.16.46.106 > > novacompute-privif: eth1 > > novanetwork-host: 10.16.46.104 > > novanetwork-pubif: eth0 > > novanetwork-privif: eth1 > > novanetwork-fixed-range: 192.168.32.0/22 > > novanetwork-floating-range: 10.3.4.0/22 > > novasched-host: 10.16.46.104 > > osclient-host: 10.16.46.104 > > os-horizon-host: 10.16.46.104 > > os-swift-proxy: 10.16.46.104 > > os-swift-storage: 10.16.46.104 > > os-swift-storage-zones: 1 > > os-swift-storage-replicas: 1 > > os-swift-storage-fstype: ext4 > > use-epel: y > > additional-repo: > > rh-username: james.labocki > > rh-password: ******** > > Proceed with the configuration listed above? (yes|no): yes > > > > Installing: > > Clean Up... [ > > DONE ] > > Running Pre install scripts... [ > > DONE ] > > Setting Up ssh keys...root at 10.16.46.104's password: > > root at 10.16.46.104's password: > > [ DONE ] > > Create MySQL Manifest... [ > > DONE ] > > Creating QPID Manifest... [ > > DONE ] > > Creating Keystone Manifest... [ > > DONE ] > > Adding Glance Keystone Manifest entries... [ > > DONE ] > > Creating Galnce Manifest... [ > > DONE ] > > Adding Cinder Keystone Manifest entries... [ > > DONE ] > > Checking if the Cinder server has a cinder-volumes vg... [ > > DONE ] > > Creating Cinder Manifest... [ > > DONE ] > > Adding Nova API Manifest entries... [ > > DONE ] > > Adding Nova Keystone Manifest entries... [ > > DONE ] > > Adding Nova Cert Manifest entries... [ > > DONE ] > > Adding Nova Compute Manifest entries... [ > > DONE ] > > Adding Nova Network Manifest entries... [ > > DONE ] > > Adding Nova Scheduler Manifest entries... [ > > DONE ] > > Adding Nova VNC Proxy Manifest entries... [ > > DONE ] > > Adding Nova Common Manifest entries... [ > > DONE ] > > Creating OS Client Manifest... [ > > DONE ] > > Creating OS Horizon Manifest... 
[ > > DONE ] > > Adding Swift Keystone Manifest entries... [ > > DONE ] > > Creating OS Swift builder Manifests... [ > > DONE ] > > Creating OS Swift proxy Manifests... [ > > DONE ] > > Creating OS Swift storage Manifests... [ > > DONE ] > > Creating OS Swift Common Manifests... [ > > DONE ] > > Preparing Servers...ERROR:root:============= STDERR ========== > > ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to > > the list of known hosts. > > + trap t ERR > > +++ uname -i > > ++ '[' x86_64 = x86_64 ']' > > ++ echo x86_64 > > + export > > EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > Are you running upstream or downstream? Why do you need stuff from > EPEL? > Y. Version = openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch I didn't know if I would need something from EPEL, so I defaulted to enabling it. Is this causing the problem? > > > + > > EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > > + rpm -q epel-release > > + rpm -Uvh > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > > Signature, key ID 0608b895: NOKEY > > + mkdir -p > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > > + rpm -q epel-release > > + yum install -y yum-plugin-priorities > > Unable to read consumer identity > > Warning: RPMDB altered outside of yum. 
> > + rpm -q epel-release > > + openstack-config --set /etc/yum.repos.d/redhat.repo > > rhel-server-ost-6-folsom-rpms priority 1 > > Traceback (most recent call last): > > File "/usr/bin/openstack-config", line 49, in > > conf.readfp(open(cfgfile)) > > IOError: [Errno 2] No such file or directory: > > '/etc/yum.repos.d/redhat.repo' > > + true > > + subscription-manager register --username=james.labocki > > '--password=********' --autosubscribe > > + subscription-manager list --consumed > > + grep -i openstack > > ++ subscription-manager list --available > > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > > ++ grep 'Pool Id' > > ++ awk '{print $3}' > > + subscription-manager subscribe --pool > > Usage: subscription-manager subscribe [OPTIONS] > > > > ++ t > > ++ exit 2 > > > > [ ERROR ] > > ERROR:root:Traceback (most recent call last): > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 795, in main > > _main(confFile) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 591, in _main > > runSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 567, in runSequences > > controller.runAllSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > > line 57, in runAllSequences > > sequence.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 154, in run > > step.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 60, in run > > function() > > File > > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > > line 139, in serverprep > > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > > line 399, in execute > > raise ScriptRuntimeError("Error running remote script") > > 
ScriptRuntimeError: Error running remote script > > > > Error running remote script > > Please check log file > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > > for more information > > [root at rhc-05 ~]# cat > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > > 2013-02-01 15:22:45::ERROR::common_utils::394::root:: > > ============= STDERR ========== > > 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: > > Permanently added '10.16.46.104' (RSA) to the list of known > > hosts. > > + trap t ERR > > +++ uname -i > > ++ '[' x86_64 = x86_64 ']' > > ++ echo x86_64 > > + export > > EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > + > > EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > > + rpm -q epel-release > > + rpm -Uvh > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > > Signature, key ID 0608b895: NOKEY > > + mkdir -p > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > > + rpm -q epel-release > > + yum install -y yum-plugin-priorities > > Unable to read consumer identity > > Warning: RPMDB altered outside of yum. 
> > + rpm -q epel-release > > + openstack-config --set /etc/yum.repos.d/redhat.repo > > rhel-server-ost-6-folsom-rpms priority 1 > > Traceback (most recent call last): > > File "/usr/bin/openstack-config", line 49, in > > conf.readfp(open(cfgfile)) > > IOError: [Errno 2] No such file or directory: > > '/etc/yum.repos.d/redhat.repo' > > + true > > + subscription-manager register --username=james.labocki > > '--password=********' --autosubscribe > > + subscription-manager list --consumed > > + grep -i openstack > > ++ subscription-manager list --available > > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > > ++ grep 'Pool Id' > > ++ awk '{print $3}' > > + subscription-manager subscribe --pool > > Usage: subscription-manager subscribe [OPTIONS] > > > > ++ t > > ++ exit 2 > > > > 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback > > (most recent call last): > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 795, in main > > _main(confFile) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 591, in _main > > runSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 567, in runSequences > > controller.runAllSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > > line 57, in runAllSequences > > sequence.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 154, in run > > step.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 60, in run > > function() > > File > > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > > line 139, in serverprep > > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > > line 399, in execute > > raise ScriptRuntimeError("Error running 
remote script") > > ScriptRuntimeError: Error running remote script > > > > > > > > _______________________________________________ > > rhos-list mailing list > > rhos-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhos-list > > From ykaul at redhat.com Mon Feb 4 16:53:18 2013 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 04 Feb 2013 18:53:18 +0200 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <681932348.10196008.1359995783668.JavaMail.root@redhat.com> References: <681932348.10196008.1359995783668.JavaMail.root@redhat.com> Message-ID: <510FE77E.7070307@redhat.com> On 04/02/13 18:36, James Labocki wrote: > > ----- Original Message ----- >> From: "Yaniv Kaul" >> To: "James Labocki" >> Cc: "rhos-list" >> Sent: Monday, February 4, 2013 11:27:34 AM >> Subject: Re: [rhos-list] Packstack Interactive Error >> >> On 04/02/13 18:16, James Labocki wrote: >>> I ran into the following exception when installing via packstack >>> interactively. I'm not sure if anyone can help debug it or figure >>> out what I am doing incorrect. 
>>> >>> -James >>> >>> >>> # packstack >>> Welcome to Installer setup utility >>> Should Packstack install Glance ['y'| 'n'] [y] : y >>> Should Packstack install Cinder ['y'| 'n'] [y] : y >>> Should Packstack install Nova ['y'| 'n'] [y] : y >>> Should Packstack install Horizon ['y'| 'n'] [y] : y >>> Should Packstack install Swift ['y'| 'n'] [n] : y >>> Should Packstack install openstack client tools ['y'| 'n'] >>> [y] : y >>> Enter the path to your ssh Public key to install on servers >>> [/root/.ssh/id_rsa.pub] : >>> Enter the IP address of the MySQL server [10.16.46.104] : >>> Enter the password for the MySQL admin user : >>> Enter the IP address of the QPID service [10.16.46.104] : >>> Enter the IP address of the Keystone server [10.16.46.104] : >>> Enter the IP address of the Glance server [10.16.46.104] : >>> Enter the IP address of the Cinder server [10.16.46.104] : >>> Enter the IP address of the Nova API service [10.16.46.104] : >>> Enter the IP address of the Nova Cert service [10.16.46.104] >>> : >>> Enter the IP address of the Nova VNC proxy [10.16.46.104] : >>> Enter a comma separated list of IP addresses on which to >>> install the Nova Compute services [10.16.46.104] : >>> 10.16.46.104,10.16.46.106 >>> Enter the Private interface for Flat DHCP on the Nova compute >>> servers [eth1] : >>> Enter the IP address of the Nova Network service >>> [10.16.46.104] : >>> Enter the Public interface on the Nova network server [eth0] >>> : >>> Enter the Private interface for Flat DHCP on the Nova network >>> server [eth1] : >>> Enter the IP Range for Flat DHCP >>> ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] >>> : >>> Enter the IP Range for Floating IP's >>> ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : >>> Enter the IP address of the Nova Scheduler service >>> [10.16.46.104] : >>> Enter the IP address of the client server [10.16.46.104] : >>> Enter the IP address of the Horizon server [10.16.46.104] : >>> Enter the IP address of the 
Swift proxy service >>> [10.16.46.104] : >>> Enter the Swift Storage servers e.g. host/dev,host/dev >>> [10.16.46.104] : >>> Enter the number of swift storage zones, MUST be no bigger >>> than the number of storage devices configured [1] : >>> Enter the number of swift storage replicas, MUST be no bigger >>> than the number of storage zones configured [1] : >>> Enter FileSystem type for storage nodes ['xfs'| 'ext4'] >>> [ext4] : >>> Should packstack install EPEL on each server ['y'| 'n'] [n] : >>> y >>> Enter a comma separated list of URLs to any additional yum >>> repositories to install: >>> To subscribe each server to Red Hat enter a username here: >>> james.labocki >>> To subscribe each server to Red Hat enter your password here : >>> >>> Installer will be installed using the following configuration: >>> ============================================================== >>> os-glance-install: y >>> os-cinder-install: y >>> os-nova-install: y >>> os-horizon-install: y >>> os-swift-install: y >>> os-client-install: y >>> ssh-public-key: /root/.ssh/id_rsa.pub >>> mysql-host: 10.16.46.104 >>> mysql-pw: ******** >>> qpid-host: 10.16.46.104 >>> keystone-host: 10.16.46.104 >>> glance-host: 10.16.46.104 >>> cinder-host: 10.16.46.104 >>> novaapi-host: 10.16.46.104 >>> novacert-host: 10.16.46.104 >>> novavncproxy-hosts: 10.16.46.104 >>> novacompute-hosts: 10.16.46.104,10.16.46.106 >>> novacompute-privif: eth1 >>> novanetwork-host: 10.16.46.104 >>> novanetwork-pubif: eth0 >>> novanetwork-privif: eth1 >>> novanetwork-fixed-range: 192.168.32.0/22 >>> novanetwork-floating-range: 10.3.4.0/22 >>> novasched-host: 10.16.46.104 >>> osclient-host: 10.16.46.104 >>> os-horizon-host: 10.16.46.104 >>> os-swift-proxy: 10.16.46.104 >>> os-swift-storage: 10.16.46.104 >>> os-swift-storage-zones: 1 >>> os-swift-storage-replicas: 1 >>> os-swift-storage-fstype: ext4 >>> use-epel: y >>> additional-repo: >>> rh-username: james.labocki >>> rh-password: ******** >>> Proceed with the configuration 
listed above? (yes|no): yes >>> >>> Installing: >>> Clean Up... [ >>> DONE ] >>> Running Pre install scripts... [ >>> DONE ] >>> Setting Up ssh keys...root at 10.16.46.104's password: >>> root at 10.16.46.104's password: >>> [ DONE ] >>> Create MySQL Manifest... [ >>> DONE ] >>> Creating QPID Manifest... [ >>> DONE ] >>> Creating Keystone Manifest... [ >>> DONE ] >>> Adding Glance Keystone Manifest entries... [ >>> DONE ] >>> Creating Galnce Manifest... [ >>> DONE ] >>> Adding Cinder Keystone Manifest entries... [ >>> DONE ] >>> Checking if the Cinder server has a cinder-volumes vg... [ >>> DONE ] >>> Creating Cinder Manifest... [ >>> DONE ] >>> Adding Nova API Manifest entries... [ >>> DONE ] >>> Adding Nova Keystone Manifest entries... [ >>> DONE ] >>> Adding Nova Cert Manifest entries... [ >>> DONE ] >>> Adding Nova Compute Manifest entries... [ >>> DONE ] >>> Adding Nova Network Manifest entries... [ >>> DONE ] >>> Adding Nova Scheduler Manifest entries... [ >>> DONE ] >>> Adding Nova VNC Proxy Manifest entries... [ >>> DONE ] >>> Adding Nova Common Manifest entries... [ >>> DONE ] >>> Creating OS Client Manifest... [ >>> DONE ] >>> Creating OS Horizon Manifest... [ >>> DONE ] >>> Adding Swift Keystone Manifest entries... [ >>> DONE ] >>> Creating OS Swift builder Manifests... [ >>> DONE ] >>> Creating OS Swift proxy Manifests... [ >>> DONE ] >>> Creating OS Swift storage Manifests... [ >>> DONE ] >>> Creating OS Swift Common Manifests... [ >>> DONE ] >>> Preparing Servers...ERROR:root:============= STDERR ========== >>> ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to >>> the list of known hosts. >>> + trap t ERR >>> +++ uname -i >>> ++ '[' x86_64 = x86_64 ']' >>> ++ echo x86_64 >>> + export >>> EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> Are you running upstream or downstream? Why do you need stuff from >> EPEL? >> Y. 
> Version = openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch This is an ancient (relatively) build, please use https://brewweb.devel.redhat.com/buildinfo?buildID=253313 > > I didn't know if I would need something from EPEL, so I defaulted to enabling it. Is this causing the problem? Perhaps, but you shouldn't need it. I suggest http://download.lab.bos.redhat.com/rel-eng/OpenStack/Folsom/2013-01-30.1 Y. > > >>> + >>> EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> + grep 'Red Hat Enterprise Linux' /etc/redhat-release >>> + rpm -q epel-release >>> + rpm -Uvh >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 >>> Signature, key ID 0608b895: NOKEY >>> + mkdir -p >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests >>> + rpm -q epel-release >>> + yum install -y yum-plugin-priorities >>> Unable to read consumer identity >>> Warning: RPMDB altered outside of yum. 
>>> + rpm -q epel-release >>> + openstack-config --set /etc/yum.repos.d/redhat.repo >>> rhel-server-ost-6-folsom-rpms priority 1 >>> Traceback (most recent call last): >>> File "/usr/bin/openstack-config", line 49, in >>> conf.readfp(open(cfgfile)) >>> IOError: [Errno 2] No such file or directory: >>> '/etc/yum.repos.d/redhat.repo' >>> + true >>> + subscription-manager register --username=james.labocki >>> '--password=********' --autosubscribe >>> + subscription-manager list --consumed >>> + grep -i openstack >>> ++ subscription-manager list --available >>> ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 >>> ++ grep 'Pool Id' >>> ++ awk '{print $3}' >>> + subscription-manager subscribe --pool >>> Usage: subscription-manager subscribe [OPTIONS] >>> >>> ++ t >>> ++ exit 2 >>> >>> [ ERROR ] >>> ERROR:root:Traceback (most recent call last): >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 795, in main >>> _main(confFile) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 591, in _main >>> runSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 567, in runSequences >>> controller.runAllSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >>> line 57, in runAllSequences >>> sequence.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 154, in run >>> step.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 60, in run >>> function() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", >>> line 139, in serverprep >>> server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", >>> line 399, in execute >>> raise ScriptRuntimeError("Error running remote script") >>> 
ScriptRuntimeError: Error running remote script >>> >>> Error running remote script >>> Please check log file >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log >>> for more information >>> [root at rhc-05 ~]# cat >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log >>> 2013-02-01 15:22:45::ERROR::common_utils::394::root:: >>> ============= STDERR ========== >>> 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: >>> Permanently added '10.16.46.104' (RSA) to the list of known >>> hosts. >>> + trap t ERR >>> +++ uname -i >>> ++ '[' x86_64 = x86_64 ']' >>> ++ echo x86_64 >>> + export >>> EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> + >>> EPEL_RPM_URL=http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> + grep 'Red Hat Enterprise Linux' /etc/redhat-release >>> + rpm -q epel-release >>> + rpm -Uvh >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 >>> Signature, key ID 0608b895: NOKEY >>> + mkdir -p >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests >>> + rpm -q epel-release >>> + yum install -y yum-plugin-priorities >>> Unable to read consumer identity >>> Warning: RPMDB altered outside of yum. 
>>> + rpm -q epel-release >>> + openstack-config --set /etc/yum.repos.d/redhat.repo >>> rhel-server-ost-6-folsom-rpms priority 1 >>> Traceback (most recent call last): >>> File "/usr/bin/openstack-config", line 49, in >>> conf.readfp(open(cfgfile)) >>> IOError: [Errno 2] No such file or directory: >>> '/etc/yum.repos.d/redhat.repo' >>> + true >>> + subscription-manager register --username=james.labocki >>> '--password=********' --autosubscribe >>> + subscription-manager list --consumed >>> + grep -i openstack >>> ++ subscription-manager list --available >>> ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 >>> ++ grep 'Pool Id' >>> ++ awk '{print $3}' >>> + subscription-manager subscribe --pool >>> Usage: subscription-manager subscribe [OPTIONS] >>> >>> ++ t >>> ++ exit 2 >>> >>> 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback >>> (most recent call last): >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 795, in main >>> _main(confFile) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 591, in _main >>> runSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 567, in runSequences >>> controller.runAllSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >>> line 57, in runAllSequences >>> sequence.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 154, in run >>> step.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 60, in run >>> function() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", >>> line 139, in serverprep >>> server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", >>> line 399, in execute >>> raise ScriptRuntimeError("Error running 
remote script") >>> ScriptRuntimeError: Error running remote script >>> >>> >>> >>> _______________________________________________ >>> rhos-list mailing list >>> rhos-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhos-list >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlabocki at redhat.com Mon Feb 4 16:59:09 2013 From: jlabocki at redhat.com (James Labocki) Date: Mon, 4 Feb 2013 11:59:09 -0500 (EST) Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <510FE77E.7070307@redhat.com> Message-ID: <545339275.10224864.1359997149709.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Yaniv Kaul" > To: "James Labocki" > Cc: "rhos-list" > Sent: Monday, February 4, 2013 11:53:18 AM > Subject: Re: [rhos-list] Packstack Interactive Error > > > On 04/02/13 18:36, James Labocki wrote: > > > ----- Original Message ----- > > From: "Yaniv Kaul" To: "James Labocki" > Cc: "rhos-list" Sent: > Monday, February 4, 2013 11:27:34 AM > Subject: Re: [rhos-list] Packstack Interactive Error > > On 04/02/13 18:16, James Labocki wrote: > > I ran into the following exception when installing via packstack > interactively. I'm not sure if anyone can help debug it or figure > out what I am doing incorrect. 
> > -James > > > # packstack > Welcome to Installer setup utility > Should Packstack install Glance ['y'| 'n'] [y] : y > Should Packstack install Cinder ['y'| 'n'] [y] : y > Should Packstack install Nova ['y'| 'n'] [y] : y > Should Packstack install Horizon ['y'| 'n'] [y] : y > Should Packstack install Swift ['y'| 'n'] [n] : y > Should Packstack install openstack client tools ['y'| 'n'] > [y] : y > Enter the path to your ssh Public key to install on servers > [/root/.ssh/id_rsa.pub] : > Enter the IP address of the MySQL server [10.16.46.104] : > Enter the password for the MySQL admin user : > Enter the IP address of the QPID service [10.16.46.104] : > Enter the IP address of the Keystone server [10.16.46.104] : > Enter the IP address of the Glance server [10.16.46.104] : > Enter the IP address of the Cinder server [10.16.46.104] : > Enter the IP address of the Nova API service [10.16.46.104] : > Enter the IP address of the Nova Cert service [10.16.46.104] > : > Enter the IP address of the Nova VNC proxy [10.16.46.104] : > Enter a comma separated list of IP addresses on which to > install the Nova Compute services [10.16.46.104] : > 10.16.46.104,10.16.46.106 > Enter the Private interface for Flat DHCP on the Nova compute > servers [eth1] : > Enter the IP address of the Nova Network service > [10.16.46.104] : > Enter the Public interface on the Nova network server [eth0] > : > Enter the Private interface for Flat DHCP on the Nova network > server [eth1] : > Enter the IP Range for Flat DHCP > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] > : > Enter the IP Range for Floating IP's > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : > Enter the IP address of the Nova Scheduler service > [10.16.46.104] : > Enter the IP address of the client server [10.16.46.104] : > Enter the IP address of the Horizon server [10.16.46.104] : > Enter the IP address of the Swift proxy service > [10.16.46.104] : > Enter the Swift Storage servers e.g. 
host/dev,host/dev > [10.16.46.104] : > Enter the number of swift storage zones, MUST be no bigger > than the number of storage devices configured [1] : > Enter the number of swift storage replicas, MUST be no bigger > than the number of storage zones configured [1] : > Enter FileSystem type for storage nodes ['xfs'| 'ext4'] > [ext4] : > Should packstack install EPEL on each server ['y'| 'n'] [n] : > y > Enter a comma separated list of URLs to any additional yum > repositories to install: > To subscribe each server to Red Hat enter a username here: > james.labocki > To subscribe each server to Red Hat enter your password here : > > Installer will be installed using the following configuration: > ============================================================== > os-glance-install: y > os-cinder-install: y > os-nova-install: y > os-horizon-install: y > os-swift-install: y > os-client-install: y > ssh-public-key: /root/.ssh/id_rsa.pub > mysql-host: 10.16.46.104 > mysql-pw: ******** > qpid-host: 10.16.46.104 > keystone-host: 10.16.46.104 > glance-host: 10.16.46.104 > cinder-host: 10.16.46.104 > novaapi-host: 10.16.46.104 > novacert-host: 10.16.46.104 > novavncproxy-hosts: 10.16.46.104 > novacompute-hosts: 10.16.46.104,10.16.46.106 > novacompute-privif: eth1 > novanetwork-host: 10.16.46.104 > novanetwork-pubif: eth0 > novanetwork-privif: eth1 > novanetwork-fixed-range: 192.168.32.0/22 > novanetwork-floating-range: 10.3.4.0/22 > novasched-host: 10.16.46.104 > osclient-host: 10.16.46.104 > os-horizon-host: 10.16.46.104 > os-swift-proxy: 10.16.46.104 > os-swift-storage: 10.16.46.104 > os-swift-storage-zones: 1 > os-swift-storage-replicas: 1 > os-swift-storage-fstype: ext4 > use-epel: y > additional-repo: > rh-username: james.labocki > rh-password: ******** > Proceed with the configuration listed above? (yes|no): yes > > Installing: > Clean Up... [ > DONE ] > Running Pre install scripts... 
[ > DONE ] > Setting Up ssh keys...root at 10.16.46.104 's password: > root at 10.16.46.104 's password: > [ DONE ] > Create MySQL Manifest... [ > DONE ] > Creating QPID Manifest... [ > DONE ] > Creating Keystone Manifest... [ > DONE ] > Adding Glance Keystone Manifest entries... [ > DONE ] > Creating Galnce Manifest... [ > DONE ] > Adding Cinder Keystone Manifest entries... [ > DONE ] > Checking if the Cinder server has a cinder-volumes vg... [ > DONE ] > Creating Cinder Manifest... [ > DONE ] > Adding Nova API Manifest entries... [ > DONE ] > Adding Nova Keystone Manifest entries... [ > DONE ] > Adding Nova Cert Manifest entries... [ > DONE ] > Adding Nova Compute Manifest entries... [ > DONE ] > Adding Nova Network Manifest entries... [ > DONE ] > Adding Nova Scheduler Manifest entries... [ > DONE ] > Adding Nova VNC Proxy Manifest entries... [ > DONE ] > Adding Nova Common Manifest entries... [ > DONE ] > Creating OS Client Manifest... [ > DONE ] > Creating OS Horizon Manifest... [ > DONE ] > Adding Swift Keystone Manifest entries... [ > DONE ] > Creating OS Swift builder Manifests... [ > DONE ] > Creating OS Swift proxy Manifests... [ > DONE ] > Creating OS Swift storage Manifests... [ > DONE ] > Creating OS Swift Common Manifests... [ > DONE ] > Preparing Servers...ERROR:root:============= STDERR ========== > ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to > the list of known hosts. > + trap t ERR > +++ uname -i > ++ '[' x86_64 = x86_64 ']' > ++ echo x86_64 > + export > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > Are you running upstream or downstream? Why do you need stuff > from > EPEL? > Y. Version = openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch > This is an ancient (relatively) build, please use > https://brewweb.devel.redhat.com/buildinfo?buildID=253313 Thanks Yaniv. That is what is included in RHN when I install following the product documentation at access.redhat.com. 
Should we push a new version to the CDN? > > > > I didn't know if I would need something from EPEL, so I defaulted to > enabling it. Is this causing the problem? > Perhaps, but you shouldn't need it. > I suggest > http://download.lab.bos.redhat.com/rel-eng/OpenStack/Folsom/2013-01-30.1 > Y. > > > > > > > > + > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > + rpm -q epel-releasef > + rpm -Uvh > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > Signature, key ID 0608b895: NOKEY > + mkdir -p > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > + rpm -q epel-release > + yum install -y yum-plugin-priorities > Unable to read consumer identity > Warning: RPMDB altered outside of yum. > + rpm -q epel-release > + openstack-config --set /etc/yum.repos.d/redhat.repo > rhel-server-ost-6-folsom-rpms priority 1 > Traceback (most recent call last): > File "/usr/bin/openstack-config", line 49, in > conf.readfp(open(cfgfile)) > IOError: [Errno 2] No such file or directory: > '/etc/yum.repos.d/redhat.repo' > + true > + subscription-manager register --username=james.labocki > '--password=********' --autosubscribe > + subscription-manager list --consumed > + grep -i openstack > ++ subscription-manager list --available > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > ++ grep 'Pool Id' > ++ awk '{print $3}' > + subscription-manager subscribe --pool > Usage: subscription-manager subscribe [OPTIONS] > > ++ t > ++ exit 2 > > [ ERROR ] > ERROR:root:Traceback (most recent call last): > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 795, in main > _main(confFile) > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 591, in _main > runSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 
567, in runSequences > controller.runAllSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > line 57, in runAllSequences > sequence.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 154, in run > step.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 60, in run > function() > File > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > line 139, in serverprep > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > File > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > line 399, in execute > raise ScriptRuntimeError("Error running remote script") > ScriptRuntimeError: Error running remote script > > Error running remote script > Please check log file > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > for more information > [root at rhc-05 ~]# cat > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > 2013-02-01 15:22:45::ERROR::common_utils::394::root:: > ============= STDERR ========== > 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: > Permanently added '10.16.46.104' (RSA) to the list of known > hosts. 
> + trap t ERR > +++ uname -i > ++ '[' x86_64 = x86_64 ']' > ++ echo x86_64 > + export > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > + rpm -q epel-release > + rpm -Uvh > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > Signature, key ID 0608b895: NOKEY > + mkdir -p > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > + rpm -q epel-release > + yum install -y yum-plugin-priorities > Unable to read consumer identity > Warning: RPMDB altered outside of yum. > + rpm -q epel-release > + openstack-config --set /etc/yum.repos.d/redhat.repo > rhel-server-ost-6-folsom-rpms priority 1 > Traceback (most recent call last): > File "/usr/bin/openstack-config", line 49, in > conf.readfp(open(cfgfile)) > IOError: [Errno 2] No such file or directory: > '/etc/yum.repos.d/redhat.repo' > + true > + subscription-manager register --username=james.labocki > '--password=********' --autosubscribe > + subscription-manager list --consumed > + grep -i openstack > ++ subscription-manager list --available > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > ++ grep 'Pool Id' > ++ awk '{print $3}' > + subscription-manager subscribe --pool > Usage: subscription-manager subscribe [OPTIONS] > > ++ t > ++ exit 2 > > 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback > (most recent call last): > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 795, in main > _main(confFile) > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 591, in _main > runSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 567, in runSequences > controller.runAllSequences() > File > 
"/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > line 57, in runAllSequences > sequence.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 154, in run > step.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 60, in run > function() > File > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > line 139, in serverprep > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > File > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > line 399, in execute > raise ScriptRuntimeError("Error running remote script") > ScriptRuntimeError: Error running remote script > > > > _______________________________________________ > rhos-list mailing list rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From ykaul at redhat.com Mon Feb 4 17:10:15 2013 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 04 Feb 2013 19:10:15 +0200 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <545339275.10224864.1359997149709.JavaMail.root@redhat.com> References: <545339275.10224864.1359997149709.JavaMail.root@redhat.com> Message-ID: <510FEB77.6030708@redhat.com> On 04/02/13 18:59, James Labocki wrote: > > ----- Original Message ----- >> From: "Yaniv Kaul" >> To: "James Labocki" >> Cc: "rhos-list" >> Sent: Monday, February 4, 2013 11:53:18 AM >> Subject: Re: [rhos-list] Packstack Interactive Error >> >> >> On 04/02/13 18:36, James Labocki wrote: >> >> >> ----- Original Message ----- >> >> From: "Yaniv Kaul" To: "James Labocki" >> Cc: "rhos-list" Sent: >> Monday, February 4, 2013 11:27:34 AM >> Subject: Re: [rhos-list] Packstack Interactive Error >> >> On 04/02/13 18:16, James Labocki wrote: >> >> I ran into the following exception when installing via packstack >> interactively. I'm not sure if anyone can help debug it or figure >> out what I am doing incorrect. 
>> >> -James >> >> >> # packstack >> Welcome to Installer setup utility >> Should Packstack install Glance ['y'| 'n'] [y] : y >> Should Packstack install Cinder ['y'| 'n'] [y] : y >> Should Packstack install Nova ['y'| 'n'] [y] : y >> Should Packstack install Horizon ['y'| 'n'] [y] : y >> Should Packstack install Swift ['y'| 'n'] [n] : y >> Should Packstack install openstack client tools ['y'| 'n'] >> [y] : y >> Enter the path to your ssh Public key to install on servers >> [/root/.ssh/id_rsa.pub] : >> Enter the IP address of the MySQL server [10.16.46.104] : >> Enter the password for the MySQL admin user : >> Enter the IP address of the QPID service [10.16.46.104] : >> Enter the IP address of the Keystone server [10.16.46.104] : >> Enter the IP address of the Glance server [10.16.46.104] : >> Enter the IP address of the Cinder server [10.16.46.104] : >> Enter the IP address of the Nova API service [10.16.46.104] : >> Enter the IP address of the Nova Cert service [10.16.46.104] >> : >> Enter the IP address of the Nova VNC proxy [10.16.46.104] : >> Enter a comma separated list of IP addresses on which to >> install the Nova Compute services [10.16.46.104] : >> 10.16.46.104,10.16.46.106 >> Enter the Private interface for Flat DHCP on the Nova compute >> servers [eth1] : >> Enter the IP address of the Nova Network service >> [10.16.46.104] : >> Enter the Public interface on the Nova network server [eth0] >> : >> Enter the Private interface for Flat DHCP on the Nova network >> server [eth1] : >> Enter the IP Range for Flat DHCP >> ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] >> : >> Enter the IP Range for Floating IP's >> ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : >> Enter the IP address of the Nova Scheduler service >> [10.16.46.104] : >> Enter the IP address of the client server [10.16.46.104] : >> Enter the IP address of the Horizon server [10.16.46.104] : >> Enter the IP address of the Swift proxy service >> [10.16.46.104] : >> Enter 
the Swift Storage servers e.g. host/dev,host/dev >> [10.16.46.104] : >> Enter the number of swift storage zones, MUST be no bigger >> than the number of storage devices configured [1] : >> Enter the number of swift storage replicas, MUST be no bigger >> than the number of storage zones configured [1] : >> Enter FileSystem type for storage nodes ['xfs'| 'ext4'] >> [ext4] : >> Should packstack install EPEL on each server ['y'| 'n'] [n] : >> y >> Enter a comma separated list of URLs to any additional yum >> repositories to install: >> To subscribe each server to Red Hat enter a username here: >> james.labocki >> To subscribe each server to Red Hat enter your password here : >> >> Installer will be installed using the following configuration: >> ============================================================== >> os-glance-install: y >> os-cinder-install: y >> os-nova-install: y >> os-horizon-install: y >> os-swift-install: y >> os-client-install: y >> ssh-public-key: /root/.ssh/id_rsa.pub >> mysql-host: 10.16.46.104 >> mysql-pw: ******** >> qpid-host: 10.16.46.104 >> keystone-host: 10.16.46.104 >> glance-host: 10.16.46.104 >> cinder-host: 10.16.46.104 >> novaapi-host: 10.16.46.104 >> novacert-host: 10.16.46.104 >> novavncproxy-hosts: 10.16.46.104 >> novacompute-hosts: 10.16.46.104,10.16.46.106 >> novacompute-privif: eth1 >> novanetwork-host: 10.16.46.104 >> novanetwork-pubif: eth0 >> novanetwork-privif: eth1 >> novanetwork-fixed-range: 192.168.32.0/22 >> novanetwork-floating-range: 10.3.4.0/22 >> novasched-host: 10.16.46.104 >> osclient-host: 10.16.46.104 >> os-horizon-host: 10.16.46.104 >> os-swift-proxy: 10.16.46.104 >> os-swift-storage: 10.16.46.104 >> os-swift-storage-zones: 1 >> os-swift-storage-replicas: 1 >> os-swift-storage-fstype: ext4 >> use-epel: y >> additional-repo: >> rh-username: james.labocki >> rh-password: ******** >> Proceed with the configuration listed above? (yes|no): yes >> >> Installing: >> Clean Up... [ >> DONE ] >> Running Pre install scripts... 
[ >> DONE ] >> Setting Up ssh keys...root at 10.16.46.104 's password: >> root at 10.16.46.104 's password: >> [ DONE ] >> Create MySQL Manifest... [ >> DONE ] >> Creating QPID Manifest... [ >> DONE ] >> Creating Keystone Manifest... [ >> DONE ] >> Adding Glance Keystone Manifest entries... [ >> DONE ] >> Creating Galnce Manifest... [ >> DONE ] >> Adding Cinder Keystone Manifest entries... [ >> DONE ] >> Checking if the Cinder server has a cinder-volumes vg... [ >> DONE ] >> Creating Cinder Manifest... [ >> DONE ] >> Adding Nova API Manifest entries... [ >> DONE ] >> Adding Nova Keystone Manifest entries... [ >> DONE ] >> Adding Nova Cert Manifest entries... [ >> DONE ] >> Adding Nova Compute Manifest entries... [ >> DONE ] >> Adding Nova Network Manifest entries... [ >> DONE ] >> Adding Nova Scheduler Manifest entries... [ >> DONE ] >> Adding Nova VNC Proxy Manifest entries... [ >> DONE ] >> Adding Nova Common Manifest entries... [ >> DONE ] >> Creating OS Client Manifest... [ >> DONE ] >> Creating OS Horizon Manifest... [ >> DONE ] >> Adding Swift Keystone Manifest entries... [ >> DONE ] >> Creating OS Swift builder Manifests... [ >> DONE ] >> Creating OS Swift proxy Manifests... [ >> DONE ] >> Creating OS Swift storage Manifests... [ >> DONE ] >> Creating OS Swift Common Manifests... [ >> DONE ] >> Preparing Servers...ERROR:root:============= STDERR ========== >> ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to >> the list of known hosts. >> + trap t ERR >> +++ uname -i >> ++ '[' x86_64 = x86_64 ']' >> ++ echo x86_64 >> + export >> EPEL_RPM_URL= >> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> Are you running upstream or downstream? Why do you need stuff >> from >> EPEL? >> Y. Version = openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch >> This is an ancient (relatively) build, please use >> https://brewweb.devel.redhat.com/buildinfo?buildID=253313 > Thanks Yaniv. 
That is what is included in RHN when I install following the product documentation at access.redhat.com. Should we push a new version to the CDN? My apologies. The internal URLs above are what we currently test and hope to release soon as an update on RHN. We'll push when it's ready (it's already in QE hands). But now you made me look at the traceback. Perhaps https://bugzilla.redhat.com/show_bug.cgi?id=904670 ? Y. > >> >> I didn't know if I would need something from EPEL, so I defaulted to >> enabling it. Is this causing the problem? >> Perhaps, but you shouldn't need it. >> I suggest >> http://download.lab.bos.redhat.com/rel-eng/OpenStack/Folsom/2013-01-30.1 >> Y. >> >> >> >> >> >> >> + >> EPEL_RPM_URL= >> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> + grep 'Red Hat Enterprise Linux' /etc/redhat-release >> + rpm -q epel-releasef >> + rpm -Uvh >> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 >> Signature, key ID 0608b895: NOKEY >> + mkdir -p >> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests >> + rpm -q epel-release >> + yum install -y yum-plugin-priorities >> Unable to read consumer identity >> Warning: RPMDB altered outside of yum. 
>> + rpm -q epel-release >> + openstack-config --set /etc/yum.repos.d/redhat.repo >> rhel-server-ost-6-folsom-rpms priority 1 >> Traceback (most recent call last): >> File "/usr/bin/openstack-config", line 49, in >> conf.readfp(open(cfgfile)) >> IOError: [Errno 2] No such file or directory: >> '/etc/yum.repos.d/redhat.repo' >> + true >> + subscription-manager register --username=james.labocki >> '--password=********' --autosubscribe >> + subscription-manager list --consumed >> + grep -i openstack >> ++ subscription-manager list --available >> ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 >> ++ grep 'Pool Id' >> ++ awk '{print $3}' >> + subscription-manager subscribe --pool >> Usage: subscription-manager subscribe [OPTIONS] >> >> ++ t >> ++ exit 2 >> >> [ ERROR ] >> ERROR:root:Traceback (most recent call last): >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 795, in main >> _main(confFile) >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 591, in _main >> runSequences() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 567, in runSequences >> controller.runAllSequences() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >> line 57, in runAllSequences >> sequence.run() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >> line 154, in run >> step.run() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >> line 60, in run >> function() >> File >> "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", >> line 139, in serverprep >> server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", >> line 399, in execute >> raise ScriptRuntimeError("Error running remote script") >> ScriptRuntimeError: Error running remote script >> >> Error running 
remote script >> Please check log file >> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log >> for more information >> [root at rhc-05 ~]# cat >> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log >> 2013-02-01 15:22:45::ERROR::common_utils::394::root:: >> ============= STDERR ========== >> 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: >> Permanently added '10.16.46.104' (RSA) to the list of known >> hosts. >> + trap t ERR >> +++ uname -i >> ++ '[' x86_64 = x86_64 ']' >> ++ echo x86_64 >> + export >> EPEL_RPM_URL= >> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> + >> EPEL_RPM_URL= >> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> + grep 'Red Hat Enterprise Linux' /etc/redhat-release >> + rpm -q epel-release >> + rpm -Uvh >> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 >> Signature, key ID 0608b895: NOKEY >> + mkdir -p >> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests >> + rpm -q epel-release >> + yum install -y yum-plugin-priorities >> Unable to read consumer identity >> Warning: RPMDB altered outside of yum. 
>> + rpm -q epel-release >> + openstack-config --set /etc/yum.repos.d/redhat.repo >> rhel-server-ost-6-folsom-rpms priority 1 >> Traceback (most recent call last): >> File "/usr/bin/openstack-config", line 49, in >> conf.readfp(open(cfgfile)) >> IOError: [Errno 2] No such file or directory: >> '/etc/yum.repos.d/redhat.repo' >> + true >> + subscription-manager register --username=james.labocki >> '--password=********' --autosubscribe >> + subscription-manager list --consumed >> + grep -i openstack >> ++ subscription-manager list --available >> ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 >> ++ grep 'Pool Id' >> ++ awk '{print $3}' >> + subscription-manager subscribe --pool >> Usage: subscription-manager subscribe [OPTIONS] >> >> ++ t >> ++ exit 2 >> >> 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback >> (most recent call last): >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 795, in main >> _main(confFile) >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 591, in _main >> runSequences() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 567, in runSequences >> controller.runAllSequences() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >> line 57, in runAllSequences >> sequence.run() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >> line 154, in run >> step.run() >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >> line 60, in run >> function() >> File >> "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", >> line 139, in serverprep >> server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", >> line 399, in execute >> raise ScriptRuntimeError("Error running remote script") >> ScriptRuntimeError: Error running remote 
script >> >> >> >> _______________________________________________ >> rhos-list mailing list rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmagr at redhat.com Mon Feb 4 17:37:32 2013 From: mmagr at redhat.com (Martin Magr) Date: Mon, 04 Feb 2013 18:37:32 +0100 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <510FEB77.6030708@redhat.com> References: <545339275.10224864.1359997149709.JavaMail.root@redhat.com> <510FEB77.6030708@redhat.com> Message-ID: <510FF1DC.4080202@redhat.com> On 02/04/2013 06:10 PM, Yaniv Kaul wrote: > On 04/02/13 18:59, James Labocki wrote: >> ----- Original Message ----- >>> From: "Yaniv Kaul" >>> To: "James Labocki" >>> Cc: "rhos-list" >>> Sent: Monday, February 4, 2013 11:53:18 AM >>> Subject: Re: [rhos-list] Packstack Interactive Error >>> >>> >>> On 04/02/13 18:36, James Labocki wrote: >>> >>> >>> ----- Original Message ----- >>> >>> From: "Yaniv Kaul" To: "James Labocki" >>> Cc: "rhos-list" Sent: >>> Monday, February 4, 2013 11:27:34 AM >>> Subject: Re: [rhos-list] Packstack Interactive Error >>> >>> On 04/02/13 18:16, James Labocki wrote: >>> >>> I ran into the following exception when installing via packstack >>> interactively. I'm not sure if anyone can help debug it or figure >>> out what I am doing incorrect. 
>>> >>> -James >>> >>> >>> # packstack >>> Welcome to Installer setup utility >>> Should Packstack install Glance ['y'| 'n'] [y] : y >>> Should Packstack install Cinder ['y'| 'n'] [y] : y >>> Should Packstack install Nova ['y'| 'n'] [y] : y >>> Should Packstack install Horizon ['y'| 'n'] [y] : y >>> Should Packstack install Swift ['y'| 'n'] [n] : y >>> Should Packstack install openstack client tools ['y'| 'n'] >>> [y] : y >>> Enter the path to your ssh Public key to install on servers >>> [/root/.ssh/id_rsa.pub] : >>> Enter the IP address of the MySQL server [10.16.46.104] : >>> Enter the password for the MySQL admin user : >>> Enter the IP address of the QPID service [10.16.46.104] : >>> Enter the IP address of the Keystone server [10.16.46.104] : >>> Enter the IP address of the Glance server [10.16.46.104] : >>> Enter the IP address of the Cinder server [10.16.46.104] : >>> Enter the IP address of the Nova API service [10.16.46.104] : >>> Enter the IP address of the Nova Cert service [10.16.46.104] >>> : >>> Enter the IP address of the Nova VNC proxy [10.16.46.104] : >>> Enter a comma separated list of IP addresses on which to >>> install the Nova Compute services [10.16.46.104] : >>> 10.16.46.104,10.16.46.106 >>> Enter the Private interface for Flat DHCP on the Nova compute >>> servers [eth1] : >>> Enter the IP address of the Nova Network service >>> [10.16.46.104] : >>> Enter the Public interface on the Nova network server [eth0] >>> : >>> Enter the Private interface for Flat DHCP on the Nova network >>> server [eth1] : >>> Enter the IP Range for Flat DHCP >>> ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] >>> : >>> Enter the IP Range for Floating IP's >>> ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : >>> Enter the IP address of the Nova Scheduler service >>> [10.16.46.104] : >>> Enter the IP address of the client server [10.16.46.104] : >>> Enter the IP address of the Horizon server [10.16.46.104] : >>> Enter the IP address of the 
Swift proxy service >>> [10.16.46.104] : >>> Enter the Swift Storage servers e.g. host/dev,host/dev >>> [10.16.46.104] : >>> Enter the number of swift storage zones, MUST be no bigger >>> than the number of storage devices configured [1] : >>> Enter the number of swift storage replicas, MUST be no bigger >>> than the number of storage zones configured [1] : >>> Enter FileSystem type for storage nodes ['xfs'| 'ext4'] >>> [ext4] : >>> Should packstack install EPEL on each server ['y'| 'n'] [n] : >>> y >>> Enter a comma separated list of URLs to any additional yum >>> repositories to install: >>> To subscribe each server to Red Hat enter a username here: >>> james.labocki >>> To subscribe each server to Red Hat enter your password here : >>> >>> Installer will be installed using the following configuration: >>> ============================================================== >>> os-glance-install: y >>> os-cinder-install: y >>> os-nova-install: y >>> os-horizon-install: y >>> os-swift-install: y >>> os-client-install: y >>> ssh-public-key: /root/.ssh/id_rsa.pub >>> mysql-host: 10.16.46.104 >>> mysql-pw: ******** >>> qpid-host: 10.16.46.104 >>> keystone-host: 10.16.46.104 >>> glance-host: 10.16.46.104 >>> cinder-host: 10.16.46.104 >>> novaapi-host: 10.16.46.104 >>> novacert-host: 10.16.46.104 >>> novavncproxy-hosts: 10.16.46.104 >>> novacompute-hosts: 10.16.46.104,10.16.46.106 >>> novacompute-privif: eth1 >>> novanetwork-host: 10.16.46.104 >>> novanetwork-pubif: eth0 >>> novanetwork-privif: eth1 >>> novanetwork-fixed-range: 192.168.32.0/22 >>> novanetwork-floating-range: 10.3.4.0/22 >>> novasched-host: 10.16.46.104 >>> osclient-host: 10.16.46.104 >>> os-horizon-host: 10.16.46.104 >>> os-swift-proxy: 10.16.46.104 >>> os-swift-storage: 10.16.46.104 >>> os-swift-storage-zones: 1 >>> os-swift-storage-replicas: 1 >>> os-swift-storage-fstype: ext4 >>> use-epel: y >>> additional-repo: >>> rh-username: james.labocki >>> rh-password: ******** >>> Proceed with the configuration 
listed above? (yes|no): yes >>> >>> Installing: >>> Clean Up... [ >>> DONE ] >>> Running Pre install scripts... [ >>> DONE ] >>> Setting Up sshkeys...root at 10.16.46.104 's password: >>> root at 10.16.46.104 's password: >>> [ DONE ] >>> Create MySQL Manifest... [ >>> DONE ] >>> Creating QPID Manifest... [ >>> DONE ] >>> Creating Keystone Manifest... [ >>> DONE ] >>> Adding Glance Keystone Manifest entries... [ >>> DONE ] >>> Creating Galnce Manifest... [ >>> DONE ] >>> Adding Cinder Keystone Manifest entries... [ >>> DONE ] >>> Checking if the Cinder server has a cinder-volumes vg... [ >>> DONE ] >>> Creating Cinder Manifest... [ >>> DONE ] >>> Adding Nova API Manifest entries... [ >>> DONE ] >>> Adding Nova Keystone Manifest entries... [ >>> DONE ] >>> Adding Nova Cert Manifest entries... [ >>> DONE ] >>> Adding Nova Compute Manifest entries... [ >>> DONE ] >>> Adding Nova Network Manifest entries... [ >>> DONE ] >>> Adding Nova Scheduler Manifest entries... [ >>> DONE ] >>> Adding Nova VNC Proxy Manifest entries... [ >>> DONE ] >>> Adding Nova Common Manifest entries... [ >>> DONE ] >>> Creating OS Client Manifest... [ >>> DONE ] >>> Creating OS Horizon Manifest... [ >>> DONE ] >>> Adding Swift Keystone Manifest entries... [ >>> DONE ] >>> Creating OS Swift builder Manifests... [ >>> DONE ] >>> Creating OS Swift proxy Manifests... [ >>> DONE ] >>> Creating OS Swift storage Manifests... [ >>> DONE ] >>> Creating OS Swift Common Manifests... [ >>> DONE ] >>> Preparing Servers...ERROR:root:============= STDERR ========== >>> ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to >>> the list of known hosts. >>> + trap t ERR >>> +++ uname -i >>> ++ '[' x86_64 = x86_64 ']' >>> ++ echo x86_64 >>> + export >>> EPEL_RPM_URL= >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> Are you running upstream or downstream? Why do you need stuff >>> from >>> EPEL? >>> Y. 
Version = openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch >>> This is an ancient (relatively) build, please use >>> https://brewweb.devel.redhat.com/buildinfo?buildID=253313 >> Thanks Yaniv. That is what is included in RHN when I install following the product documentation at access.redhat.com. Should we push a new version to the CDN? > > My apologies. Above internal URLs are what we currently test and hope > to release soon as an update on RHN. > We'll push when it's ready (it's already in QE hands). > But now you made me look at the traceback. Perhaps > https://bugzilla.redhat.com/show_bug.cgi?id=904670 ? > > Y. > This is the same problem, IMHO. The following shows that the /etc/yum.repos.d/redhat.repo file is missing: Traceback (most recent call last): File "/usr/bin/openstack-config", line 49, in conf.readfp(open(cfgfile)) IOError: [Errno 2] No such file or directory: '/etc/yum.repos.d/redhat.repo' I think this happens when packstack tries to set the priority of the Folsom repo. >>> >>> I didn't know if I would need something from EPEL, so I defaulted to >>> enabling it. Is this causing the problem? >>> Perhaps, but you shouldn't need it. >>> I suggest >>> http://download.lab.bos.redhat.com/rel-eng/OpenStack/Folsom/2013-01-30.1 >>> Y. >>> >>> >>> >>> >>> >>> >>> + >>> EPEL_RPM_URL= >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> + grep 'Red Hat Enterprise Linux' /etc/redhat-release >>> + rpm -q epel-releasef >>> + rpm -Uvh >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 >>> Signature, key ID 0608b895: NOKEY >>> + mkdir -p >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests >>> + rpm -q epel-release >>> + yum install -y yum-plugin-priorities >>> Unable to read consumer identity >>> Warning: RPMDB altered outside of yum. 
>>> + rpm -q epel-release >>> + openstack-config --set /etc/yum.repos.d/redhat.repo >>> rhel-server-ost-6-folsom-rpms priority 1 >>> Traceback (most recent call last): >>> File "/usr/bin/openstack-config", line 49, in >>> conf.readfp(open(cfgfile)) >>> IOError: [Errno 2] No such file or directory: >>> '/etc/yum.repos.d/redhat.repo' >>> + true >>> + subscription-manager register --username=james.labocki >>> '--password=********' --autosubscribe >>> + subscription-manager list --consumed >>> + grep -i openstack >>> ++ subscription-manager list --available >>> ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 >>> ++ grep 'Pool Id' >>> ++ awk '{print $3}' >>> + subscription-manager subscribe --pool >>> Usage: subscription-manager subscribe [OPTIONS] >>> >>> ++ t >>> ++ exit 2 >>> >>> [ ERROR ] >>> ERROR:root:Traceback (most recent call last): >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 795, in main >>> _main(confFile) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 591, in _main >>> runSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 567, in runSequences >>> controller.runAllSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >>> line 57, in runAllSequences >>> sequence.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 154, in run >>> step.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 60, in run >>> function() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", >>> line 139, in serverprep >>> server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", >>> line 399, in execute >>> raise ScriptRuntimeError("Error running remote script") >>> 
ScriptRuntimeError: Error running remote script >>> >>> Error running remote script >>> Please check log file >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log >>> for more information >>> [root at rhc-05 ~]# cat >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log >>> 2013-02-01 15:22:45::ERROR::common_utils::394::root:: >>> ============= STDERR ========== >>> 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: >>> Permanently added '10.16.46.104' (RSA) to the list of known >>> hosts. >>> + trap t ERR >>> +++ uname -i >>> ++ '[' x86_64 = x86_64 ']' >>> ++ echo x86_64 >>> + export >>> EPEL_RPM_URL= >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> + >>> EPEL_RPM_URL= >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> + grep 'Red Hat Enterprise Linux' /etc/redhat-release >>> + rpm -q epel-release >>> + rpm -Uvh >>> http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >>> warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 >>> Signature, key ID 0608b895: NOKEY >>> + mkdir -p >>> /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests >>> + rpm -q epel-release >>> + yum install -y yum-plugin-priorities >>> Unable to read consumer identity >>> Warning: RPMDB altered outside of yum. 
>>> + rpm -q epel-release >>> + openstack-config --set /etc/yum.repos.d/redhat.repo >>> rhel-server-ost-6-folsom-rpms priority 1 >>> Traceback (most recent call last): >>> File "/usr/bin/openstack-config", line 49, in >>> conf.readfp(open(cfgfile)) >>> IOError: [Errno 2] No such file or directory: >>> '/etc/yum.repos.d/redhat.repo' >>> + true >>> + subscription-manager register --username=james.labocki >>> '--password=********' --autosubscribe >>> + subscription-manager list --consumed >>> + grep -i openstack >>> ++ subscription-manager list --available >>> ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 >>> ++ grep 'Pool Id' >>> ++ awk '{print $3}' >>> + subscription-manager subscribe --pool >>> Usage: subscription-manager subscribe [OPTIONS] >>> >>> ++ t >>> ++ exit 2 >>> >>> 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback >>> (most recent call last): >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 795, in main >>> _main(confFile) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 591, in _main >>> runSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >>> line 567, in runSequences >>> controller.runAllSequences() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >>> line 57, in runAllSequences >>> sequence.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 154, in run >>> step.run() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", >>> line 60, in run >>> function() >>> File >>> "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", >>> line 139, in serverprep >>> server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) >>> File >>> "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", >>> line 399, in execute >>> raise ScriptRuntimeError("Error running 
remote script") >>> ScriptRuntimeError: Error running remote script >>> >>> >>> >>> _______________________________________________ >>> rhos-list mailing listrhos-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhos-list >>> > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -- Martin M?gr Openstack Red Hat Czech cell: +420-775-291-585 phone: +420-532-294-183 ext.: 82-62183 irc: mmagr @ #brno, #cloud, #openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlabocki at redhat.com Mon Feb 4 19:47:06 2013 From: jlabocki at redhat.com (James Labocki) Date: Mon, 4 Feb 2013 14:47:06 -0500 (EST) Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <510FF1DC.4080202@redhat.com> Message-ID: <1157636198.10341994.1360007226004.JavaMail.root@redhat.com> I attempted using openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you directed me to. Unfortunately it appears it requires RHEL 6.4. "OS support check... Host 10.16.46.104: RHEL version not supported. RHEL >6.4 required" RHEL 6.4 is not unavailable in RHN at this time. I believe that anyone outside of Red Hat trying to use the Folsom installation documentation might conclude that they could use packstack and RHEL 6.3, but I'm worried they would run into this same problem that I have encountered and not be able to complete installation using packstack. Is the correct action: 1. Amend the installation documentation at access.redhat.com to remove the references to packstack until openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL 6.4 are available in RHN. 2. 
Continue to find the root cause of the issue I am encountering with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and RHEL 6.3 -James ----- Original Message ----- > From: "Martin Magr" > To: "Yaniv Kaul" > Cc: "James Labocki" , "rhos-list" > Sent: Monday, February 4, 2013 12:37:32 PM > Subject: Re: [rhos-list] Packstack Interactive Error > > > On 02/04/2013 06:10 PM, Yaniv Kaul wrote: > > > > On 04/02/13 18:59, James Labocki wrote: > > > ----- Original Message ----- > > From: "Yaniv Kaul" To: "James Labocki" > Cc: "rhos-list" Sent: > Monday, February 4, 2013 11:53:18 AM > Subject: Re: [rhos-list] Packstack Interactive Error > > > On 04/02/13 18:36, James Labocki wrote: > > > ----- Original Message ----- > > From: "Yaniv Kaul" To: "James Labocki" > Cc: "rhos-list" Sent: > Monday, February 4, 2013 11:27:34 AM > Subject: Re: [rhos-list] Packstack Interactive Error > > On 04/02/13 18:16, James Labocki wrote: > > I ran into the following exception when installing via packstack > interactively. I'm not sure if anyone can help debug it or figure > out what I am doing incorrect. 
> > -James > > > # packstack > Welcome to Installer setup utility > Should Packstack install Glance ['y'| 'n'] [y] : y > Should Packstack install Cinder ['y'| 'n'] [y] : y > Should Packstack install Nova ['y'| 'n'] [y] : y > Should Packstack install Horizon ['y'| 'n'] [y] : y > Should Packstack install Swift ['y'| 'n'] [n] : y > Should Packstack install openstack client tools ['y'| 'n'] > [y] : y > Enter the path to your ssh Public key to install on servers > [/root/.ssh/id_rsa.pub] : > Enter the IP address of the MySQL server [10.16.46.104] : > Enter the password for the MySQL admin user : > Enter the IP address of the QPID service [10.16.46.104] : > Enter the IP address of the Keystone server [10.16.46.104] : > Enter the IP address of the Glance server [10.16.46.104] : > Enter the IP address of the Cinder server [10.16.46.104] : > Enter the IP address of the Nova API service [10.16.46.104] : > Enter the IP address of the Nova Cert service [10.16.46.104] > : > Enter the IP address of the Nova VNC proxy [10.16.46.104] : > Enter a comma separated list of IP addresses on which to > install the Nova Compute services [10.16.46.104] : > 10.16.46.104,10.16.46.106 > Enter the Private interface for Flat DHCP on the Nova compute > servers [eth1] : > Enter the IP address of the Nova Network service > [10.16.46.104] : > Enter the Public interface on the Nova network server [eth0] > : > Enter the Private interface for Flat DHCP on the Nova network > server [eth1] : > Enter the IP Range for Flat DHCP > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] > : > Enter the IP Range for Floating IP's > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : > Enter the IP address of the Nova Scheduler service > [10.16.46.104] : > Enter the IP address of the client server [10.16.46.104] : > Enter the IP address of the Horizon server [10.16.46.104] : > Enter the IP address of the Swift proxy service > [10.16.46.104] : > Enter the Swift Storage servers e.g. 
host/dev,host/dev > [10.16.46.104] : > Enter the number of swift storage zones, MUST be no bigger > than the number of storage devices configured [1] : > Enter the number of swift storage replicas, MUST be no bigger > than the number of storage zones configured [1] : > Enter FileSystem type for storage nodes ['xfs'| 'ext4'] > [ext4] : > Should packstack install EPEL on each server ['y'| 'n'] [n] : > y > Enter a comma separated list of URLs to any additional yum > repositories to install: > To subscribe each server to Red Hat enter a username here: > james.labocki > To subscribe each server to Red Hat enter your password here : > > Installer will be installed using the following configuration: > ============================================================== > os-glance-install: y > os-cinder-install: y > os-nova-install: y > os-horizon-install: y > os-swift-install: y > os-client-install: y > ssh-public-key: /root/.ssh/id_rsa.pub > mysql-host: 10.16.46.104 > mysql-pw: ******** > qpid-host: 10.16.46.104 > keystone-host: 10.16.46.104 > glance-host: 10.16.46.104 > cinder-host: 10.16.46.104 > novaapi-host: 10.16.46.104 > novacert-host: 10.16.46.104 > novavncproxy-hosts: 10.16.46.104 > novacompute-hosts: 10.16.46.104,10.16.46.106 > novacompute-privif: eth1 > novanetwork-host: 10.16.46.104 > novanetwork-pubif: eth0 > novanetwork-privif: eth1 > novanetwork-fixed-range: 192.168.32.0/22 > novanetwork-floating-range: 10.3.4.0/22 > novasched-host: 10.16.46.104 > osclient-host: 10.16.46.104 > os-horizon-host: 10.16.46.104 > os-swift-proxy: 10.16.46.104 > os-swift-storage: 10.16.46.104 > os-swift-storage-zones: 1 > os-swift-storage-replicas: 1 > os-swift-storage-fstype: ext4 > use-epel: y > additional-repo: > rh-username: james.labocki > rh-password: ******** > Proceed with the configuration listed above? (yes|no): yes > > Installing: > Clean Up... [ > DONE ] > Running Pre install scripts... 
[ > DONE ] > Setting Up ssh keys...root at 10.16.46.104 's password: > root at 10.16.46.104 's password: > [ DONE ] > Create MySQL Manifest... [ > DONE ] > Creating QPID Manifest... [ > DONE ] > Creating Keystone Manifest... [ > DONE ] > Adding Glance Keystone Manifest entries... [ > DONE ] > Creating Galnce Manifest... [ > DONE ] > Adding Cinder Keystone Manifest entries... [ > DONE ] > Checking if the Cinder server has a cinder-volumes vg... [ > DONE ] > Creating Cinder Manifest... [ > DONE ] > Adding Nova API Manifest entries... [ > DONE ] > Adding Nova Keystone Manifest entries... [ > DONE ] > Adding Nova Cert Manifest entries... [ > DONE ] > Adding Nova Compute Manifest entries... [ > DONE ] > Adding Nova Network Manifest entries... [ > DONE ] > Adding Nova Scheduler Manifest entries... [ > DONE ] > Adding Nova VNC Proxy Manifest entries... [ > DONE ] > Adding Nova Common Manifest entries... [ > DONE ] > Creating OS Client Manifest... [ > DONE ] > Creating OS Horizon Manifest... [ > DONE ] > Adding Swift Keystone Manifest entries... [ > DONE ] > Creating OS Swift builder Manifests... [ > DONE ] > Creating OS Swift proxy Manifests... [ > DONE ] > Creating OS Swift storage Manifests... [ > DONE ] > Creating OS Swift Common Manifests... [ > DONE ] > Preparing Servers...ERROR:root:============= STDERR ========== > ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to > the list of known hosts. > + trap t ERR > +++ uname -i > ++ '[' x86_64 = x86_64 ']' > ++ echo x86_64 > + export > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > Are you running upstream or downstream? Why do you need stuff > from > EPEL? > Y. Version = openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch > This is an ancient (relatively) build, please use > https://brewweb.devel.redhat.com/buildinfo?buildID=253313 Thanks > Yaniv. That is what is included in RHN when I install following the > product documentation at access.redhat.com. 
Should we push a new > version to the CDN? > My apologies. Above internal URLs are what we currently test and hope > to release soon as an update on RHN. > We'll push when it's ready (it's already in QE hands). > But now you made me look at the traceback. Perhaps > https://bugzilla.redhat.com/show_bug.cgi?id=904670 ? > > Y. > > This is the same problem IMHO. Following says that you are missing > /etc/yum.repos.d/redhat.repo file. > Traceback (most recent call last): > File "/usr/bin/openstack-config", line 49, in > conf.readfp(open(cfgfile)) > IOError: [Errno 2] No such file or directory: > '/etc/yum.repos.d/redhat.repo' > And I think this happens, when packstack tries to set priority of > folsom repo. > > > > > > > > I didn't know if I would need something from EPEL, so I defaulted to > enabling it. Is this causing the problem? > Perhaps, but you shouldn't need it. > I suggest > http://download.lab.bos.redhat.com/rel-eng/OpenStack/Folsom/2013-01-30.1 > Y. > > > > > > > > + > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > + rpm -q epel-releasef > + rpm -Uvh > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > Signature, key ID 0608b895: NOKEY > + mkdir -p > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > + rpm -q epel-release > + yum install -y yum-plugin-priorities > Unable to read consumer identity > Warning: RPMDB altered outside of yum. 
> + rpm -q epel-release > + openstack-config --set /etc/yum.repos.d/redhat.repo > rhel-server-ost-6-folsom-rpms priority 1 > Traceback (most recent call last): > File "/usr/bin/openstack-config", line 49, in > conf.readfp(open(cfgfile)) > IOError: [Errno 2] No such file or directory: > '/etc/yum.repos.d/redhat.repo' > + true > + subscription-manager register --username=james.labocki > '--password=********' --autosubscribe > + subscription-manager list --consumed > + grep -i openstack > ++ subscription-manager list --available > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > ++ grep 'Pool Id' > ++ awk '{print $3}' > + subscription-manager subscribe --pool > Usage: subscription-manager subscribe [OPTIONS] > > ++ t > ++ exit 2 > > [ ERROR ] > ERROR:root:Traceback (most recent call last): > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 795, in main > _main(confFile) > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 591, in _main > runSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 567, in runSequences > controller.runAllSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > line 57, in runAllSequences > sequence.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 154, in run > step.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 60, in run > function() > File > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > line 139, in serverprep > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > File > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > line 399, in execute > raise ScriptRuntimeError("Error running remote script") > ScriptRuntimeError: Error running remote script > > Error running remote script > Please check log file > 
/var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > for more information > [root at rhc-05 ~]# cat > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > 2013-02-01 15:22:45::ERROR::common_utils::394::root:: > ============= STDERR ========== > 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: > Permanently added '10.16.46.104' (RSA) to the list of known > hosts. > + trap t ERR > +++ uname -i > ++ '[' x86_64 = x86_64 ']' > ++ echo x86_64 > + export > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + > EPEL_RPM_URL= > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > + rpm -q epel-release > + rpm -Uvh > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > Signature, key ID 0608b895: NOKEY > + mkdir -p > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > + rpm -q epel-release > + yum install -y yum-plugin-priorities > Unable to read consumer identity > Warning: RPMDB altered outside of yum. 
> + rpm -q epel-release > + openstack-config --set /etc/yum.repos.d/redhat.repo > rhel-server-ost-6-folsom-rpms priority 1 > Traceback (most recent call last): > File "/usr/bin/openstack-config", line 49, in > conf.readfp(open(cfgfile)) > IOError: [Errno 2] No such file or directory: > '/etc/yum.repos.d/redhat.repo' > + true > + subscription-manager register --username=james.labocki > '--password=********' --autosubscribe > + subscription-manager list --consumed > + grep -i openstack > ++ subscription-manager list --available > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > ++ grep 'Pool Id' > ++ awk '{print $3}' > + subscription-manager subscribe --pool > Usage: subscription-manager subscribe [OPTIONS] > > ++ t > ++ exit 2 > > 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback > (most recent call last): > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 795, in main > _main(confFile) > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 591, in _main > runSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 567, in runSequences > controller.runAllSequences() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > line 57, in runAllSequences > sequence.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 154, in run > step.run() > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > line 60, in run > function() > File > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > line 139, in serverprep > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > File > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > line 399, in execute > raise ScriptRuntimeError("Error running remote script") > ScriptRuntimeError: Error running remote script > > > > 
_______________________________________________ > rhos-list mailing list rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > > _______________________________________________ > rhos-list mailing list rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > -- > > Martin Mágr > Openstack > Red Hat Czech > > cell: +420-775-291-585 > phone: +420-532-294-183 > ext.: 82-62183 > irc: mmagr @ #brno, #cloud, #openstack From jlabocki at redhat.com Mon Feb 4 20:04:01 2013 From: jlabocki at redhat.com (James Labocki) Date: Mon, 4 Feb 2013 15:04:01 -0500 (EST) Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <1157636198.10341994.1360007226004.JavaMail.root@redhat.com> Message-ID: <1439229355.10346513.1360008241913.JavaMail.root@redhat.com> I must be recovering from the Super Bowl party last night ... disregard the comment on RHEL 6.4 :D ----- Original Message ----- > From: "James Labocki" > To: "Martin Magr" , "Yaniv Kaul" > Cc: "rhos-list" > Sent: Monday, February 4, 2013 2:47:06 PM > Subject: Re: [rhos-list] Packstack Interactive Error > > I attempted using > openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you > directed me to. Unfortunately it appears it requires RHEL 6.4. > > "OS support check... Host 10.16.46.104: RHEL version not supported. > RHEL >6.4 required" > > RHEL 6.4 is not unavailable in RHN at this time. I believe that > anyone outside of Red Hat trying to use the Folsom installation > documentation might conclude that they could use packstack and RHEL > 6.3, but I'm worried they would run into this same problem that I > have encountered and not be able to complete installation using > packstack. Is the correct action: > > 1. Amend the installation documentation at access.redhat.com to > remove the references to packstack until > openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL > 6.4 are available in RHN. > > 2. 
Continue to find the root cause of the issue I am encountering > with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and RHEL > 6.3 > > -James > > ----- Original Message ----- > > From: "Martin Magr" > > To: "Yaniv Kaul" > > Cc: "James Labocki" , "rhos-list" > > > > Sent: Monday, February 4, 2013 12:37:32 PM > > Subject: Re: [rhos-list] Packstack Interactive Error > > > > > > On 02/04/2013 06:10 PM, Yaniv Kaul wrote: > > > > > > > > On 04/02/13 18:59, James Labocki wrote: > > > > > > ----- Original Message ----- > > > > From: "Yaniv Kaul" To: "James Labocki" > > Cc: "rhos-list" Sent: > > Monday, February 4, 2013 11:53:18 AM > > Subject: Re: [rhos-list] Packstack Interactive Error > > > > > > On 04/02/13 18:36, James Labocki wrote: > > > > > > ----- Original Message ----- > > > > From: "Yaniv Kaul" To: "James Labocki" > > Cc: "rhos-list" Sent: > > Monday, February 4, 2013 11:27:34 AM > > Subject: Re: [rhos-list] Packstack Interactive Error > > > > On 04/02/13 18:16, James Labocki wrote: > > > > I ran into the following exception when installing via packstack > > interactively. I'm not sure if anyone can help debug it or figure > > out what I am doing incorrect. 
> > > > -James > > > > > > # packstack > > Welcome to Installer setup utility > > Should Packstack install Glance ['y'| 'n'] [y] : y > > Should Packstack install Cinder ['y'| 'n'] [y] : y > > Should Packstack install Nova ['y'| 'n'] [y] : y > > Should Packstack install Horizon ['y'| 'n'] [y] : y > > Should Packstack install Swift ['y'| 'n'] [n] : y > > Should Packstack install openstack client tools ['y'| 'n'] > > [y] : y > > Enter the path to your ssh Public key to install on servers > > [/root/.ssh/id_rsa.pub] : > > Enter the IP address of the MySQL server [10.16.46.104] : > > Enter the password for the MySQL admin user : > > Enter the IP address of the QPID service [10.16.46.104] : > > Enter the IP address of the Keystone server [10.16.46.104] : > > Enter the IP address of the Glance server [10.16.46.104] : > > Enter the IP address of the Cinder server [10.16.46.104] : > > Enter the IP address of the Nova API service [10.16.46.104] : > > Enter the IP address of the Nova Cert service [10.16.46.104] > > : > > Enter the IP address of the Nova VNC proxy [10.16.46.104] : > > Enter a comma separated list of IP addresses on which to > > install the Nova Compute services [10.16.46.104] : > > 10.16.46.104,10.16.46.106 > > Enter the Private interface for Flat DHCP on the Nova compute > > servers [eth1] : > > Enter the IP address of the Nova Network service > > [10.16.46.104] : > > Enter the Public interface on the Nova network server [eth0] > > : > > Enter the Private interface for Flat DHCP on the Nova network > > server [eth1] : > > Enter the IP Range for Flat DHCP > > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [192.168.32.0/22] > > : > > Enter the IP Range for Floating IP's > > ['^([\\d]{1|3}\\.){3}[\\d]{1|3}/\\d\\d?$'] [10.3.4.0/22] : > > Enter the IP address of the Nova Scheduler service > > [10.16.46.104] : > > Enter the IP address of the client server [10.16.46.104] : > > Enter the IP address of the Horizon server [10.16.46.104] : > > Enter the IP address of the 
Swift proxy service > > [10.16.46.104] : > > Enter the Swift Storage servers e.g. host/dev,host/dev > > [10.16.46.104] : > > Enter the number of swift storage zones, MUST be no bigger > > than the number of storage devices configured [1] : > > Enter the number of swift storage replicas, MUST be no bigger > > than the number of storage zones configured [1] : > > Enter FileSystem type for storage nodes ['xfs'| 'ext4'] > > [ext4] : > > Should packstack install EPEL on each server ['y'| 'n'] [n] : > > y > > Enter a comma separated list of URLs to any additional yum > > repositories to install: > > To subscribe each server to Red Hat enter a username here: > > james.labocki > > To subscribe each server to Red Hat enter your password here : > > > > Installer will be installed using the following configuration: > > ============================================================== > > os-glance-install: y > > os-cinder-install: y > > os-nova-install: y > > os-horizon-install: y > > os-swift-install: y > > os-client-install: y > > ssh-public-key: /root/.ssh/id_rsa.pub > > mysql-host: 10.16.46.104 > > mysql-pw: ******** > > qpid-host: 10.16.46.104 > > keystone-host: 10.16.46.104 > > glance-host: 10.16.46.104 > > cinder-host: 10.16.46.104 > > novaapi-host: 10.16.46.104 > > novacert-host: 10.16.46.104 > > novavncproxy-hosts: 10.16.46.104 > > novacompute-hosts: 10.16.46.104,10.16.46.106 > > novacompute-privif: eth1 > > novanetwork-host: 10.16.46.104 > > novanetwork-pubif: eth0 > > novanetwork-privif: eth1 > > novanetwork-fixed-range: 192.168.32.0/22 > > novanetwork-floating-range: 10.3.4.0/22 > > novasched-host: 10.16.46.104 > > osclient-host: 10.16.46.104 > > os-horizon-host: 10.16.46.104 > > os-swift-proxy: 10.16.46.104 > > os-swift-storage: 10.16.46.104 > > os-swift-storage-zones: 1 > > os-swift-storage-replicas: 1 > > os-swift-storage-fstype: ext4 > > use-epel: y > > additional-repo: > > rh-username: james.labocki > > rh-password: ******** > > Proceed with the configuration 
listed above? (yes|no): yes > > > > Installing: > > Clean Up... [ > > DONE ] > > Running Pre install scripts... [ > > DONE ] > > Setting Up ssh keys...root at 10.16.46.104 's password: > > root at 10.16.46.104 's password: > > [ DONE ] > > Create MySQL Manifest... [ > > DONE ] > > Creating QPID Manifest... [ > > DONE ] > > Creating Keystone Manifest... [ > > DONE ] > > Adding Glance Keystone Manifest entries... [ > > DONE ] > > Creating Galnce Manifest... [ > > DONE ] > > Adding Cinder Keystone Manifest entries... [ > > DONE ] > > Checking if the Cinder server has a cinder-volumes vg... [ > > DONE ] > > Creating Cinder Manifest... [ > > DONE ] > > Adding Nova API Manifest entries... [ > > DONE ] > > Adding Nova Keystone Manifest entries... [ > > DONE ] > > Adding Nova Cert Manifest entries... [ > > DONE ] > > Adding Nova Compute Manifest entries... [ > > DONE ] > > Adding Nova Network Manifest entries... [ > > DONE ] > > Adding Nova Scheduler Manifest entries... [ > > DONE ] > > Adding Nova VNC Proxy Manifest entries... [ > > DONE ] > > Adding Nova Common Manifest entries... [ > > DONE ] > > Creating OS Client Manifest... [ > > DONE ] > > Creating OS Horizon Manifest... [ > > DONE ] > > Adding Swift Keystone Manifest entries... [ > > DONE ] > > Creating OS Swift builder Manifests... [ > > DONE ] > > Creating OS Swift proxy Manifests... [ > > DONE ] > > Creating OS Swift storage Manifests... [ > > DONE ] > > Creating OS Swift Common Manifests... [ > > DONE ] > > Preparing Servers...ERROR:root:============= STDERR ========== > > ERROR:root:Warning: Permanently added '10.16.46.104' (RSA) to > > the list of known hosts. > > + trap t ERR > > +++ uname -i > > ++ '[' x86_64 = x86_64 ']' > > ++ echo x86_64 > > + export > > EPEL_RPM_URL= > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > Are you running upstream or downstream? Why do you need stuff > > from > > EPEL? > > Y. 
Version = openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch > > This is an ancient (relatively) build, please use > > https://brewweb.devel.redhat.com/buildinfo?buildID=253313 Thanks > > Yaniv. That is what is included in RHN when I install following the > > product documentation at access.redhat.com. Should we push a new > > version to the CDN? > > My apologies. Above internal URLs are what we currently test and > > hope > > to release soon as an update on RHN. > > We'll push when it's ready (it's already in QE hands). > > But now you made me look at the traceback. Perhaps > > https://bugzilla.redhat.com/show_bug.cgi?id=904670 ? > > > > Y. > > > > This is the same problem IMHO. Following says that you are missing > > /etc/yum.repos.d/redhat.repo file. > > Traceback (most recent call last): > > File "/usr/bin/openstack-config", line 49, in > > conf.readfp(open(cfgfile)) > > IOError: [Errno 2] No such file or directory: > > '/etc/yum.repos.d/redhat.repo' > > And I think this happens, when packstack tries to set priority of > > folsom repo. > > > > > > > > > > > > > > > > I didn't know if I would need something from EPEL, so I defaulted > > to > > enabling it. Is this causing the problem? > > Perhaps, but you shouldn't need it. > > I suggest > > http://download.lab.bos.redhat.com/rel-eng/OpenStack/Folsom/2013-01-30.1 > > Y. > > > > > > > > > > > > > > > > + > > EPEL_RPM_URL= > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > > + rpm -q epel-releasef > > + rpm -Uvh > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > > Signature, key ID 0608b895: NOKEY > > + mkdir -p > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > > + rpm -q epel-release > > + yum install -y yum-plugin-priorities > > Unable to read consumer identity > > Warning: RPMDB altered outside of yum. 
> > + rpm -q epel-release > > + openstack-config --set /etc/yum.repos.d/redhat.repo > > rhel-server-ost-6-folsom-rpms priority 1 > > Traceback (most recent call last): > > File "/usr/bin/openstack-config", line 49, in > > conf.readfp(open(cfgfile)) > > IOError: [Errno 2] No such file or directory: > > '/etc/yum.repos.d/redhat.repo' > > + true > > + subscription-manager register --username=james.labocki > > '--password=********' --autosubscribe > > + subscription-manager list --consumed > > + grep -i openstack > > ++ subscription-manager list --available > > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > > ++ grep 'Pool Id' > > ++ awk '{print $3}' > > + subscription-manager subscribe --pool > > Usage: subscription-manager subscribe [OPTIONS] > > > > ++ t > > ++ exit 2 > > > > [ ERROR ] > > ERROR:root:Traceback (most recent call last): > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 795, in main > > _main(confFile) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 591, in _main > > runSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 567, in runSequences > > controller.runAllSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > > line 57, in runAllSequences > > sequence.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 154, in run > > step.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 60, in run > > function() > > File > > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > > line 139, in serverprep > > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > > line 399, in execute > > raise ScriptRuntimeError("Error running remote script") > > 
ScriptRuntimeError: Error running remote script > > > > Error running remote script > > Please check log file > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > > for more information > > [root at rhc-05 ~]# cat > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/openstack-setup_2013_02_01_15_19_12.log > > 2013-02-01 15:22:45::ERROR::common_utils::394::root:: > > ============= STDERR ========== > > 2013-02-01 15:22:45::ERROR::common_utils::395::root:: Warning: > > Permanently added '10.16.46.104' (RSA) to the list of known > > hosts. > > + trap t ERR > > +++ uname -i > > ++ '[' x86_64 = x86_64 ']' > > ++ echo x86_64 > > + export > > EPEL_RPM_URL= > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > + > > EPEL_RPM_URL= > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > + grep 'Red Hat Enterprise Linux' /etc/redhat-release > > + rpm -q epel-release > > + rpm -Uvh > > http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > > warning: /var/tmp/rpm-tmp.WAbeqC: Header V3 RSA/SHA256 > > Signature, key ID 0608b895: NOKEY > > + mkdir -p > > /var/tmp/ecf8e442-fd38-4510-a370-c385e1460387/manifests > > + rpm -q epel-release > > + yum install -y yum-plugin-priorities > > Unable to read consumer identity > > Warning: RPMDB altered outside of yum. 
> > + rpm -q epel-release > > + openstack-config --set /etc/yum.repos.d/redhat.repo > > rhel-server-ost-6-folsom-rpms priority 1 > > Traceback (most recent call last): > > File "/usr/bin/openstack-config", line 49, in > > conf.readfp(open(cfgfile)) > > IOError: [Errno 2] No such file or directory: > > '/etc/yum.repos.d/redhat.repo' > > + true > > + subscription-manager register --username=james.labocki > > '--password=********' --autosubscribe > > + subscription-manager list --consumed > > + grep -i openstack > > ++ subscription-manager list --available > > ++ grep -e 'Red Hat OpenStack' -m 1 -A 2 > > ++ grep 'Pool Id' > > ++ awk '{print $3}' > > + subscription-manager subscribe --pool > > Usage: subscription-manager subscribe [OPTIONS] > > > > ++ t > > ++ exit 2 > > > > 2013-02-01 15:22:45::ERROR::run_setup::803::root:: Traceback > > (most recent call last): > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 795, in main > > _main(confFile) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 591, in _main > > runSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > > line 567, in runSequences > > controller.runAllSequences() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > > line 57, in runAllSequences > > sequence.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 154, in run > > step.run() > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", > > line 60, in run > > function() > > File > > "/usr/lib/python2.6/site-packages/packstack/plugins/serverprep_901.py", > > line 139, in serverprep > > server.execute(maskList=[controller.CONF["CONFIG_RH_PASSWORD"].strip()]) > > File > > "/usr/lib/python2.6/site-packages/packstack/installer/common_utils.py", > > line 399, in execute > > raise ScriptRuntimeError("Error running 
remote script") > > ScriptRuntimeError: Error running remote script > > > > > > > > _______________________________________________ > > rhos-list mailing list rhos-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhos-list > > > > > > _______________________________________________ > > rhos-list mailing list rhos-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhos-list > > > > -- > > > > Martin M?gr > > Openstack > > Red Hat Czech > > > > cell: +420-775-291-585 > > phone: +420-532-294-183 > > ext.: 82-62183 > > irc: mmagr @ #brno, #cloud, #openstack > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From prmarino1 at gmail.com Mon Feb 4 21:19:24 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Mon, 04 Feb 2013 16:19:24 -0500 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <1439229355.10346513.1360008241913.JavaMail.root@redhat.com> Message-ID: <511025dc.1883650a.2632.2528@mx.google.com> An HTML attachment was scrubbed... URL: From pmyers at redhat.com Tue Feb 5 03:56:28 2013 From: pmyers at redhat.com (Perry Myers) Date: Mon, 04 Feb 2013 22:56:28 -0500 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <511025dc.1883650a.2632.2528@mx.google.com> References: <511025dc.1883650a.2632.2528@mx.google.com> Message-ID: <511082EC.6080303@redhat.com> >> I attempted using >> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you >> directed me to. Unfortunately it appears it requires RHEL 6.4. >> >> "OS support check... Host 10.16.46.104: RHEL version not supported. >> RHEL >6.4 required" >> >> RHEL 6.4 is not unavailable in RHN at this time. 
I believe that >> anyone outside of Red Hat trying to use the Folsom installation >> documentation might conclude that they could use packstack and RHEL >> 6.3, >> but I'm worried they would run into this same problem that I >> have encountered and not be able to complete installation using >> packstack. Is the correct action: >> >> 1. Amend the installation documentation at access.redhat.com to >> remove the references to packstack until >> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL >> 6.4 are available in RHN. RHEL 6.4 is not yet GA, but we want to make sure that users of RHOS 2.1 are starting with at least RHEL 6.4 Beta, which is available on RHN/CDN So if you're using RHEL 6.3, that error message is valid. Best to start with the RHEL 6.4 Beta. The docs do instruct users on enabling the beta repositories: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html But if you're using PackStack, it probably doesn't set the priorities properly for the Beta channel and it also probably doesn't enable that channel by default. Derek, thoughts on the above? >> >> 2. Continue to find the root cause of the issue I am encountering >> with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and RHEL >> 6.3 The second issue you hit is valid (https://bugzilla.redhat.com/show_bug.cgi?id=904670) and does need to be fixed. It should be fixed in the next few weeks and pushed up to the RHOS Folsom Preview channels. 
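For anyone following the thread at this point, the manual repo preparation discussed above can be sketched as follows. This is a sketch only: the commands are echoed rather than executed, the username is a placeholder, and the channel/repo names are the ones quoted elsewhere in this thread.

```shell
# Rough sketch of preparing a RHEL 6.4 Beta host for the RHOS Folsom
# preview, assembled from commands quoted in this thread. The commands
# are echoed, not run; replace the username placeholder before using.
rhn_user="your-rhn-username"   # placeholder
beta_repo="rhel-6-server-beta-rpms"
ost_repo="rhel-server-ost-6-folsom-rpms"

# Register and auto-attach a subscription (this grants access to the
# Beta channel but does not enable it):
echo "subscription-manager register --username=${rhn_user} --autosubscribe"

# Enable the Beta channel explicitly:
echo "yum-config-manager --enable ${beta_repo}"

# Give the Folsom preview repo top yum priority, as packstack's own
# log earlier in the thread shows it doing:
echo "openstack-config --set /etc/yum.repos.d/redhat.repo ${ost_repo} priority 1"
```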
Perry From pmyers at redhat.com Tue Feb 5 04:00:42 2013 From: pmyers at redhat.com (Perry Myers) Date: Mon, 04 Feb 2013 23:00:42 -0500 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <6A8340D9D5097144961EEF98758D7089130D5D@marathon> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com> <421EE192CD0C6C49A23B97027914202B03B17C@marathon> <421EE192CD0C6C49A23B97027914202B03B233@marathon> <421EE192CD0C6C49A23B97027914202B03E13B@marchand> <1359939557.2145.10.camel@gil.surfgate.org> <6A8340D9D5097144961EEF98758D7089130D5D@marathon> Message-ID: <511083EA.1040808@redhat.com> On 02/04/2013 08:29 AM, Kodiak Firesmith wrote: > Hello All, > I can't speak for the requirements of essex over folsom part of your question, > but I can cautiously assert that "Essex" is available for RHEL6 still via EPEL > repos based on a table of versions here: > http://docs.openstack.org/folsom/openstack-compute/install/yum/content/version.html#d6e297 > > ...and the output of a yum query this morning from my EL6 workstation: > > # yum --showduplicates search openstack-nova-compute > ... > N/S Matched: openstack-nova-compute > openstack-nova-compute-2012.1.1-15.el6.noarch : OpenStack Nova Virtual Machine > control service > openstack-nova-compute-2012.1.3-1.el6.noarch : OpenStack Nova Virtual Machine > control service <-[Essex] > openstack-nova-compute-2012.2-2.el6.noarch : OpenStack Nova Virtual Machine > control service <- [Folsom] > > # yum info openstack-nova-compute-2012.1.3-1.el6.noarch > Name : openstack-nova-compute > Arch : noarch > Version : 2012.1.3 > Release : 1.el6 > Repo : epel-x86_64-server-6 Yes, if you use --showduplicates it will show you older versions of the same package. Adding the Folsom RPMs to the EPEL6 repos in a sense overwrote the Essex versions since a simple yum install will always pull the latest packages only. 
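As an illustration of the "latest version wins" behaviour described here, one way to pin the Essex build is to spell out the full NVR instead of the bare package name. A sketch only: the commands are echoed rather than run, and the mirror URL is a made-up placeholder.

```shell
# yum resolves a bare package name to the newest version it can see, so
# "yum install openstack-nova-compute" would pull the Folsom build.
# Naming the full name-version-release (NVR) pins the Essex one instead.
nvr="openstack-nova-compute-2012.1.3-1.el6.noarch"   # 2012.1.3 = Essex

# Install by exact NVR (works only while a reachable repo still carries it):
echo "yum install -y ${nvr}"

# Or fetch the exact RPM and install it directly (URL is a placeholder):
echo "wget http://mirror.example.com/epel/6/x86_64/${nvr}.rpm"
echo "rpm -i ${nvr}.rpm"
```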
But the packages are certainly still available for download if you get very specific and include the NVR or just download them via a wget from an online repo :) Also it should be noted, that the RHOS Preview includes both Essex and Folsom repositories, but we definitely recommend that folks use Folsom at this point. Perry From gilles at redhat.com Tue Feb 5 04:43:29 2013 From: gilles at redhat.com (Gilles Dubreuil) Date: Tue, 05 Feb 2013 15:43:29 +1100 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <511083EA.1040808@redhat.com> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com> <421EE192CD0C6C49A23B97027914202B03B17C@marathon> <421EE192CD0C6C49A23B97027914202B03B233@marathon> <421EE192CD0C6C49A23B97027914202B03E13B@marchand> <1359939557.2145.10.camel@gil.surfgate.org> <6A8340D9D5097144961EEF98758D7089130D5D@marathon> <511083EA.1040808@redhat.com> Message-ID: <1360039409.5791.28.camel@gil.surfgate.org> On Mon, 2013-02-04 at 23:00 -0500, Perry Myers wrote: > On 02/04/2013 08:29 AM, Kodiak Firesmith wrote: > > Hello All, > > I can't speak for the requirements of essex over folsom part of your question, > > but I can cautiously assert that "Essex" is available for RHEL6 still via EPEL > > repos based on a table of versions here: > > http://docs.openstack.org/folsom/openstack-compute/install/yum/content/version.html#d6e297 > > > > ...and the output of a yum query this morning from my EL6 workstation: > > > > # yum --showduplicates search openstack-nova-compute > > ... 
> > N/S Matched: openstack-nova-compute > > openstack-nova-compute-2012.1.1-15.el6.noarch : OpenStack Nova Virtual Machine > > control service > > openstack-nova-compute-2012.1.3-1.el6.noarch : OpenStack Nova Virtual Machine > > control service <-[Essex] > > openstack-nova-compute-2012.2-2.el6.noarch : OpenStack Nova Virtual Machine > > control service <- [Folsom] > > > > # yum info openstack-nova-compute-2012.1.3-1.el6.noarch > > Name : openstack-nova-compute > > Arch : noarch > > Version : 2012.1.3 > > Release : 1.el6 > > Repo : epel-x86_64-server-6 > > Yes, if you use --showduplicates it will show you older versions of the > same package. Is that the case as well for EPEL? Because I cannot get any older packages from EPEL6 at all. Actually I never really realized that EPEL has only one repo, unlike Fedora with the release and update ones. So for EPEL, any previous rpm version gets overridden by updates. This might not be happening for Derrick and Kodiak because I understand they are using EPEL through a software channel on the Red Hat Network Satellite, which keeps the older packages by default. Kodiak, could you please confirm? I know that's a bit out of the original question, but I'm curious to see where we could get Essex from EPEL6 if we needed to. > Adding the Folsom RPMs to the EPEL6 repos in a sense overwrote the Essex > versions since a simple yum install will always pull the latest packages > only. > > But the packages are certainly still available for download if you get > very specific and include the NVR or just download them via a wget from > an online repo :) > > Also it should be noted, that the RHOS Preview includes both Essex and > Folsom repositories, but we definitely recommend that folks use Folsom > at this point. > I understand you are using RHEL 6.2; in the meantime, I suggest updating to RHEL 6.3.
Cheers, Gilles From apevec at redhat.com Tue Feb 5 10:25:26 2013 From: apevec at redhat.com (Alan Pevec) Date: Tue, 5 Feb 2013 05:25:26 -0500 (EST) Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <1360039409.5791.28.camel@gil.surfgate.org> Message-ID: <311065949.193118.1360059926478.JavaMail.root@redhat.com> > Is that the case as well for EPEL? Because I cannot get any older > packages from EPEL6 at all. Correct, EPEL doesn't keep old RPMs around. > This might not be happening for Derrick and Kodiak because I > understand they are using EPEL through a software channel on the Red There isn't an "EPEL through RHN" thing - on RHN you get RHOS packages. And yes, RHN keeps old RPMs around. > I know that's a bit out of the original question, but I'm curious to > see where we could get Essex from EPEL6 if we needed to. After Folsom was pushed to EPEL6, old Essex RPMs were moved to a side-repo: http://repos.fedorapeople.org/repos/openstack/openstack-essex/README Cheers, Alan From gilles at redhat.com Tue Feb 5 11:55:42 2013 From: gilles at redhat.com (Gilles Dubreuil) Date: Tue, 05 Feb 2013 22:55:42 +1100 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <311065949.193118.1360059926478.JavaMail.root@redhat.com> References: <311065949.193118.1360059926478.JavaMail.root@redhat.com> Message-ID: <1360065342.4518.23.camel@gil.surfgate.org> On Tue, 2013-02-05 at 05:25 -0500, Alan Pevec wrote: > > Is that the case as well for EPEL? Because I cannot get any older > > packages from EPEL6 at all. > > Correct, EPEL doesn't keep old RPMs around. > > > This might not be happening for Derrick and Kodiak because I > > understand they are using EPEL through a software channel on the Red > > There isn't an "EPEL through RHN" thing - on RHN you get RHOS packages. Sorry, what I meant is creating a RHN Satellite Custom Software Channel, possibly linked to a yum repo.
This way packages are pushed in the channel, old ones are not removed by default. > And yes, RHN keeps old RPMs around. > > I know that's a bit out of the original question, but I'm curious to > > see where we could get Essex from EPEL6 if we needed to. > > After Folsom was pushed to EPEL6, old Essex RPMs where moved to a side-repo: > http://repos.fedorapeople.org/repos/openstack/openstack-essex/README > > Cheers, > Alan From gilles at redhat.com Tue Feb 5 12:04:43 2013 From: gilles at redhat.com (Gilles Dubreuil) Date: Tue, 05 Feb 2013 23:04:43 +1100 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <311065949.193118.1360059926478.JavaMail.root@redhat.com> References: <311065949.193118.1360059926478.JavaMail.root@redhat.com> Message-ID: <1360065883.4518.24.camel@gil.surfgate.org> On Tue, 2013-02-05 at 05:25 -0500, Alan Pevec wrote: > > Is that the case as well for EPEL? Because I cannot get any older > > packages from EPEL6 at all. > > Correct, EPEL doesn't keep old RPMs around. > > > This might be not be happening for Derrick and Kodiak because I > > understand they are using EPEL through a software channel on the Red > > There isn't "EPEL through RHN" thing - on RHN you get RHOS packages. > And yes, RHN keeps old RPMs around. > > > I know that's a bit out of the original question, but I'm curious to > > see where we could get Essex from EPEL6 if we needed to. > > After Folsom was pushed to EPEL6, old Essex RPMs where moved to a side-repo: > http://repos.fedorapeople.org/repos/openstack/openstack-essex/README > Alan, BTW, thank you for the confirmation and the link :-) Cheers Gilles From jlabocki at redhat.com Tue Feb 5 15:47:30 2013 From: jlabocki at redhat.com (James Labocki) Date: Tue, 5 Feb 2013 10:47:30 -0500 (EST) Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <511082EC.6080303@redhat.com> Message-ID: <757625824.10855647.1360079250497.JavaMail.root@redhat.com> Thanks all. 
I was able to get packstack working using the newer version and RHEL 6.4. It's a really nice utility for automating the installation! A couple of quick comments. 1. It seems that packstack automatically subscribes the system using certificate-based entitlement (subscription-manager). This can cause some issues if someone already has a system subscribed using classic entitlement. Perhaps adding a check and disabling classic entitlement/yum plugins could solve this. 2. If there were a way to run packstack with an option to determine the state of the system(s) or where the installation has failed when previously run, it would be helpful to the end user. -James ----- Original Message ----- > From: "Perry Myers" > To: "Paul Robert Marino" , "Derek Higgins" > Cc: "James Labocki" , "Martin Magr" , "Yaniv Kaul" , > "rhos-list" > Sent: Monday, February 4, 2013 10:56:28 PM > Subject: Re: [rhos-list] Packstack Interactive Error > > >> I attempted using > >> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you > >> directed me to. Unfortunately it appears it requires RHEL 6.4. > >> > >> "OS support check... Host 10.16.46.104: RHEL version not > >> supported. > >> RHEL >6.4 required" > >> > >> RHEL 6.4 is not available in RHN at this time. I believe that > >> anyone outside of Red Hat trying to use the Folsom installation > >> documentation might conclude that they could use packstack and > >> RHEL > >> 6.3, > > >> but I'm worried they would run into this same problem that I > >> have encountered and not be able to complete installation using > >> packstack. Is the correct action: > >> > >> 1. Amend the installation documentation at access.redhat.com to > >> remove the references to packstack until > >> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL > >> 6.4 are available in RHN.
> > RHEL 6.4 is not yet GA, but we want to make sure that users of RHOS > 2.1 > are starting with at least RHEL 6.4 Beta, which is available on > RHN/CDN > > So if you're using RHEL 6.3, that error message is valid. Best to > start > with the RHEL 6.4 Beta. > > The docs do instruct users on enabling the beta repositories: > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html > > But if you're using PackStack, it probably doesn't set the priorities > properly for the Beta channel and it also probably doesn't enable > that > channel by default. > > Derek, thoughts on the above? > > >> > >> 2. Continue to find the root cause of the issue I am encountering > >> with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and > >> RHEL > >> 6.3 > > The second issue you hit is valid > (https://bugzilla.redhat.com/show_bug.cgi?id=904670) and does need to > be > fixed. It should be fixed in the next few weeks and pushed up to the > RHOS Folsom Preview channels. > > Perry > > From shshang at cisco.com Tue Feb 5 20:45:12 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Tue, 5 Feb 2013 20:45:12 +0000 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview Message-ID: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> Hi, experts: I am trying to install Red Hat OpenStack 2.0 (Folsom) preview on RHEL 6.4 beta. According to the Getting Started Guide, I can use the "packstack" tool to automate the installation. However, I noticed that a few parameters in the tool still refer to "nova-network" as shown below. CONFIG_NOVA_NETWORK_HOST CONFIG_NOVA_NETWORK_PUBIF CONFIG_NOVA_NETWORK_PRIVIF CONFIG_NOVA_NETWORK_FIXEDRANGE CONFIG_NOVA_NETWORK_FLOATRANGE Does that mean the setup will still run on top of nova-network, or will nova-network use the Quantum client plugin to communicate with the Quantum server? In addition, I don't see any Quantum-related parameters included in the tool.
When will it be available? Furthermore, does the tool support sub interface with VLAN TAGGING enabled, such as eth2.277? Thanks! Shixiong [cid:042D1C3B-E4F9-4826-940D-24F41AF35AA8] Shixiong Shang Solution Architect WWSP Digital Media Distribution Advanced Services CCIE R&S - #17235 shshang at cisco.com Phone: +1 919 392 5192 Mobile: +1 919 272 1358 Cisco Systems, Inc. 7200-4 Kit Creek Road RTP, NC 27709-4987 United States Cisco.com !--- Stay Hungry Stay Foolish ---! This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: A94EACAF-A587-477E-8B89-160A48E838C1.png Type: image/png Size: 9461 bytes Desc: A94EACAF-A587-477E-8B89-160A48E838C1.png URL: From pmyers at redhat.com Tue Feb 5 20:53:14 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 05 Feb 2013 15:53:14 -0500 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> Message-ID: <5111713A.3090809@redhat.com> On 02/05/2013 03:45 PM, Shixiong Shang (shshang) wrote: > Hi, experts: > > I am trying to install Redhat Openstack 2.0 (Folsom) preview on RHEL 6.4 > beta. According to the Getting Started Guide, I can use "packstack" tool > to automate the installation. However, I noticed that a few parameters > in the tool still refer to "nove-network" as shown below. 
> > CONFIG_NOVA_NETWORK_HOST > > CONFIG_NOVA_NETWORK_PUBIF > > CONFIG_NOVA_NETWORK_PRIVIF > > CONFIG_NOVA_NETWORK_FIXEDRANGE > > CONFIG_NOVA_NETWORK_FLOATRANGE > > > > Does that mean the setup will still run on top of nova-network, or > > nova-network will use quantum client plugin to communicate with Quantum > > server? > > > > In addition, I don't see any quantum related parameters are included in > > the tool. When will it be available? Right now, PackStack only supports Nova Networking. We will eventually have support for Quantum in PackStack, but that support is not completed yet. So if you specifically need to use Quantum, you would need to: * Use PackStack to install w/ Nova Networking * Manually convert from Nova Networking to Quantum Gary (cc'd) has a draft process for this, but it is still a little shaky. Gary, can you share that process? Perhaps Shixiong can try it out and give us feedback on whether it works for him or not. > Furthermore, does the tool support sub interface with VLAN TAGGING > enabled, such as eth2.277? I don't think so, but Derek would need to confirm. I think this would be a feature enhancement. Perry From red at fedoraproject.org Tue Feb 5 21:44:43 2013 From: red at fedoraproject.org (Sandro "red" Mathys) Date: Tue, 5 Feb 2013 22:44:43 +0100 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> Message-ID: On Tue, Feb 5, 2013 at 9:45 PM, Shixiong Shang (shshang) wrote: > > Furthermore, does the tool support sub interface with VLAN TAGGING enabled, such as eth2.277? It does; I used some eth0.1100 (pubif) and eth1.1101 (privif) with Packstack before. As far as I understand, VLAN subinterfaces work completely transparently anyway, i.e. Packstack won't even notice. -- Sandro -------------- next part -------------- An HTML attachment was scrubbed...
URL: From shshang at cisco.com Tue Feb 5 21:46:02 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Tue, 5 Feb 2013 21:46:02 +0000 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <5111713A.3090809@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F750F6A6C71@xmb-aln-x13.cisco.com> Hi, Perry: Thank you for the prompt response! Gary, if you can share with me the procedure, I will be more than happy to try it out and provide you with feedback. Shixiong On 2/5/13 3:53 PM, "Perry Myers" wrote: >On 02/05/2013 03:45 PM, Shixiong Shang (shshang) wrote: >> Hi, experts: >> >> I am trying to install Redhat Openstack 2.0 (Folsom) preview on RHEL 6.4 >> beta. According to the Getting Started Guide, I can use "packstack" tool >> to automate the installation. However, I noticed that a few parameters >> in the tool still refer to "nove-network" as shown below. >> >> CONFIG_NOVA_NETWORK_HOST >> CONFIG_NOVA_NETWORK_PUBIF >> CONFIG_NOVA_NETWORK_PRIVIF >> CONFIG_NOVA_NETWORK_FIXEDRANGE >> CONFIG_NOVA_NETWORK_FLOATRANGE >> >> Does that mean the setup will still run on top of nova-network, or >> nova-network will use quantum client plugin to communicate with Quantum >> server? >> >> In addition, I don't see any quantum related parameters are included in >> the tool. When will it be available? > >Right now, PackStack only supports Nova Networking. We will eventually >have support for Quantum in PackStack, but that support is not completed >yet. > >So if you specifically need to use Quantum, you would need to: >* Use PackStack to install w/ Nova Networking >* Manually convert from Nova Networking to Quantum > >Gary (cc'd) has a draft process for this, but it is still a little shaky. > >Gary can you share that process? Perhaps Shixiong can try it out and >give us feedback on whether it works for him or not. > >> Furthermore, does the tool support sub interface with VLAN TAGGING >> enabled, such as eth2.277?
> >I don't think so, but Derek would need to confirm. I think this would >be a feature enhancement. > >Perry From derekh at redhat.com Tue Feb 5 22:27:48 2013 From: derekh at redhat.com (Derek Higgins) Date: Tue, 05 Feb 2013 22:27:48 +0000 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <757625824.10855647.1360079250497.JavaMail.root@redhat.com> References: <757625824.10855647.1360079250497.JavaMail.root@redhat.com> Message-ID: <51118764.1040102@redhat.com> On 02/05/2013 03:47 PM, James Labocki wrote: > Thanks all. I was able to get packstack working using the newer version and RHEL 6.4. It's a really nice utility for automating the installation! A couple of quick comments. Thanks, James, for your feedback; see below. > > 1. It seems that packstack automatically subscribes the system using certificate based entitlement (subscription-manager). This can cause some issues if someone already has a system subscribed using classic entitlement. Perhaps adding a check and disabling classic entitlement/yum plugins could solve this. We're currently looking into supporting RHN Classic, so this hopefully won't be a problem soon. > > 2. If there was a way to run packstack with an option to determine the state of the system(s) or where the installation has failed when previously run it would be helpful to the end user. We can look into this; improving the feedback given to the user in the event of an error should help in this area a lot. > > -James > > ----- Original Message ----- >> From: "Perry Myers" >> To: "Paul Robert Marino" , "Derek Higgins" >> Cc: "James Labocki" , "Martin Magr" , "Yaniv Kaul" , >> "rhos-list" >> Sent: Monday, February 4, 2013 10:56:28 PM >> Subject: Re: [rhos-list] Packstack Interactive Error >> >>>> I attempted using >>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you >>>> directed me to. Unfortunately it appears it requires RHEL 6.4. >>>> >>>> "OS support check... Host 10.16.46.104: RHEL version not >>>> supported.
>>>> RHEL >6.4 required" >>>> >>>> RHEL 6.4 is not unavailable in RHN at this time. I believe that >>>> anyone outside of Red Hat trying to use the Folsom installation >>>> documentation might conclude that they could use packstack and >>>> RHEL >>>> 6.3, >> >>>> but I'm worried they would run into this same problem that I >>>> have encountered and not be able to complete installation using >>>> packstack. Is the correct action: >>>> >>>> 1. Amend the installation documentation at access.redhat.com to >>>> remove the references to packstack until >>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL >>>> 6.4 are available in RHN. >> >> RHEL 6.4 is not yet GA, but we want to make sure that users of RHOS >> 2.1 >> are starting with at least RHEL 6.4 Beta, which is available on >> RHN/CDN >> >> So if you're using RHEL 6.3, that error message is valid. Best to >> start >> with the RHEL 6.4 Beta. >> >> The docs do instruct users on enabling the beta repositories: >> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html >> >> But if you're using PackStack, it probably doesn't set the priorities >> properly for the Beta channel and it also probably doesn't enable >> that >> channel by default. >> >> Derek, thoughts on the above? >> >>>> >>>> 2. Continue to find the root cause of the issue I am encountering >>>> with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and >>>> RHEL >>>> 6.3 >> >> The second issue you hit is valid >> (https://bugzilla.redhat.com/show_bug.cgi?id=904670) and does need to >> be >> fixed. It should be fixed in the next few weeks and pushed up to the >> RHOS Folsom Preview channels. 
>> >> Perry >> >> > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From derekh at redhat.com Tue Feb 5 22:42:46 2013 From: derekh at redhat.com (Derek Higgins) Date: Tue, 05 Feb 2013 22:42:46 +0000 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <511082EC.6080303@redhat.com> References: <511025dc.1883650a.2632.2528@mx.google.com> <511082EC.6080303@redhat.com> Message-ID: <51118AE6.9020300@redhat.com> On 02/05/2013 03:56 AM, Perry Myers wrote: >>> I attempted using >>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you >>> directed me to. Unfortunately it appears it requires RHEL 6.4. >>> >>> "OS support check... Host 10.16.46.104: RHEL version not supported. >>> RHEL >6.4 required" >>> >>> RHEL 6.4 is not unavailable in RHN at this time. I believe that >>> anyone outside of Red Hat trying to use the Folsom installation >>> documentation might conclude that they could use packstack and RHEL >>> 6.3, > >>> but I'm worried they would run into this same problem that I >>> have encountered and not be able to complete installation using >>> packstack. Is the correct action: >>> >>> 1. Amend the installation documentation at access.redhat.com to >>> remove the references to packstack until >>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL >>> 6.4 are available in RHN. > > RHEL 6.4 is not yet GA, but we want to make sure that users of RHOS 2.1 > are starting with at least RHEL 6.4 Beta, which is available on RHN/CDN > > So if you're using RHEL 6.3, that error message is valid. Best to start > with the RHEL 6.4 Beta. 
> > The docs do instruct users on enabling the beta repositories: > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html > > But if you're using PackStack, it probably doesn't set the priorities > properly for the Beta channel and it also probably doesn't enable that > channel by default. > > Derek, thoughts on the above? We're currently registering with subscription-manager register --username=username --autosubscribe which on RHEL 6.4 subscribes you to 6.4 Beta, a priority is then set on the repo rhel-server-ost-6-folsom-rpms to ensure openstack packages are installed from here. I think this is the correct behaviour. > >>> >>> 2. Continue to find the root cause of the issue I am encountering >>> with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and RHEL >>> 6.3 > > The second issue you hit is valid > (https://bugzilla.redhat.com/show_bug.cgi?id=904670) and does need to be > fixed. It should be fixed in the next few weeks and pushed up to the > RHOS Folsom Preview channels. > > Perry > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From pmyers at redhat.com Tue Feb 5 22:49:12 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 05 Feb 2013 17:49:12 -0500 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <51118AE6.9020300@redhat.com> References: <511025dc.1883650a.2632.2528@mx.google.com> <511082EC.6080303@redhat.com> <51118AE6.9020300@redhat.com> Message-ID: <51118C68.5040904@redhat.com> On 02/05/2013 05:42 PM, Derek Higgins wrote: > On 02/05/2013 03:56 AM, Perry Myers wrote: >>>> I attempted using >>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you >>>> directed me to. Unfortunately it appears it requires RHEL 6.4. >>>> >>>> "OS support check... Host 10.16.46.104: RHEL version not supported. 
>>>> RHEL >6.4 required" >>>> >>>> RHEL 6.4 is not unavailable in RHN at this time. I believe that >>>> anyone outside of Red Hat trying to use the Folsom installation >>>> documentation might conclude that they could use packstack and RHEL >>>> 6.3, >> >>>> but I'm worried they would run into this same problem that I >>>> have encountered and not be able to complete installation using >>>> packstack. Is the correct action: >>>> >>>> 1. Amend the installation documentation at access.redhat.com to >>>> remove the references to packstack until >>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL >>>> 6.4 are available in RHN. >> >> RHEL 6.4 is not yet GA, but we want to make sure that users of RHOS 2.1 >> are starting with at least RHEL 6.4 Beta, which is available on RHN/CDN >> >> So if you're using RHEL 6.3, that error message is valid. Best to start >> with the RHEL 6.4 Beta. >> >> The docs do instruct users on enabling the beta repositories: >> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html >> >> But if you're using PackStack, it probably doesn't set the priorities >> properly for the Beta channel and it also probably doesn't enable that >> channel by default. >> >> Derek, thoughts on the above? > We're currently registering with > subscription-manager register --username=username --autosubscribe > > which on RHEL 6.4 subscribes you to 6.4 Beta, a priority is then set on > the repo rhel-server-ost-6-folsom-rpms to ensure openstack packages are > installed from here. I think this is the correct behaviour. Right. That subscription-manager command gets you access to the Beta channel, but does not _enable_ it unfortunately. So we'd need to include a command in packstack like: yum-config-manager --enable rhel-6-server-beta-rpms >> >>>> >>>> 2. 
Continue to find the root cause of the issue I am encountering >>>> with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and RHEL >>>> 6.3 >> >> The second issue you hit is valid >> (https://bugzilla.redhat.com/show_bug.cgi?id=904670) and does need to be >> fixed. It should be fixed in the next few weeks and pushed up to the >> RHOS Folsom Preview channels. >> >> Perry >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> > From derekh at redhat.com Tue Feb 5 22:54:53 2013 From: derekh at redhat.com (Derek Higgins) Date: Tue, 05 Feb 2013 22:54:53 +0000 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <51118C68.5040904@redhat.com> References: <511025dc.1883650a.2632.2528@mx.google.com> <511082EC.6080303@redhat.com> <51118AE6.9020300@redhat.com> <51118C68.5040904@redhat.com> Message-ID: <51118DBD.3060505@redhat.com> On 02/05/2013 10:49 PM, Perry Myers wrote: > On 02/05/2013 05:42 PM, Derek Higgins wrote: >> On 02/05/2013 03:56 AM, Perry Myers wrote: >>>>> I attempted using >>>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you >>>>> directed me to. Unfortunately it appears it requires RHEL 6.4. >>>>> >>>>> "OS support check... Host 10.16.46.104: RHEL version not supported. >>>>> RHEL >6.4 required" >>>>> >>>>> RHEL 6.4 is not unavailable in RHN at this time. I believe that >>>>> anyone outside of Red Hat trying to use the Folsom installation >>>>> documentation might conclude that they could use packstack and RHEL >>>>> 6.3, >>> >>>>> but I'm worried they would run into this same problem that I >>>>> have encountered and not be able to complete installation using >>>>> packstack. Is the correct action: >>>>> >>>>> 1. 
Amend the installation documentation at access.redhat.com to >>>>> remove the references to packstack until >>>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL >>>>> 6.4 are available in RHN. >>> >>> RHEL 6.4 is not yet GA, but we want to make sure that users of RHOS 2.1 >>> are starting with at least RHEL 6.4 Beta, which is available on RHN/CDN >>> >>> So if you're using RHEL 6.3, that error message is valid. Best to start >>> with the RHEL 6.4 Beta. >>> >>> The docs do instruct users on enabling the beta repositories: >>> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html >>> >>> But if you're using PackStack, it probably doesn't set the priorities >>> properly for the Beta channel and it also probably doesn't enable that >>> channel by default. >>> >>> Derek, thoughts on the above? >> We're currently registering with >> subscription-manager register --username=username --autosubscribe >> >> which on RHEL 6.4 subscribes you to 6.4 Beta, a priority is then set on >> the repo rhel-server-ost-6-folsom-rpms to ensure openstack packages are >> installed from here. I think this is the correct behaviour. > > Right. That subscription-manager command gets you access to the Beta > channel, but does not _enable_ it unfortunately. > > So we'd need to include a command in packstack like: > > yum-config-manager --enable rhel-6-server-beta-rpms Ok thanks, will create a bug for it. > >>> >>>>> >>>>> 2. Continue to find the root cause of the issue I am encountering >>>>> with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and RHEL >>>>> 6.3 >>> >>> The second issue you hit is valid >>> (https://bugzilla.redhat.com/show_bug.cgi?id=904670) and does need to be >>> fixed. It should be fixed in the next few weeks and pushed up to the >>> RHOS Folsom Preview channels. 
>>> >>> Perry >>> >>> _______________________________________________ >>> rhos-list mailing list >>> rhos-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhos-list >>> >> > From pmyers at redhat.com Tue Feb 5 22:56:53 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 05 Feb 2013 17:56:53 -0500 Subject: [rhos-list] Packstack Interactive Error In-Reply-To: <51118DBD.3060505@redhat.com> References: <511025dc.1883650a.2632.2528@mx.google.com> <511082EC.6080303@redhat.com> <51118AE6.9020300@redhat.com> <51118C68.5040904@redhat.com> <51118DBD.3060505@redhat.com> Message-ID: <51118E35.5080408@redhat.com> On 02/05/2013 05:54 PM, Derek Higgins wrote: > On 02/05/2013 10:49 PM, Perry Myers wrote: >> On 02/05/2013 05:42 PM, Derek Higgins wrote: >>> On 02/05/2013 03:56 AM, Perry Myers wrote: >>>>>> I attempted using >>>>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm that you >>>>>> directed me to. Unfortunately it appears it requires RHEL 6.4. >>>>>> >>>>>> "OS support check... Host 10.16.46.104: RHEL version not supported. >>>>>> RHEL >6.4 required" >>>>>> >>>>>> RHEL 6.4 is not unavailable in RHN at this time. I believe that >>>>>> anyone outside of Red Hat trying to use the Folsom installation >>>>>> documentation might conclude that they could use packstack and RHEL >>>>>> 6.3, >>>> >>>>>> but I'm worried they would run into this same problem that I >>>>>> have encountered and not be able to complete installation using >>>>>> packstack. Is the correct action: >>>>>> >>>>>> 1. Amend the installation documentation at access.redhat.com to >>>>>> remove the references to packstack until >>>>>> openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch.rpm and RHEL >>>>>> 6.4 are available in RHN. >>>> >>>> RHEL 6.4 is not yet GA, but we want to make sure that users of RHOS 2.1 >>>> are starting with at least RHEL 6.4 Beta, which is available on RHN/CDN >>>> >>>> So if you're using RHEL 6.3, that error message is valid. 
Best to start >>>> with the RHEL 6.4 Beta. >>>> >>>> The docs do instruct users on enabling the beta repositories: >>>> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html >>>> >>>> But if you're using PackStack, it probably doesn't set the priorities >>>> properly for the Beta channel and it also probably doesn't enable that >>>> channel by default. >>>> >>>> Derek, thoughts on the above? >>> We're currently registering with >>> subscription-manager register --username=username --autosubscribe >>> >>> which on RHEL 6.4 subscribes you to 6.4 Beta, a priority is then set on >>> the repo rhel-server-ost-6-folsom-rpms to ensure openstack packages are >>> installed from here. I think this is the correct behaviour. >> >> Right. That subscription-manager command gets you access to the Beta >> channel, but does not _enable_ it unfortunately. >> >> So we'd need to include a command in packstack like: >> >> yum-config-manager --enable rhel-6-server-beta-rpms > Ok thanks, will create a bug for it. Well... it's not clear we _should_ fix this, since once RHEL 6.4 is GA'd, we of course wouldn't want to use the Beta channel Maybe we shouldn't do it by default, but maybe we need a PackStack config option like: USE_BETA_CHANNEL I'm sure you can think of a better name for the parameter though :) > >> >>>> >>>>>> >>>>>> 2. Continue to find the root cause of the issue I am encountering >>>>>> with openstack-packstack-2012.2.2-0.5.dev318.el6ost.noarch and RHEL >>>>>> 6.3 >>>> >>>> The second issue you hit is valid >>>> (https://bugzilla.redhat.com/show_bug.cgi?id=904670) and does need to be >>>> fixed. It should be fixed in the next few weeks and pushed up to the >>>> RHOS Folsom Preview channels. 
>>>> >>>> Perry >>>> >>>> _______________________________________________ >>>> rhos-list mailing list >>>> rhos-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rhos-list >>>> >>> >> > From gkotton at redhat.com Wed Feb 6 07:06:16 2013 From: gkotton at redhat.com (Gary Kotton) Date: Wed, 06 Feb 2013 09:06:16 +0200 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <5111713A.3090809@redhat.com> References: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> <5111713A.3090809@redhat.com> Message-ID: <511200E8.4090606@redhat.com> On 02/05/2013 10:53 PM, Perry Myers wrote: > On 02/05/2013 03:45 PM, Shixiong Shang (shshang) wrote: >> Hi, experts: >> >> I am trying to install Redhat Openstack 2.0 (Folsom) preview on RHEL 6.4 >> beta. According to the Getting Started Guide, I can use "packstack" tool >> to automate the installation. However, I noticed that a few parameters >> in the tool still refer to "nove-network" as shown below. >> >> CONFIG_NOVA_NETWORK_HOST >> CONFIG_NOVA_NETWORK_PUBIF >> CONFIG_NOVA_NETWORK_PRIVIF >> CONFIG_NOVA_NETWORK_FIXEDRANGE >> CONFIG_NOVA_NETWORK_FLOATRANGE >> >> Does that mean the setup will still run on top of nova-network, or >> nova-network will use quantum client plugin to communicate with Quantum >> server? >> >> In addition, I don't see any quantum related parameters are included in >> the tool. When will it be available? > Right now, PackStack only supports Nova Networking. We will eventually > have support for Quantum in PackStack, but that support is not completed > yet. > > So if you specifically need to use Quantum, you would need to: > * Use PackStack to install w/ Nova Networking > * Manually convert from Nova Networking to Quantum > > Gary (cc'd) has a draft process for this, but it is still a little shaky. > > Gary can you share that process? Perhaps Shixiong can try it out and > give us feedback on whether it works for him or not. 
As Perry said, things are still not 100%. There are two ways of going about it: the first is to install packstack on an all-in-one node; the second is to have a second node that will be used for nova networking. Below is a list of steps to go from Nova networking to Quantum. Please note that we found a few problems that we are dealing with - the IP tables were dropping the DHCP requests from the VM that is spawned. At the moment we have narrowed it down to a few suspicious iptables rules (if deleted then it works). We have an open bug about this and are currently investigating. So the steps are below (I used Open vSwitch):

1. Terminate nova networking (if you do packstack with 2 nodes then you will not need to do this part and the nova-network iptables rules will not be present)
   service openstack-nova-network stop
   chkconfig openstack-nova-network off
2. Install the quantum service
   yum install openstack-quantum
3. Install the Open vSwitch plugin
   yum install openstack-quantum-openvswitch
4. source keystonerc_admin [the quantum installation scripts make use of these environment variables]
5. Configure the quantum service
   quantum-server-setup
6.1. The script above will ask if the nova parameters need to be updated; it will configure Quantum as the networking module for Nova. Restart the nova compute service:
   service openstack-nova-compute restart
6.2 Start the openvswitch service
   service openvswitch start
   chkconfig openvswitch on
   Run "ovs-vsctl show"
6.3 Create the integration bridge
   ovs-vsctl add-br br-int
7. Start the quantum service
   service quantum-server start
   chkconfig quantum-server on
7.1 Start the quantum agent
   service quantum-openvswitch-agent start
   chkconfig quantum-openvswitch-agent on
8. Create a Quantum endpoint with keystone
   keystone service-create --name=quantum --type=network --description="Quantum Service"
   keystone endpoint-create --region RegionOne --service-id --publicurl "http://127.0.0.1:9696" --adminurl "http://127.0.0.1:9696" --internalurl "http://127.0.0.1:9696"
8.1 Create a keystone tenant and user.
9. Now one is able to start using the Quantum CLI
9.1 Create a private network
   quantum net-create private
9.2 Create a subnet
   quantum subnet-create 10.0.0.0/24
10. In order for IPAM to take place one needs to invoke the DHCP agent. Use the DHCP setup tool:
   quantum-dhcp-setup
11. Start the DHCP agent
   service quantum-dhcp-agent start
   chkconfig quantum-dhcp-agent on
12. Validate that a port has been created by the DHCP agent (quantum port-list). This port will have IP address 10.0.0.2. The gateway will be 10.0.0.1.
13. At this stage VMs can be deployed and they will receive IP addresses from the DHCP service.
14. Layer 3 agent
   quantum-l3-setup
   Start the service:
   service quantum-l3-agent start
   chkconfig quantum-l3-agent on

Please note that if you choose to use openvswitch then you will need to patch the following file: /etc/init.d/quantum-ovs-cleanup
Swap the exec line with:
   daemon --user quantum $exec --config-file /usr/share/$proj/$proj-dist.conf --config-file /etc/$proj/$proj.conf --config-file $config &>/dev/null
In addition to this you will need to ensure that this is run on boot:
   chkconfig quantum-ovs-cleanup on

Hopefully in the future packstack will do the majority of the stuff above. Please let me know if you have any problems or questions.

Thanks
Gary

>> Furthermore, does the tool support sub interface with VLAN TAGGING >> enabled, such as eth2.277? > I don't think so, but Derek would need to confirm. I think this would > be a feature enhancement. > > Perry From tbrunell at redhat.com Thu Feb 7 02:03:20 2013 From: tbrunell at redhat.com (Ted Brunell) Date: Wed, 6 Feb 2013 21:03:20 -0500 (EST) Subject: [rhos-list] Nova-network v.s.
Quantum in Openstack preview In-Reply-To: <511200E8.4090606@redhat.com> Message-ID: <1814206347.10248179.1360202600535.JavaMail.root@redhat.com> I was working on getting OpenStack working on a two-node setup using PackStack and then changing out Nova Network with Quantum + Open vSwitch using the RHEL 6.4 kernel. I have attached the step-by-step instructions that I used to accomplish this in case they are of use to anyone else. They are not as streamlined as what Gary posted, but they should accomplish the same thing. R/ Ted Ted Brunell - RHCDS, RHCE, RHCVA Solution Architect Red Hat, Inc. tbrunell at redhat.com ----- Original Message ----- From: "Gary Kotton" To: "Perry Myers" Cc: "Derek Higgins" , "Alvaro Ortega" , "Imtiaz Hussain (ihussain)" , rhos-list at redhat.com, "Randy Tuttle (rantuttl)" , "Ramki Ramakrishnan (ramkiram)" Sent: Wednesday, February 6, 2013 2:06:16 AM Subject: Re: [rhos-list] Nova-network v.s. Quantum in Openstack preview On 02/05/2013 10:53 PM, Perry Myers wrote: > On 02/05/2013 03:45 PM, Shixiong Shang (shshang) wrote: >> Hi, experts: >> >> I am trying to install Redhat Openstack 2.0 (Folsom) preview on RHEL 6.4 >> beta. According to the Getting Started Guide, I can use "packstack" tool >> to automate the installation. However, I noticed that a few parameters >> in the tool still refer to "nova-network" as shown below. >> >> CONFIG_NOVA_NETWORK_HOST >> CONFIG_NOVA_NETWORK_PUBIF >> CONFIG_NOVA_NETWORK_PRIVIF >> CONFIG_NOVA_NETWORK_FIXEDRANGE >> CONFIG_NOVA_NETWORK_FLOATRANGE >> >> Does that mean the setup will still run on top of nova-network, or >> nova-network will use quantum client plugin to communicate with Quantum >> server? >> >> In addition, I don't see any quantum related parameters are included in >> the tool. When will it be available? > Right now, PackStack only supports Nova Networking. We will eventually > have support for Quantum in PackStack, but that support is not completed > yet.
> > So if you specifically need to use Quantum, you would need to: > * Use PackStack to install w/ Nova Networking > * Manually convert from Nova Networking to Quantum > > Gary (cc'd) has a draft process for this, but it is still a little shaky. > > Gary can you share that process? Perhaps Shixiong can try it out and > give us feedback on whether it works for him or not. As Perry said, things are still not 100%. There are two ways of going about it. The first is to install packstack on an all-in-one node, the second is to have a second node that will be used for nova networking. Below is a list of steps that you can do to go from Nova networking to Quantum. Please note that we found a few problems that we are dealing with - the IP tables were dropping the DHCP requests from the VM that is spawned. At the moment we have narrowed it down to a few suspicious iptables rules (if deleted then it works). We have an open bug about this and are currently investigating. So the steps are below (I used Open vSwitch): 1. Terminate nova networking (if you do packstack with 2 nodes then you will not need to do this part and the nova-network iptables rules will not be present) service openstack-nova-network stop chkconfig openstack-nova-network off 2. Install the quantum service yum install openstack-quantum 3. Install the Open vSwitch plugin yum install openstack-quantum-openvswitch 4. source keystonerc_admin [the quantum installation scripts make use of these environment variables] 5. Configure the quantum service quantum-server-setup 6.1. The script above will ask if the nova parameters need to be updated; it will configure Quantum as the networking module for Nova. Restart the nova compute service: service openstack-nova-compute restart 6.2 Start the openvswitch service service openvswitch start chkconfig openvswitch on Run "ovs-vsctl show" 6.3 Create the integration bridge ovs-vsctl add-br br-int 7. Start the quantum service service quantum-server start chkconfig quantum-server on 7.1 Start the quantum agent service quantum-openvswitch-agent start chkconfig quantum-openvswitch-agent on 8. Create a Quantum endpoint with keystone keystone service-create --name=quantum --type=network --description="Quantum Service" keystone endpoint-create --region RegionOne --service-id --publicurl "http://127.0.0.1:9696" --adminurl "http://127.0.0.1:9696" --internalurl "http://127.0.0.1:9696" 8.1 Create a keystone tenant and user. 9. Now one is able to start using the Quantum CLI 9.1 Create a private network quantum net-create private 9.2 Create a subnet quantum subnet-create 10.0.0.0/24 10. In order for IPAM to take place one needs to invoke the DHCP agent. Use the DHCP setup tool: quantum-dhcp-setup 11. Start the DHCP agent service quantum-dhcp-agent start chkconfig quantum-dhcp-agent on 12. Validate that a port has been created by the DHCP agent (quantum port-list). This port will have IP address 10.0.0.2. The gateway will be 10.0.0.1. 13. At this stage VMs can be deployed and they will receive IP addresses from the DHCP service. 14. Layer 3 agent quantum-l3-setup Start the service: service quantum-l3-agent start chkconfig quantum-l3-agent on Please note that if you choose to use openvswitch then you will need to patch the following file: /etc/init.d/quantum-ovs-cleanup Swap the exec line with: daemon --user quantum $exec --config-file /usr/share/$proj/$proj-dist.conf --config-file /etc/$proj/$proj.conf --config-file $config &>/dev/null In addition to this you will need to ensure that this is run on boot: chkconfig quantum-ovs-cleanup on Hopefully in the future packstack will do the majority of the stuff above. Please let me know if you have any problems or questions. Thanks Gary >> Furthermore, does the tool support sub interface with VLAN TAGGING >> enabled, such as eth2.277? > I don't think so, but Derek would need to confirm. I think this would > be a feature enhancement.
> > Perry _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- A non-text attachment was scrubbed... Name: quantum-openvswitch.pdf Type: application/pdf Size: 89609 bytes Desc: not available URL: From gkotton at redhat.com Thu Feb 7 08:58:25 2013 From: gkotton at redhat.com (Gary Kotton) Date: Thu, 07 Feb 2013 10:58:25 +0200 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <1814206347.10248179.1360202600535.JavaMail.root@redhat.com> References: <1814206347.10248179.1360202600535.JavaMail.root@redhat.com> Message-ID: <51136CB1.9050804@redhat.com> On 02/07/2013 04:03 AM, Ted Brunell wrote: > I was working getting OpenStack working on a two-node setup using PackStack and then changing out Nova Network with Quantum + Open vSwitch using the RHEL 6.4 kernel. Great article. I have a few minor comments: For Node 1: 1. There is a typo "the compute nade" 2. Point #5 - I think that packstack takes care of this. 3. In point #6 you do not need to do the quantum client. This is pulled in by openstack-quantum (it is required for the l3 agent) 4. The gedit is just to work around https://bugzilla.redhat.com/show_bug.cgi?id=889774 (you can solve this by setting an environment variable. I think that Martin has taken care of this so hopefully we can drop it from the doc soon. :) 5. For point 10 can you please add a note that the user must have sourced the keystonerc_admin first. This is essential as the quantum server script uses the environment variables to configure the keystone authentication. 6. For point 12 you have a typo "[DATABSE]". Please note that the quantum-server-setup creates a symbolic link for the plugin ini file. This can be seen at /etc/quantum/plugin.ini. The reason for doing this was to ensure that we can have generic startup script for the quantum service 7. Point 13 has a typo "that thy" 8. 
Point 15: This is on the host where the quantum service is running and the user does not need to run this. The nova conf was updated via the quantum-server-setup script 9. Regarding 15 - "/usr/bin/openstack-config --set|--del config_file section [parameter] [value]". That looks fishy and like a bug. I'll check it. You should remove that line from the doc 10. Point 22. The DHCP agent does not require the keystone settings. You can drop the following: auth_url = http://192.168.2.193:35357/v2.0/ admin_username = quantum admin_password = Passw0rd admin_tenant_name = quantum 11. The l3 agent is required if you want to do the following: 1. floating IP support 2. enable the instances to get the metadata from the nova metadata service For Node 2: 1. Point 8. Typo "[DATABASE}". Please note that the database is not used. The user does not need to update this. Point 7 ensures that the qpid hostname is correct; that suffices. Once again great doc. Thanks Gary -------------- next part -------------- A non-text attachment was scrubbed...
Name: quantum-openvswitch.pdf Type: application/pdf Size: 89609 bytes Desc: not available URL: From shshang at cisco.com Thu Feb 7 18:41:40 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Thu, 7 Feb 2013 18:41:40 +0000 Subject: [rhos-list] nova-manage db sync error Message-ID: <6190AA83EB69374DABAE074D7E900F750F6AB652@xmb-aln-x13.cisco.com> Hi, experts: I am trying to issue the following command on nova node to sync with db, but it returns with error:

[root at as-ctl1 bin]# sudo nova-manage db sync
2013-02-07 13:36:25 19362 DEBUG nova.utils [-] backend <module 'nova.db.sqlalchemy.migration' from '/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/migration.pyc'> __get_backend /usr/lib/python2.6/site-packages/nova/utils.py:502
Command failed, please check log for more info
2013-02-07 13:36:25 19362 CRITICAL nova [-] No module named kombu
2013-02-07 13:36:25 19362 TRACE nova Traceback (most recent call last):
2013-02-07 13:36:25 19362 TRACE nova   File "/usr/bin/nova-manage", line 1403, in <module>
2013-02-07 13:36:25 19362 TRACE nova     main()
2013-02-07 13:36:25 19362 TRACE nova   File "/usr/bin/nova-manage", line 1391, in main
2013-02-07 13:36:25 19362 TRACE nova     rpc.cleanup()
2013-02-07 13:36:25 19362 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line 203, in cleanup
2013-02-07 13:36:25 19362 TRACE nova     return _get_impl().cleanup()
2013-02-07 13:36:25 19362 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line 269, in _get_impl
2013-02-07 13:36:25 19362 TRACE nova     _RPCIMPL = importutils.import_module(impl)
2013-02-07 13:36:25 19362 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 58, in import_module
2013-02-07 13:36:25 19362 TRACE nova     __import__(import_str)
2013-02-07 13:36:25 19362 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_kombu.py", line 27, in <module>
2013-02-07 13:36:25 19362 TRACE nova     import kombu
2013-02-07 13:36:25 19362 TRACE nova ImportError: No module named kombu
2013-02-07 13:36:25 19362 TRACE nova
Seems like it is missing python kombu module. I am using Preview plus RHEL 6.4 beta. Any suggestions? Thanks! Shixiong [cid:AC897C5D-48CF-49E3-92BA-4156447E07DA] Shixiong Shang Solution Architect WWSP Digital Media Distribution Advanced Services CCIE R&S - #17235 shshang at cisco.com Phone: +1 919 392 5192 Mobile: +1 919 272 1358 Cisco Systems, Inc. 7200-4 Kit Creek Road RTP, NC 27709-4987 United States Cisco.com !--- Stay Hungry Stay Foolish ---! This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: A94EACAF-A587-477E-8B89-160A48E838C1[1].png Type: image/png Size: 9461 bytes Desc: A94EACAF-A587-477E-8B89-160A48E838C1[1].png URL: From dripton at redhat.com Thu Feb 7 18:50:30 2013 From: dripton at redhat.com (David Ripton) Date: Thu, 07 Feb 2013 13:50:30 -0500 Subject: [rhos-list] nova-manage db sync error In-Reply-To: <6190AA83EB69374DABAE074D7E900F750F6AB652@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F750F6AB652@xmb-aln-x13.cisco.com> Message-ID: <5113F776.5030407@redhat.com> On 02/07/2013 01:41 PM, Shixiong Shang (shshang) wrote: > I am trying to issue the following command on nova node to sync with db, > but it returns with error: > > [root at as-ctl1 bin]# sudo nova-manage db sync > 2013-02-07 13:36:25 19362 DEBUG nova.utils [-] backend 'nova.db.sqlalchemy.migration' from > '/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/migration.pyc'> > __get_backend /usr/lib/python2.6/site-packages/nova/utils.py:502 > Command failed, please check log for more info > 2013-02-07 13:36:25 19362 CRITICAL 
nova [-] No module named kombu > 2013-02-07 13:36:25 19362 TRACE nova Traceback (most recent call last): > 2013-02-07 13:36:25 19362 TRACE nova File "/usr/bin/nova-manage", line > 1403, in > 2013-02-07 13:36:25 19362 TRACE nova main() > 2013-02-07 13:36:25 19362 TRACE nova File "/usr/bin/nova-manage", line > 1391, in main > 2013-02-07 13:36:25 19362 TRACE nova rpc.cleanup() > 2013-02-07 13:36:25 19362 TRACE nova File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line > 203, in cleanup > 2013-02-07 13:36:25 19362 TRACE nova return _get_impl().cleanup() > 2013-02-07 13:36:25 19362 TRACE nova File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line > 269, in _get_impl > 2013-02-07 13:36:25 19362 TRACE nova _RPCIMPL = > importutils.import_module(impl) > 2013-02-07 13:36:25 19362 TRACE nova File > "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", > line 58, in import_module > 2013-02-07 13:36:25 19362 TRACE nova __import__(import_str) > 2013-02-07 13:36:25 19362 TRACE nova File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_kombu.py", > line 27, in > 2013-02-07 13:36:25 19362 TRACE nova import kombu > 2013-02-07 13:36:25 19362 TRACE nova ImportError: No module named kombu > 2013-02-07 13:36:25 19362 TRACE nova > > Seems like it is missing python kombu module. I am using Preview plus > RHEL 6.4 beta. Any suggestions? Look in nova.conf (which I believe will be in /etc/nova). There's a field called rpc_backend which defaults to nova.openstack.common.rpc.impl_kombu Changing that to impl_qpid might fix it. 
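As a sketch, the suggested change amounts to a one-line override in /etc/nova/nova.conf; the section name here is assumed to be the stock [DEFAULT] group:

```ini
[DEFAULT]
# Force the qpid RPC driver so nova-manage does not try to import kombu.
rpc_backend = nova.openstack.common.rpc.impl_qpid
```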
-- David Ripton Red Hat dripton at redhat.com From pbrady at redhat.com Thu Feb 7 18:57:09 2013 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 07 Feb 2013 18:57:09 +0000 Subject: [rhos-list] nova-manage db sync error In-Reply-To: <6190AA83EB69374DABAE074D7E900F750F6AB652@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F750F6AB652@xmb-aln-x13.cisco.com> Message-ID: <5113F905.5010803@redhat.com> On 02/07/2013 06:41 PM, Shixiong Shang (shshang) wrote: > Hi, experts: > > I am trying to issue the following command on nova node to sync with db, but it returns with error: > > [root at as-ctl1 bin]# sudo nova-manage db sync > 2013-02-07 13:36:25 19362 DEBUG nova.utils [-] backend __get_backend /usr/lib/python2.6/site-packages/nova/utils.py:502 > Command failed, please check log for more info > 2013-02-07 13:36:25 19362 CRITICAL nova [-] No module named kombu > 2013-02-07 13:36:25 19362 TRACE nova Traceback (most recent call last): > 2013-02-07 13:36:25 19362 TRACE nova File "/usr/bin/nova-manage", line 1403, in > 2013-02-07 13:36:25 19362 TRACE nova main() > 2013-02-07 13:36:25 19362 TRACE nova File "/usr/bin/nova-manage", line 1391, in main > 2013-02-07 13:36:25 19362 TRACE nova rpc.cleanup() > 2013-02-07 13:36:25 19362 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line 203, in cleanup > 2013-02-07 13:36:25 19362 TRACE nova return _get_impl().cleanup() > 2013-02-07 13:36:25 19362 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line 269, in _get_impl > 2013-02-07 13:36:25 19362 TRACE nova _RPCIMPL = importutils.import_module(impl) > 2013-02-07 13:36:25 19362 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 58, in import_module > 2013-02-07 13:36:25 19362 TRACE nova __import__(import_str) > 2013-02-07 13:36:25 19362 TRACE nova File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_kombu.py", line 27, in > 2013-02-07 13:36:25 19362 TRACE nova import kombu > 2013-02-07 13:36:25 19362 TRACE nova ImportError: No module named kombu > 2013-02-07 13:36:25 19362 TRACE nova > > Seems like it is missing python kombu module. I am using Preview plus RHEL 6.4 beta. Any suggestions? This is fallout from our rearrangement of config variables. Everything used to be in /etc/nova/nova.conf and that's the default file read by nova-manage. However we moved some distribution-specific config vars, which end users shouldn't need to touch, out to /usr/share/nova/nova-dist.conf, and that's where rpc_backend is set to "qpid". Unfortunately nova-manage is unaware of that. So to work around the issue I suggest adding this to /etc/nova/nova.conf: rpc_backend = nova.openstack.common.rpc.impl_qpid The same applies for /etc/cinder/cinder.conf. We'll fix this issue for the next version of these packages. thanks, Pádraig. From shshang at cisco.com Thu Feb 7 18:59:54 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Thu, 7 Feb 2013 18:59:54 +0000 Subject: [rhos-list] nova-manage db sync error In-Reply-To: <5113F905.5010803@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F750F6AB7E7@xmb-aln-x13.cisco.com> Got it! Thanks, Padraig! Thanks, David!
Shixiong On 2/7/13 1:57 PM, "P?draig Brady" wrote: >On 02/07/2013 06:41 PM, Shixiong Shang (shshang) wrote: >> Hi, experts: >> >> I am trying to issue the following command on nova node to sync with >>db, but it returns with error: >> >> [root at as-ctl1 bin]# sudo nova-manage db sync >> 2013-02-07 13:36:25 19362 DEBUG nova.utils [-] backend >'nova.db.sqlalchemy.migration' from >>'/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/migration.pyc'> >>__get_backend /usr/lib/python2.6/site-packages/nova/utils.py:502 >> Command failed, please check log for more info >> 2013-02-07 13:36:25 19362 CRITICAL nova [-] No module named kombu >> 2013-02-07 13:36:25 19362 TRACE nova Traceback (most recent call last): >> 2013-02-07 13:36:25 19362 TRACE nova File "/usr/bin/nova-manage", >>line 1403, in >> 2013-02-07 13:36:25 19362 TRACE nova main() >> 2013-02-07 13:36:25 19362 TRACE nova File "/usr/bin/nova-manage", >>line 1391, in main >> 2013-02-07 13:36:25 19362 TRACE nova rpc.cleanup() >> 2013-02-07 13:36:25 19362 TRACE nova File >>"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", >> line 203, in cleanup >> 2013-02-07 13:36:25 19362 TRACE nova return _get_impl().cleanup() >> 2013-02-07 13:36:25 19362 TRACE nova File >>"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", >> line 269, in _get_impl >> 2013-02-07 13:36:25 19362 TRACE nova _RPCIMPL = >>importutils.import_module(impl) >> 2013-02-07 13:36:25 19362 TRACE nova File >>"/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", >>line 58, in import_module >> 2013-02-07 13:36:25 19362 TRACE nova __import__(import_str) >> 2013-02-07 13:36:25 19362 TRACE nova File >>"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_kombu.py >>", line 27, in >> 2013-02-07 13:36:25 19362 TRACE nova import kombu >> 2013-02-07 13:36:25 19362 TRACE nova ImportError: No module named kombu >> 2013-02-07 13:36:25 19362 TRACE nova >> >> Seems like it is missing python kombu 
module. I am using Preview plus >>RHEL 6.4 beta. Any suggestions? > >This is fallout from our rearrangement of config variables. > >Everything used to be in /etc/nova/nova.conf >and that's the default file read by nova-manage. > >However we changed some distribution specific config vars, >that end users wouldn't need to touch out to >/usr/share/nova/nova-dist.conf, >and that's where rpc_backend is set to "qpid". >Unfortunately nova-manage is unaware of that. > >So to work around the issue I suggest adding this to /etc/nova/nova.conf >rpc_backend = nova.openstack.common.rpc.impl_qpid > >The sample applies for /etc/cinder/cinder.conf > >We'll fix this issue for the next version of these packages. > >thanks, >P?draig. From dneary at redhat.com Thu Feb 7 19:00:56 2013 From: dneary at redhat.com (Dave Neary) Date: Thu, 07 Feb 2013 20:00:56 +0100 Subject: [rhos-list] OpenStack Summit CfP deadline approaching Message-ID: <5113F9E8.2050806@redhat.com> Hi everyone, The OpenStack Summit call for proposals has its deadline on February 15th, at the end of next week. This email is for two things: First, to encourage anyone who plans to submit a proposal and has not yet done so to do so ASAP (if you'd like peer review, I would be happy to review proposals). Second: if you have already put in a proposal, I would like to centralise them so that we can (a) identify things we should be talking about for which we don't have a proposal yet, and (b) as an organisation understand when proposals get accepted and rejected to improve our chances next time. Is there a good place for people to post their proposals, or should they just go here? Thanks! Dave. 
-- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From shshang at cisco.com Thu Feb 7 19:25:59 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Thu, 7 Feb 2013 19:25:59 +0000 Subject: [rhos-list] openstack-db command error Message-ID: <6190AA83EB69374DABAE074D7E900F750F6AB8AD@xmb-aln-x13.cisco.com> Hi, experts: I am running "openstack-db" command to initiate database table for nova and saw this error: [dmd at as-msg1 ~]$ sudo openstack-db --init --service nova --password nova --rootpw mysql Verified connectivity to MySQL. Creating 'nova' database. ERROR 1396 (HY000) at line 2: Operation CREATE USER failed for 'nova'@'localhost' It traced back to the following two commands in the "openstack-db" script under /usr/bin: CREATE USER '$APP'@'localhost' IDENTIFIED BY '${MYSQL_APP_PW}'; CREATE USER '$APP'@'%' IDENTIFIED BY '${MYSQL_APP_PW}'; I tried to manually create DB and user in mysql and it still gave me the same error. CREATE USER 'nova@'localhost' IDENTIFIED BY 'nova'; CREATE USER 'nova'@'%' IDENTIFIED BY 'nova'; If I used the following command instead, it passed successfully. CREATE USER 'nova' IDENTIFIED BY 'nova'; Is it normal? Thanks! Shixiong [cid:339726B9-FB1D-4F23-9820-263F5D7D86E1] Shixiong Shang Solution Architect WWSP Digital Media Distribution Advanced Services CCIE R&S - #17235 shshang at cisco.com Phone: +1 919 392 5192 Mobile: +1 919 272 1358 Cisco Systems, Inc. 7200-4 Kit Creek Road RTP, NC 27709-4987 United States Cisco.com !--- Stay Hungry Stay Foolish ---! This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message. 
From pbrady at redhat.com Thu Feb 7 19:46:55 2013 From: pbrady at redhat.com (Pádraig Brady) Date: Thu, 07 Feb 2013 19:46:55 +0000 Subject: [rhos-list] openstack-db command error In-Reply-To: <6190AA83EB69374DABAE074D7E900F750F6AB8AD@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F750F6AB8AD@xmb-aln-x13.cisco.com> Message-ID: <511404AF.20406@redhat.com> On 02/07/2013 07:25 PM, Shixiong Shang (shshang) wrote: > Hi, experts: > > I am running "openstack-db" command to initiate database table for nova and saw this error: > > [dmd at as-msg1 ~]$ sudo openstack-db --init --service nova --password nova --rootpw mysql > Verified connectivity to MySQL. > Creating 'nova' database. > ERROR 1396 (HY000) at line 2: Operation CREATE USER failed for 'nova'@'localhost' > > It traced back to the following two commands in the "openstack-db" script under /usr/bin: > CREATE USER '$APP'@'localhost' IDENTIFIED BY '${MYSQL_APP_PW}'; > CREATE USER '$APP'@'%' IDENTIFIED BY '${MYSQL_APP_PW}'; > > > I tried to manually create DB and user in mysql and it still gave me the same error. > CREATE USER 'nova@'localhost' IDENTIFIED BY 'nova'; > CREATE USER 'nova'@'%' IDENTIFIED BY 'nova'; > > If I used the following command instead, it passed successfully. > CREATE USER 'nova' IDENTIFIED BY 'nova'; > > Is it normal? Thanks! What version of mysql are you using? I'll note also that we have a new packstack installer for setting up these things rather than the lower level openstack-db commands. thanks, Pádraig.
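ERROR 1396 from CREATE USER commonly means the account already exists, which is what Pádraig confirms later in this thread. A hedged sketch for checking: the helper below is a hypothetical convenience that only generates the SQL, to be piped into mysql by hand.

```shell
# Hypothetical helper: print a query listing existing accounts for a given
# service user. Run as: user_exists_sql nova | mysql -u root -p mysql
user_exists_sql() {
  printf "SELECT User, Host FROM mysql.user WHERE User = '%s';\n" "$1"
}

user_exists_sql nova
```

Any rows returned explain why the openstack-db script's CREATE USER statements fail.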
From shshang at cisco.com Thu Feb 7 20:00:02 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Thu, 7 Feb 2013 20:00:02 +0000 Subject: [rhos-list] openstack-db command error In-Reply-To: <511404AF.20406@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F750F6ABA9E@xmb-aln-x13.cisco.com> I am using 5.1.66 sudo mysql -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 124 Server version: 5.1.66 Source distribution On 2/7/13 2:46 PM, "Pádraig Brady" wrote: >On 02/07/2013 07:25 PM, Shixiong Shang (shshang) wrote: >> Hi, experts: >> >> I am running "openstack-db" command to initiate database table for nova >>and saw this error: >> >> [dmd at as-msg1 ~]$ sudo openstack-db --init --service nova --password >>nova --rootpw mysql >> Verified connectivity to MySQL. >> Creating 'nova' database. >> ERROR 1396 (HY000) at line 2: Operation CREATE USER failed for >>'nova'@'localhost' >> >> It traced back to the following two commands in the "openstack-db" >>script under /usr/bin: >> CREATE USER '$APP'@'localhost' IDENTIFIED BY '${MYSQL_APP_PW}'; >> CREATE USER '$APP'@'%' IDENTIFIED BY '${MYSQL_APP_PW}'; >> >> >> I tried to manually create DB and user in mysql and it still gave me >>the same error. >> CREATE USER 'nova@'localhost' IDENTIFIED BY 'nova'; >> CREATE USER 'nova'@'%' IDENTIFIED BY 'nova'; >> >> If I used the following command instead, it passed successfully. >> CREATE USER 'nova' IDENTIFIED BY 'nova'; >> >> Is it normal? Thanks! > >What version of mysql are you using? > >I'll note also that we have a new packstack installer >for setting up these things rather than the lower >level openstack-db commands. > >thanks, >Pádraig.
From sellis at redhat.com Fri Feb 8 02:24:16 2013 From: sellis at redhat.com (Steven Ellis) Date: Fri, 08 Feb 2013 15:24:16 +1300 Subject: [rhos-list] RHOS at Linux.conf.au last week in Canberra Message-ID: <511461D0.9000701@redhat.com> I just wanted to thank the various Red Hatters at LCA in Canberra last week. A number of attendees commented on the sheer number of Red Hat people who appear to be involved in OpenStack, which was great. I'd recommend anyone who is interested to take a look at the schedule as a number of the talks are now available in video form - http://linux.conf.au/programme/schedule - https://lca2013.linux.org.au/wiki/Miniconfs/CloudDistributedStorageandHighAvailability - https://lca2013.linux.org.au/wiki/Miniconfs/OpenStack - http://linux.conf.au/schedule/30131/view_talk?day=friday - http://mirror.linux.org.au/linux.conf.au/2013 - Videos (not all online yet) I'd like to give a particular shout out to Steve and Angus who presented a session on Heat. Some Miniconf sessions I'd recommend are - https://lca2013.linux.org.au/wiki/Miniconfs/OpenStack#NeCTAR_Research_Cloud:_OpenStack_in_production - https://lca2013.linux.org.au/wiki/Miniconfs/OpenStack#Bare_metal_provisioning_with_OpenStack - https://lca2013.linux.org.au/wiki/Miniconfs/CloudDistributedStorageandHighAvailability#The_Grand_Distributed_Storage_Debate:_GlusterFS_and_Ceph_going_head_to_head A couple of comments from some of the OpenStack leads present at the event. - Red Hat have provided great governance and guidance, in particular helping clean up the duplication across various projects - They need an easier way to spin up Red Hat instances for testing the recent OpenStack builds My personal view is we need to play some catch-up within the OpenStack community. There is a lot of Ubuntu and a lot of Ceph, leaving Gluster/RHS out in the cold, and very little mention of RHEL or Fedora.
Just my thoughts Steve -- Steven Ellis Solution Architect - Red Hat New Zealand *X:* (85) 48151 *M:* +64 21 321 673 *T:* +64 9 927 8856 / +61 3 9624 8151 *E:* sellis at redhat.com From tbrunell at redhat.com Fri Feb 8 03:36:26 2013 From: tbrunell at redhat.com (Ted Brunell) Date: Thu, 7 Feb 2013 22:36:26 -0500 (EST) Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <51136CB1.9050804@redhat.com> Message-ID: <645701759.10712879.1360294586448.JavaMail.root@redhat.com> Gary, Thanks for the great feedback. I have incorporated it into the attached doc. For step 5 (the mysql password step), I added that in because when running the quantum-server-setup command in step 10, I would be prompted for the mysql password. If I remember properly, the password was blank instead of what was entered into the packstack answer file. Setting the password in step 5 seemed to fix that issue. I have not tried this in the past three weeks, so maybe it is fixed already. R/ Ted Ted Brunell - RHCDS, RHCE, RHCVA Solution Architect Red Hat, Inc. tbrunell at redhat.com ----- Original Message ----- From: "Gary Kotton" To: "Ted Brunell" Cc: rhos-list at redhat.com, "rh-openstack-dev" , "Perry Myers" Sent: Thursday, February 7, 2013 3:58:25 AM Subject: Re: [rhos-list] Nova-network v.s. Quantum in Openstack preview On 02/07/2013 04:03 AM, Ted Brunell wrote: > I was working on getting OpenStack working on a two-node setup using PackStack and then changing out Nova Network with Quantum + Open vSwitch using the RHEL 6.4 kernel. Great article. I have a few minor comments: For Node 1: 1. There is a typo "the compute nade" 2. Point #5 - I think that packstack takes care of this. 3. In point #6 you do not need to do the quantum client. This is pulled in by openstack-quantum (it is required for the l3 agent) 4.
The gedit is just to work around https://bugzilla.redhat.com/show_bug.cgi?id=889774 (you can solve this by setting an environment variable. I think that Martin has taken care of this so hopefully we can drop it from the doc soon. :) 5. For point 10 can you please add a note that the user must have sourced the keystonerc_admin first. This is essential as the quantum server script uses the environment variables to configure the keystone authentication. 6. For point 12 you have a typo "[DATABSE]". Please note that the quantum-server-setup creates a symbolic link for the plugin ini file. This can be seen at /etc/quantum/plugin.ini. The reason for doing this was to ensure that we can have a generic startup script for the quantum service 7. Point 13 has a typo "that thy" 8. Point 15: This is on the host where the quantum service is running and the user does not need to run this. The nova conf was updated via the quantum-server-setup script 9. Regarding 15 - "/usr/bin/openstack-config --set|--del config_file section [parameter] [value]". That looks fishy and like a bug. I'll check it. You should remove that line from the doc 10. Point 22. The DHCP agent does not require the keystone settings. You can drop the following: auth_url = http://192.168.2.193:35357/v2.0/ admin_username = quantum admin_password = Passw0rd admin_tenant_name = quantum 11. The l3 agent is required if you want to do the following: 1. floating IP support 2. enable the instances to get the meta data from the nova metadata service For Node 2: 1. Point 8. Typo "[DATABASE}". Please note that the database is not used. The user does not need to update this. Point 7 ensures that the qpid hostname is correct; that suffices. Once again great doc.
Thanks Gary From shshang at cisco.com Sat Feb 9 05:11:17 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Sat, 9 Feb 2013 05:11:17 +0000 Subject: [rhos-list] KVM error Message-ID: <6190AA83EB69374DABAE074D7E900F750F6AEB6F@xmb-aln-x13.cisco.com> Hi, experts: I am trying to launch a VM from Horizon Dashboard, but the instance was started, but then paused for unknown reason. The libvirt log showed: [root at as-cmp1 libvirt]# tail -f libvirtd.log 2013-02-09 04:35:04.257+0000: 2754: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet0" not in key map 2013-02-09 04:35:04.267+0000: 2754: error : virNetDevGetIndex:653 : Unable to get index for interface vnet0: No such device 2013-02-09 04:49:41.427+0000: 2756: warning : virCgroupMoveTask:885 : no vm cgroup in controller 3 2013-02-09 04:49:41.427+0000: 2756: warning : virCgroupMoveTask:885 : no vm cgroup in controller 4 2013-02-09 04:49:41.427+0000: 2756: warning : virCgroupMoveTask:885 : no vm cgroup in controller 6 2013-02-09 04:57:29.579+0000: 2755: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet0" not in key map 2013-02-09 04:57:29.591+0000: 2755: error : virNetDevGetIndex:653 : Unable to get index for interface vnet0: No such device 2013-02-09 04:57:45.724+0000: 2751: error : virNetSocketReadWire:1184 : End of file while reading data: Input/output error 2013-02-09 04:57:49.193+0000: 2751: error : virNetSocketReadWire:1184 : End of file while reading data: Input/output error 2013-02-09 04:57:49.194+0000: 2751: error : virNetSocketReadWire:1184 : End of file while reading data: Input/output error The KVM instance log showed: [root at as-cmp1 qemu]# tail -f instance-00000005.log qemu: terminating on signal 15 from pid 2746 2013-02-09 05:03:25.426+0000: shutting down 2013-02-09 05:03:29.078+0000: starting up LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name instance-00000005 -S -M rhel6.4.0 -cpu 
Nehalem,+rdtscp,+vmx,+ht,+ss,+acpi,+ds,+vme -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid c533cf9f-8d11-4204-8ba6-5b803b7311c4 -smbios type=1,manufacturer=Red Hat,, Inc.,product=Red Hat OpenStack Nova,version=2012.2.2-8.el6ost,serial=421511de-c214-d27f-ab02-78ec0144b8c2,uuid=c533cf9f-8d11-4204-8ba6-5b803b7311c4 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000005.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/instance-00000005/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/nova/instances/instance-00000005/disk.local,if=none,id=drive-virtio-disk1,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:f1:bc:a8,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/instance-00000005/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 char device redirected to /dev/pts/1 KVM internal error. 
Suberror: 2 extra data[0]: 80000003 extra data[1]: 80000603 rax 0000000000000023 rbx 00000000000000fd rcx 0000000000000000 rdx 00000000000003d1 rsi 0000000000000003 rdi 000000000000c992 rsp 000000000000be76 rbp 0000000000000000 r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 rip 0000000000006f05 rflags 00000016 cs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) gdt fc558/37 idt 0/3ff cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 Have you seen this issue before? I am using RHEL 6.4 beta + Folsom Preview. Thanks! Shixiong
From shshang at cisco.com Sat Feb 9 06:01:43 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Sat, 9 Feb 2013 06:01:43 +0000 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <511200E8.4090606@redhat.com> References: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> <5111713A.3090809@redhat.com>,<511200E8.4090606@redhat.com> Message-ID: Hi, Gary: Would you please elaborate on the issues caused by IP table rules? Thanks a lot! Shixiong On Feb 6, 2013, at 2:06 AM, "Gary Kotton" wrote: > On 02/05/2013 10:53 PM, Perry Myers wrote: >> On 02/05/2013 03:45 PM, Shixiong Shang (shshang) wrote: >>> Hi, experts: >>> >>> I am trying to install Redhat Openstack 2.0 (Folsom) preview on RHEL 6.4 >>> beta. According to the Getting Started Guide, I can use "packstack" tool >>> to automate the installation. However, I noticed that a few parameters >>> in the tool still refer to "nova-network" as shown below. >>> >>> CONFIG_NOVA_NETWORK_HOST >>> CONFIG_NOVA_NETWORK_PUBIF >>> CONFIG_NOVA_NETWORK_PRIVIF >>> CONFIG_NOVA_NETWORK_FIXEDRANGE >>> CONFIG_NOVA_NETWORK_FLOATRANGE >>> >>> Does that mean the setup will still run on top of nova-network, or >>> nova-network will use quantum client plugin to communicate with Quantum >>> server? >>> >>> In addition, I don't see any quantum related parameters are included in >>> the tool. When will it be available? >> Right now, PackStack only supports Nova Networking. We will eventually >> have support for Quantum in PackStack, but that support is not completed >> yet. >> >> So if you specifically need to use Quantum, you would need to: >> * Use PackStack to install w/ Nova Networking >> * Manually convert from Nova Networking to Quantum >> >> Gary (cc'd) has a draft process for this, but it is still a little shaky.
>> >> Gary can you share that process? Perhaps Shixiong can try it out and >> give us feedback on whether it works for him or not. > > As Perry said, things are still not 100%. There are two ways of going about it. The first is to install packstack on an all in one node, the second is to have a second node that will be used for nova networking. Below is a list of steps that you can do to go from Nova networking to Quantum. > Please note that we found a few problems that we are dealing with - the IP tables were dropping the DHCP request from the VM that is spawned. At the moment we have narrowed it down to a few suspicious IP table rules (if deleted then it works). We have an open bug about this and are currently investigating. > > So the steps are below (I used OpenvSwitch): > > 1. Terminate nova networking (if you do packstack with 2 nodes then you will not need to do this part and the nova-network iptable rules will not be about) > service openstack-nova-network stop > chkconfig openstack-nova-network off > 2. Install quantum service > yum install openstack-quantum > 3. Install OpenvSwitch plugin > yum install openstack-quantum-openvswitch > 4. source keystonerc_admin [the quantum installation scripts make use of these environment variables] > 5. Configure quantum service > quantum-server-setup > 6.1. The script above will ask if the nova parameters need to be updated. Restart the nova compute service. The script will configure Quantum as the networking module for Nova. > service openstack-nova-compute restart > 6.2 Start the openvswitch service > service openvswitch start > chkconfig openvswitch on > Run "ovs-vsctl show" > 6.3 Create the integration bridge > ovs-vsctl add-br br-int > 7. Start quantum service > service quantum-server start > chkconfig quantum-server on > 7.1 Start quantum agent > service quantum-openvswitch-agent start > chkconfig quantum-openvswitch-agent on > 8.
Create a Quantum endpoint with keystone > keystone service-create --name=quantum --type=network --description="Quantum Service" > keystone endpoint-create --region RegionOne --service-id --publicurl "http://127.0.0.1:9696" --adminurl "http://127.0.0.1:9696" --internalurl "http://127.0.0.1:9696" > 8.1 Create a keystone tenant and user. > 9. Now one is able to start to use the Quantum CLI > 9.1 Create a private network > quantum net-create private > 9.2 Create a subnet > quantum subnet-create 10.0.0.0/24 > 10. In order for the IPAM to take place one needs to invoke the DHCP agent. Use DHCP setup tool: > quantum-dhcp-setup > 11. Start DHCP agent > service quantum-dhcp-agent start > chkconfig quantum-dhcp-agent on > 12. Validate that a port has been created by the DHCP agent (quantum port-list). This port will have IP address 10.0.0.2. The gateway will be 10.0.0.1. > 13. At this stage VM's can be deployed and they will receive IP addresses from the DHCP service. > 14. Layer 3 agent > quantum-l3-setup > Start service > service quantum-l3-agent start > chkconfig quantum-l3-agent on > > Please note that if you choose to use openvswitch then you will need to patch the following file: > /etc/init.d/quantum-ovs-cleanup > Swap the exec line with: > daemon --user quantum $exec --config-file /usr/share/$proj/$proj-dist.conf --config-file /etc/$proj/$proj.conf --config-file $config &>/dev/null > In addition to this you will need to ensure that this is run on boot: > chkconfig quantum-ovs-cleanup on > > Hopefully in the future packstack will do the majority of the stuff above. > > Please let me know if you have any problems or questions. > Thanks > Gary >>> Furthermore, does the tool support sub interface with VLAN TAGGING >>> enabled, such as eth2.277? >> I don't think so, but Derek would need to confirm. I think this would >> be a feature enhancement.
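For reference, Gary's numbered steps above condense into a single sketch (OpenvSwitch variant). It is wrapped in a function so nothing runs until called deliberately; review each line against the steps in the thread before using it on a real host.

```shell
# Condensed sketch of the nova-network -> Quantum (OVS) switch-over steps.
# Destructive on a real host; define now, call only after review.
migrate_to_quantum() {
  service openstack-nova-network stop
  chkconfig openstack-nova-network off
  yum -y install openstack-quantum openstack-quantum-openvswitch
  source keystonerc_admin                  # setup scripts read these env vars
  quantum-server-setup                     # also points nova at Quantum
  service openstack-nova-compute restart
  service openvswitch start && chkconfig openvswitch on
  ovs-vsctl add-br br-int                  # integration bridge
  service quantum-server start && chkconfig quantum-server on
  service quantum-openvswitch-agent start && chkconfig quantum-openvswitch-agent on
  quantum-dhcp-setup
  service quantum-dhcp-agent start && chkconfig quantum-dhcp-agent on
  quantum-l3-setup
  service quantum-l3-agent start && chkconfig quantum-l3-agent on
}
```

The keystone service/endpoint creation and network/subnet creation steps are intentionally left out, since they need values (service id, tenant) from your own deployment.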
>> >> Perry > From pbrady at redhat.com Sun Feb 10 03:28:44 2013 From: pbrady at redhat.com (Pádraig Brady) Date: Sun, 10 Feb 2013 03:28:44 +0000 Subject: [rhos-list] openstack-db command error In-Reply-To: <511404AF.20406@redhat.com> References: <6190AA83EB69374DABAE074D7E900F750F6AB8AD@xmb-aln-x13.cisco.com> <511404AF.20406@redhat.com> Message-ID: <511713EC.9050807@redhat.com> On 02/07/2013 07:46 PM, Pádraig Brady wrote: > On 02/07/2013 07:25 PM, Shixiong Shang (shshang) wrote: >> Hi, experts: >> >> I am running "openstack-db" command to initiate database table for nova and saw this error: >> >> [dmd at as-msg1 ~]$ sudo openstack-db --init --service nova --password nova --rootpw mysql >> Verified connectivity to MySQL. >> Creating 'nova' database. >> ERROR 1396 (HY000) at line 2: Operation CREATE USER failed for 'nova'@'localhost' >> >> It traced back to the following two commands in the "openstack-db" script under /usr/bin: >> CREATE USER '$APP'@'localhost' IDENTIFIED BY '${MYSQL_APP_PW}'; >> CREATE USER '$APP'@'%' IDENTIFIED BY '${MYSQL_APP_PW}'; >> >> >> I tried to manually create DB and user in mysql and it still gave me the same error. >> CREATE USER 'nova@'localhost' IDENTIFIED BY 'nova'; >> CREATE USER 'nova'@'%' IDENTIFIED BY 'nova'; >> >> If I used the following command instead, it passed successfully. >> CREATE USER 'nova' IDENTIFIED BY 'nova'; >> >> Is it normal? Thanks! > > What version of mysql are you using? > > I'll note also that we have a new packstack installer > for setting up these things rather than the lower > level openstack-db commands. So looking further into this, the obtuse "ERROR 1396 (HY000)" essentially means the user already exists in the mysql DB. So I'm guessing that because the nova-manage command initially failed because of the previously discussed settings issues, you may have tried to manually clean the database and users, which is a little tricky to do robustly in mysql.
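One way to make that manual cleanup less error-prone is to generate the DROP USER statements instead of typing the quoting by hand. A sketch — the helper name is hypothetical; the user name and hosts come from the openstack-db script quoted earlier:

```shell
# Emits the statements that remove a half-created service user, so the
# CREATE USER statements in openstack-db can succeed again.
drop_user_sql() {
  local app=$1
  printf "DROP USER '%s'@'localhost';\n" "$app"
  printf "DROP USER '%s'@'%%';\n" "$app"
}

drop_user_sql nova
```

Pipe the output into `mysql -u root -p` after confirming the accounts actually exist (on MySQL 5.1, DROP USER errors out for an absent account).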
With the correct config in place you would not have run into this issue, but we can do better in this case. For the next version of openstack-db we'll indicate clearly that the user still exists in the database and suggest using the --drop mode to clear the database before trying again. As a quick way to supply the correct config for the nova-manage version you have currently installed, you could do the following. A caveat to note is to `rm /etc/nova-manage.conf` when upgrading to the next version of openstack-nova-common, so that config precedence rules are honored correctly. ln -nsf /usr/share/nova/nova-dist.conf /etc/nova-manage.conf So with the correct config in place you should be able to clear everything with the following command, before following the documented instructions: sudo openstack-db --drop --service nova --password nova --rootpw mysql Note the same operations need to be done for cinder, so just replace nova with cinder above. thanks, Pádraig. p.s. I'll note again that testing is currently concentrating on the packstack installer and we'll over time stop supporting these lower level scripts. From gkotton at redhat.com Tue Feb 12 15:56:09 2013 From: gkotton at redhat.com (Gary Kotton) Date: Tue, 12 Feb 2013 17:56:09 +0200 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: References: <6190AA83EB69374DABAE074D7E900F750F6A67CC@xmb-aln-x13.cisco.com> <5111713A.3090809@redhat.com>, <511200E8.4090606@redhat.com> Message-ID: <511A6619.2090509@redhat.com> On 02/09/2013 08:01 AM, Shixiong Shang (shshang) wrote: > Hi, Gary: > > Would you please elaborate on the issues caused by IP table rules? Sorry for taking a while to get back to you. After installing RHEL, running packstack with traditional nova networking and then patching for Quantum has caused some conflicting rules with the iptables. We are currently trying to isolate this and ensure that it will be dealt with
The best way to work around this would be to delete the iptables and then reboot. This will ensure that nova compute and the api will create the relevant rules without having any unwanted rules. Thanks Gary > > Thanks a lot! > > Shixiong > > > > On Feb 6, 2013, at 2:06 AM, "Gary Kotton" wrote: > >> On 02/05/2013 10:53 PM, Perry Myers wrote: >>> On 02/05/2013 03:45 PM, Shixiong Shang (shshang) wrote: >>>> Hi, experts: >>>> >>>> I am trying to install Redhat Openstack 2.0 (Folsom) preview on RHEL 6.4 >>>> beta. According to the Getting Started Guide, I can use "packstack" tool >>>> to automate the installation. However, I noticed that a few parameters >>>> in the tool still refer to "nove-network" as shown below. >>>> >>>> CONFIG_NOVA_NETWORK_HOST >>>> CONFIG_NOVA_NETWORK_PUBIF >>>> CONFIG_NOVA_NETWORK_PRIVIF >>>> CONFIG_NOVA_NETWORK_FIXEDRANGE >>>> CONFIG_NOVA_NETWORK_FLOATRANGE >>>> >>>> Does that mean the setup will still run on top of nova-network, or >>>> nova-network will use quantum client plugin to communicate with Quantum >>>> server? >>>> >>>> In addition, I don't see any quantum related parameters are included in >>>> the tool. When will it be available? >>> Right now, PackStack only supports Nova Networking. We will eventually >>> have support for Quantum in PackStack, but that support is not completed >>> yet. >>> >>> So if you specifically need to use Quantum, you would need to: >>> * Use PackStack to install w/ Nova Networking >>> * Manually convert from Nova Networking to Quantum >>> >>> Gary (cc'd) has a draft process for this, but it is still a little shaky. >>> >>> Gary can you share that process? Perhaps Shixiong can try it out and >>> give us feedback on whether it works for him or not. >> As Perry said, things are still not 100%. There are two ways about going about it. The first is to install packstack on an all in one node, the second is to have a second node that will be used for nova networking. 
Below is a list of steps that you can do to go from Nova networking to Quantum. >> Please note that we found a few problems that we are dealing with - the IP tables were dropping the DHCP request from the VM that is spawned. At the moment we have narrowed it down to a few suspicious IP table rules (if deleted then it works). We have an open bug about this and are currently investigating. >> >> So the steps are below (I used OpenvSwitch): >> >> 1. Terminate nova networking (if you do packstack with 2 nodes then you will not need to do this part and the nova-network iptable rules will not be about) >> service openstack-nova-network stop >> chkconfig openstack-nova-network off >> 2. Install quantum service >> yum install openstack-quantum >> 3. Install OpenvSwicth plugin >> yum install openstack-quantum-openvswitch >> 4. source keystonerc_admin [the quantum installation scripts make use of these environment variables] >> 5. Configure quantum service >> quantum-server-setup >> 6.1. The scrip above will ask if the nova parameters need to be updated. Restart the nova compute service. The script will configure Quantum as the networking module for Nova. >> service openstack-nova-compute-restart >> 6.2 Start the openvswitch service >> service openvswitch start >> chkconfig openvswitch on >> Run "ovs-vsctl show" >> 6.3 Create the integration bridge >> ovs-vsctl add-br br-int >> 7. Start quantum service >> service quantum-server start >> chkconfig quantum-server on >> 7.1 Start quantum agent >> service quantum-openvswitch-agent start >> chkconfig quantum-openvswitch-agent on >> 8. Create a Quantum endpoint with keystone >> keystone service-create --name=quantum --type=network --description="Quantum Service" >> keystone endpoint-create --region RegionOne --service-id --publicurl "http://127.0.0.1:9696" --adminurl "http://127.0.0.1:9696" --internalurl "http://127.0.0.1:9696" >> 8.1 Create a keystone tenant and user. >> 9. 
Now one is able to start to use the Quantum CLI >> 9.1 Create a private network >> quantum net-create private >> 9.2 Create a subnet >> quantum subnet-create 10.0.0.0/24 >> 10. In order for the IPAM to take place one needs to invoke the DHCP agent. Use DHCP setup tool: >> quantum-dhcp-setup >> 11. Start DHCP agent >> service quantum-dhcp-agent start >> chkconfg quantum-dhcp-agent on >> 12. Validate that a port has been created by the DHCP agent (quantum port-list). This port will have IP address 10.0.0.2. The gateway will 10.0.0.1. >> 13. At this stage VM's can be deployed and they will receive IP addresses from the DHCP service. >> 14. Layer 3 agent >> quantum-l3-setup >> Start service >> service quantum-l3-agent start >> chkconfig quantum-l3-agent on >> >> Please note that if you choose to use openvswitch then you will need to patch the following file: >> /etc/init.d/quantum-ovs-cleanup >> Swap the exec line with: >> daemon --user quantum $exec --config-file /usr/share/$proj/$proj-dist.conf --config-file /etc/$proj/$proj.conf --config-file $config&>/dev/null >> In addition to this you will need to ensure that this is run on boot: >> chkconfig quantum-ovs-cleanup on >> >> Hopefully in the future packstack will do the majority of the stuff above. >> >> Please let me know if you have any problems or questions. >> Thanks >> Gary >>>> Furthermore, does the tool support sub interface with VLAN TAGGING >>>> enabled, such as eth2.277? >>> I don't think so, but Derek would need to confirm. I think this would >>> be a feature enhancement. >>> >>> Perry From shshang at cisco.com Thu Feb 14 14:57:50 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Thu, 14 Feb 2013 14:57:50 +0000 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <511A6619.2090509@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> Hi, Gary: Thank you so much for the clarification! What you described below makes perfect sense. 
I will keep it in mind when I verify the iptable settings on my side. Btw, I ran into a KVM issue a couple of days ago. As a result, I cannot instantiate a VM on top of Preview. I tried several different images, but no luck. Right now, this problem became a show stopper for us. I am not sure whether rhos-list is the right email alias for me to ask for help. If not, then would you please kindly point me in the right direction? Thanks again, everybody! Happy Valentine's Day! Shixiong On 2/12/13 10:56 AM, "Gary Kotton" wrote: >On 02/09/2013 08:01 AM, Shixiong Shang (shshang) wrote: >> Hi, Gary: >> >> Would you please elaborate on the issues caused by IP table rules? > >Sorry for taking a while to get back to you. After installing RHEL, >running packstack with traditional nova networking and then patching for >Quantum has caused some conflicting rules with the iptables. We are >currently trying to isolate this and ensure that it will be dealt with >correctly. The best way to work around this would be to delete the >iptables and then reboot. This will ensure that nova compute and the api >will create the relevant rules without having any unwanted rules. > >Thanks >Gary >> >> Thanks a lot! >> >> Shixiong >> >> -------------- next part -------------- An embedded message was scrubbed... From: "Shixiong Shang (shshang)" Subject: KVM error Date: Sat, 9 Feb 2013 00:11:16 -0500 Size: 32023 URL: From gkotton at redhat.com Thu Feb 14 15:13:05 2013 From: gkotton at redhat.com (Gary Kotton) Date: Thu, 14 Feb 2013 17:13:05 +0200 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> Message-ID: <511CFF01.4080909@redhat.com> On 02/14/2013 04:57 PM, Shixiong Shang (shshang) wrote: > Hi, Gary: > > Thank you so much for the clarification! What you described below makes > perfect sense.
I will keep it in mind when I verify the iptable settings > on my side. ok, great. let me know if you need any assistance with it > > Btw, I run into a KVM issue a couple of days ago. Can you please shed some light here. Due to various SELinux issues I do the following: sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config At the moment I am using the following image: glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public 1 --copy-from https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img Thanks Gary > As a result, I cannot > instantiate VM on top of Preview. I tried several different images, but no > luck. Right now, this problem became show stopper for us. I am not sure > whether rho-list is the right email alias for me to ask for help. If not, > then would you please kindly point me to the right direction? > > Thanks again, everybody! Happy Valentine's Day! > > Shixiong > > > > > > > > > On 2/12/13 10:56 AM, "Gary Kotton" wrote: > >> On 02/09/2013 08:01 AM, Shixiong Shang (shshang) wrote: >>> Hi, Gary: >>> >>> Would you please elaborate on the issues caused by IP table rules? >> Sorry for taking a while to get back to you. After installing RHEL, >> running packstack with traditional nova networking and then patching for >> Quantum has caused some conflicting rules with the iptables. We are >> currently trying to isolate this and ensure that it will be dealt with >> correctly. The best way to work around this would be to delete the >> iptables and then reboot. This will ensure that nova compute and the api >> will create the relevant rules without having any unwanted rules. >> >> Thanks >> Gary >>> Thanks a lot!
>>> >>> Shixiong >>> >>> From prmarino1 at gmail.com Fri Feb 15 01:19:33 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 14 Feb 2013 20:19:33 -0500 Subject: [rhos-list] clustering Cinder with Gluster Message-ID: Hello I've been thinking of ways to make cinder redundant without shared storage and I think I have two possible quick answers using Gluster, but before I go down this rabbit hole in my test environment I wanted to see if anyone has tried this before or if anyone could point out any obvious problems. Now I know that support for exporting iSCSI block devices natively is in the Gluster road map but it doesn't look like it will happen soon. Here is what I'm thinking. Scenario 1 Similar to the examples in the guide, I'm thinking of creating a disk image with the truncate command. The big difference is I'm planning to create it on a Gluster share and creating a clustered LVM volume and managing it with the HA addon. It should be fairly simple for me to create an init script to create and remove loop devices via losetup. In this scenario the thing that concerns me is the possibility of a system getting fenced on boot before the Gluster volume is ready. Scenario 2 This one is a little simpler since I'm very familiar with keepalived. I could create a VRRP instance with a floating VIP. When a node becomes primary it could initiate a script to start the loop device then start the Cinder service. On fault, or if the node becomes backup, I could have it ensure cinder has been stopped and then remove the loop device. There are two things that I'm worried about with this scenario: 1) Since keepalived doesn't understand the concept of a quorum, if they went into split brain mode this could possibly cause a significant problem. I can mitigate this risk by connecting the cinder nodes with a pair of dedicated crossover cables (preferably run via separate cable trays) but I can never absolutely eliminate the possibility.
I can also add a secondary check script that does a file-based secondary heartbeat but that would be a little more complicated and wouldn't help if Gluster was split-brained as well. 2) When a fault happens in keepalived there is a lag before the backup notices and takes over, based on the heartbeat interval (approximately interval x 3), so there will be a 3 second or more delay before the second node attempts to take over. There are several patches for sub-second intervals, some of which I'm familiar with (I wrote one of them :-) ), but they add their own issues because they can make the system try to react too fast and may not allow sufficient time for the failed node to cleanly detach from the volume. Scenario 2 is the easiest to implement and despite the concerns it's the one I think is the safest, mostly because I don't like to fence nodes because a single process or volume has an issue. My personal experience with fencing is that it usually causes more problems than it solves, although admittedly my opinion of fencing has been tainted by an Oracle stretch cluster I used to support which liked to fence nodes any time someone halfway around the world sneezed. So does anyone have any opinions or comments? From shshang at cisco.com Fri Feb 15 02:54:08 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Fri, 15 Feb 2013 02:54:08 +0000 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <511CFF01.4080909@redhat.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> Hi, Gary: I tried to put SELinux in PERMISSIVE mode and spawned a new VM using the cirros image today, but still no luck. I spent the rest of the day searching Red Hat bugs based on numerous error msgs in various log files. Found tons of stuff, but most of them didn't seem to be relevant and helpful.
One thing that caught my eye is a case submitted back in 2010 and updated early this year. The KVM error message in the instance console log is similar to the one in my case. Based on the description, it seems like the KVM crash is caused by a defective CPU or unsupported CPU model. I will try a different machine for better luck. https://bugzilla.redhat.com/show_bug.cgi?id=639208 Will keep you posted if I can root cause the problem. In the meantime, if anything pops into your mind, please let me know. Thanks a lot for your help! Shixiong On Feb 14, 2013, at 10:13 AM, Gary Kotton > wrote: On 02/14/2013 04:57 PM, Shixiong Shang (shshang) wrote: Hi, Gary: Thank you so much for the clarification! What you described below makes perfect sense. I will keep it in mind when I verify the iptable settings on my side. ok, great. let me know if you need any assistance with it Btw, I run into a KVM issue a couple of days ago. Can you please shed some light here. Due to various SELinux issues I do the following: sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config At the moment I am using the following image: glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public 1 --copy-from https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From nux at li.nux.ro Fri Feb 15 08:41:23 2013 From: nux at li.nux.ro (Nux!) Date: Fri, 15 Feb 2013 08:41:23 +0000 Subject: [rhos-list] clustering Cinder with Gluster In-Reply-To: References: Message-ID: On 15.02.2013 01:19, Paul Robert Marino wrote: > Hello > > Ive been thinking of ways to make cinder redundant without shared > storage and I think I have two possible quick answer using Gluster, Hello, I've also thought about this, but decided to let Openstack mature some more for the moment, so I have no results to speak of.
My idea was to use Gluster's NFS server capability with Cinder's NFS driver[1], so basically files on Gluster NFS shares, no block level stuff; maybe use round-robin or some other sort of load balancing with it. HTH Lucian [1] - http://docs.openstack.org/developer/cinder/api/cinder.volume.nfs.html -- Sent from the Delta quadrant using Borg technology! Nux! www.nux.ro From gkotton at redhat.com Fri Feb 15 09:31:35 2013 From: gkotton at redhat.com (Gary Kotton) Date: Fri, 15 Feb 2013 11:31:35 +0200 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> Message-ID: <511E0077.9060001@redhat.com> On 02/15/2013 04:54 AM, Shixiong Shang (shshang) wrote: > Hi, Gary: > > I tried to put SELinux in PERMISSIVE mode and spawned up new VM using > cirros image today, but still no luck. I spent the rest of the day > searching Redhat bugs based on numerous error msgs in various log > files. Found tons of stuff, but most of them didn't seem to be > relevant and helpful. > > One thing caught my eyes is a case submitted back in 2010 and updated > early this year. The KVM error message in instance console log is > similar to the one in my case. Based on the description, seems like > the KVM crash is caused by defective CPU or unsupported CPU model. I > will try different machine for better luck. > > https://bugzilla.redhat.com/show_bug.cgi?id=639208 > > Will keep you posted if I can root cause the problem. In the > meanwhile, if anything pops to your mind, please let me know. Thanks for the update. Which RHEL version are you using? I am using RHEL 6.4. 
I do the following: - install from disk - subscription registration - I then make sure that the RHEL 6.4 beta repo is added: [rhel64beta] name=RHEL64 BETA #baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/source baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/Server/x86_64/os/ enabled=1 gpgcheck=0 (please do not do a yum update after this - it causes conflicts with packstack) - then I install the latest packstack - run packstack and everything works (bar the minor iptable issue we have discussed) Thanks Gary > > Thanks a lot for your help! > > Shixiong > > > > > > > > > > > > > On Feb 14, 2013, at 10:13 AM, Gary Kotton > > wrote: > >> On 02/14/2013 04:57 PM, Shixiong Shang (shshang) wrote: >>> Hi, Gary: >>> >>> Thank you so much for the clarification! What you described below makes >>> perfect sense. I will keep it in mind when I verify the iptable settings >>> on my side. >> >> ok, great. let me know if you need any assistance with it >>> >>> Btw, I run into a KVM issue a couple of days ago. >> >> Can you please shed some light here. Due to various SELinux issues I >> do the following: >> >> sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config >> >> At the moment I am using the following image: >> >> glance image-create --name cirros --disk-format qcow2 >> --container-format bare --is-public 1 --copy-from >> https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img >> >> Thanks >> Gary >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Fri Feb 15 11:04:58 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 15 Feb 2013 06:04:58 -0500 Subject: [rhos-list] Nova-network v.s.
Quantum in Openstack preview In-Reply-To: <511E0077.9060001@redhat.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> Message-ID: <511E165A.9040104@redhat.com> On 02/15/2013 04:31 AM, Gary Kotton wrote: > On 02/15/2013 04:54 AM, Shixiong Shang (shshang) wrote: >> Hi, Gary: >> >> I tried to put SELinux in PERMISSIVE mode and spawned up new VM using >> cirros image today, but still no luck. I spent the rest of the day >> searching Redhat bugs based on numerous error msgs in various log >> files. Found tons of stuff, but most of them didn't seem to be >> relevant and helpful. >> >> One thing caught my eyes is a case submitted back in 2010 and updated >> early this year. The KVM error message in instance console log is >> similar to the one in my case. Based on the description, seems like >> the KVM crash is caused by defective CPU or unsupported CPU model. I >> will try different machine for better luck. >> >> https://bugzilla.redhat.com/show_bug.cgi?id=639208 Shixiong, Is it a kvm crash during operation or does it fail to start at all? Just as a sanity check, can you run virsh capabilities and give us the output? And also: lsmod | grep kvm >> Will keep you posted if I can root cause the problem. In the >> meanwhile, if anything pops to your mind, please let me know. > > Thanks for the update. Which RHEL version are you using? I am using RHEL > 6.4. 
I do the following: > - install from disk > - subscripion registration > - i then make sure that the RHEL 6.4 beta repo is added: > > [rhel64beta] > name=RHEL64 BETA > #baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/source > baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/Server/x86_64/os/ > enabled=1 > gpgcheck=0 > (please do not do a yum update after this - it causes conflicts with packstack) > - then i install the latest packstack > - run packstack and everything works (bar the minor iptable issue we have discussed) One hiccup in the above... Because the RHEL 6.4 GA release is imminent, the RHEL 6.4 Beta Repos have been emptied. It's part of the normal release engineering process. So for a short time, you might be stuck with getting updated packages or reinstalling from scratch on a new host. This issue should clear up in the next week as we get GA packages out the door. I'll send a more general note to the list about it. Perry From pmyers at redhat.com Fri Feb 15 11:21:32 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 15 Feb 2013 06:21:32 -0500 Subject: [rhos-list] clustering Cinder with Gluster In-Reply-To: References: Message-ID: <511E1A3C.6030207@redhat.com> On 02/14/2013 08:19 PM, Paul Robert Marino wrote: > Hello > > Ive been thinking of ways to make cinder redundant without shared > storage and I think I have two possible quick answer using Gluster, > but before I go down this rabbit hole in my test environment I wanted > to see if any one has tried this before or if any one could point out > any obvious problems. > Now I know that support for exporting ISCSI block devices natively is > in the Gluster road map but it doesn't look like it will happen soon. > here is what I'm thinking Using iSCSI from Gluster or even falling back to NFS in Gluster isn't strictly necessary. 
The right thing to do is to have a native gluster driver for Cinder, which our engineers are busy working on. I've cc'd Eric Harney from my team who has been pushing forward on that effort with help from the Red Hat Storage (Gluster) team. We're tracking inclusion of this Cinder Gluster driver here for RHOS 2.1: https://bugzilla.redhat.com/show_bug.cgi?id=892686 It's not guaranteed it'll get into 2.1, because it still needs to get in upstream by G-3 in order for that to happen. But that's the target. In the future, this Cinder Gluster Driver can be updated to utilize the qemu native support for Gluster, which should make things perform better. But that functionality is not yet in RHEL 6, so we wouldn't be able to use it quite yet. Hopefully by RHEL 6.5 it will be there. > Scenario 1 > similar to the examples in the guide I'm thinking of creating a a disk > image created with the truncate command. > The big difference is I'm planing to create it on a Gluster share and > creating a clustered LVM volume and managing it with the HA addon. What does clvm provide here? Ideally, use the Cinder Driver mentioned above, but in the absence of that, if you had a Gluster storage environment set up and had that glusterfs mounted on every Compute Node, I don't see why clvm would be necessary, and in fact clvm would create a lot of architectural complexity here. The redundancy would be in the Gluster cluster itself (i.e. once you point your compute node at one of the gluster bricks to mount the fs, my understanding is that if a single brick fails, as long as the data you need is accessible via replication on another brick, things will just fail over w/o you needing add'l HA software like RHEL HA/CLVM) I've cc'd a Gluster expert (Vijay) who can correct me if I have that horribly wrong :) > it should be fairly simple for me to create an init script to create > and remove loop devices via losetup. 
> in this scenario the thing that concerns me is the possibility of a > system getting fenced on boot before the the Gluster volume is ready. > > Scenario 2 > This one is a little simpler since I'm very familiar with keepalived I > could create a VRRP instance with a floating VIP. > when a node becomes primary it could initiate a script to start the > loop device then start the Cinder service. > on fault or if the node becomes backup I could have it ensure cinder > has been stopped then remove the loop device. > there are two things that I'm worried with this scenario > 1) Since keepalived doesn't understand the concept of a quorum if they > went into split brain mode this could possibly cause a significant > problem. I can mitigate this risk by connecting the cinder nodes with > a pair of dedicated cross over cables (preferably run via separate > cable trays) but it can never be absolutely eliminate the possibility. > I can also add a secondary check script that does a file based > secondary heartbeat but that would be a little more complicated and > wouldn't help if Gluster was split brained as well. > 2) When a fault happens in keepalived there is either a lag before the > backup notices and takes over based on the time in the heartbeat > interval (approximately interval x 3) so there will be a 3 second or > more delay before the second node attempts to take over. there are > several patches for sub second intervals some of which I'm familiar > with (I wrote one of them :-) ) but they add their own issue because > they can make the system try to react too fast and may not allow > sufficient time for the failed node to cleanly detach from the volume. > > > scenario 2 is the easiest to implement and despite the concerns its > the one i think is the safest mostly because I don't like to fence > nodes because a single process or volume has an issue. 
my personal > experiences with fencing is it usually causes more problems than it > solves although admittedly my opinion of fencing has been tainted by a > Oracle stretch cluster I use to support which liked to fence nodes any > time someone half way around the world sneezed. > > So does any one have any opinions or comments? I think given how Gluster works and provides redundancy by having multiple storage bricks and replication of the data, the above doesn't seem like it is necessary and would provide a lot of overhead/complication to the configuration. But I'll let the Cinder/Gluster folks on the thread here weigh in and let me know if that's not correct Perry From pmyers at redhat.com Fri Feb 15 11:51:11 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 15 Feb 2013 06:51:11 -0500 Subject: [rhos-list] Updated RHOS Folsom Preview Packages Message-ID: <511E212F.6040205@redhat.com> The Red Hat OpenStack team has been hard at work, fixing bugs and making things easier to use. We've been both taking updates/patches from the upstream stable branches for Folsom, as well as selectively backporting patches from Grizzly for bug fixes that don't quite fit stable branch inclusion, but are important for our customers. We've pushed an update to the Red Hat external repositories (RHN and CDN). The errata for the updates are available here: http://rhn.redhat.com/errata/RHBA-2013-0260.html http://rhn.redhat.com/errata/RHSA-2013-0253.html The RHBA (bug advisory) covers updates to pretty much all of the core OpenStack packages and a few dependency updates as well. The RHSA (security advisory) is specifically for Keystone. Take a look at each advisory to see the details of what changed with this update. And the documentation team has also been pushing many updates and clarifications to our installation docs. 
Please take a look: https://access.redhat.com/knowledge/docs/Red_Hat_OpenStack_Preview/ One other thing to note (that some of you may have already run into), is that the RHOS Folsom Preview requires RHEL 6.4. Right now, the documentation instructs you to use the RHEL 6.4 Beta channels available from our content mirrors (RHN/CDN). But since we are prepping to get RHEL 6.4 GA out the door, these Beta repositories have been emptied out. So for a short time (a week or so), we'll be a little stuck, as we won't yet have 6.4 GA RPMs accessible, and we also lost the 6.4 Beta RPMs. So just be patient, 6.4 GA should be just around the corner, and then we'll be back in business. And finally, as a reminder, if you find issues please let us know on list here or feel free to file bugs on the public Red Hat OpenStack bug tracker: https://bugzilla.redhat.com/enter_bug.cgi?product=Red+Hat+OpenStack Cheers, and happy hacking. Perry From thomas.oulevey at cern.ch Fri Feb 15 12:28:36 2013 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Fri, 15 Feb 2013 13:28:36 +0100 Subject: [rhos-list] clustering Cinder with Gluster In-Reply-To: <511E1A3C.6030207@redhat.com> References: <511E1A3C.6030207@redhat.com> Message-ID: <511E29F4.80000@cern.ch> Hello, On 02/14/2013 08:19 PM, Paul Robert Marino wrote: > > In the future, this Cinder Gluster Driver can be updated to utilize the > qemu native support for Gluster, which should make things perform > better. But that functionality is not yet in RHEL 6, so we wouldn't be > able to use it quite yet. Hopefully by RHEL 6.5 it will be there. Is there a bugzilla ticket where we can track qemu>= 1.3 inclusion for RHEL 6.5 ? cheers, Thomas From shshang at cisco.com Fri Feb 15 15:06:52 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Fri, 15 Feb 2013 15:06:52 +0000 Subject: [rhos-list] Nova-network v.s. 
Quantum in Openstack preview In-Reply-To: <511E165A.9040104@redhat.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> <511E165A.9040104@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> Hi, Perry: Thanks a lot for chiming in! KVM crashed during operation. VM stayed in "starting" mode for about 10 secs and then went straight to "paused" mode. Here is the output you are looking for and appreciate the head up of RHEL 6.4 GA release! I cannot wait! Shixiong Last login: Thu Feb 14 21:03:17 2013 from 13.23.225.252 [dmd at as-cmp1 ~]$ virsh capabilities ad4b57cf-bf15-06ef-a735-0be06abefc82 x86_64 Nehalem Intel tcp selinux 0 hvm 32 /usr/libexec/qemu-kvm rhel6.4.0 pc rhel6.3.0 rhel6.2.0 rhel6.1.0 rhel6.0.0 rhel5.5.0 rhel5.4.4 rhel5.4.0 /usr/libexec/qemu-kvm hvm 64 /usr/libexec/qemu-kvm rhel6.4.0 pc rhel6.3.0 rhel6.2.0 rhel6.1.0 rhel6.0.0 rhel5.5.0 rhel5.4.4 rhel5.4.0 /usr/libexec/qemu-kvm [dmd at as-cmp1 ~]$ lsmod | grep kvm kvm_intel 53484 0 kvm 315450 1 kvm_intel [dmd at as-cmp1 ~]$ On Feb 15, 2013, at 6:04 AM, Perry Myers > wrote: On 02/15/2013 04:31 AM, Gary Kotton wrote: On 02/15/2013 04:54 AM, Shixiong Shang (shshang) wrote: Hi, Gary: I tried to put SELinux in PERMISSIVE mode and spawned up new VM using cirros image today, but still no luck. I spent the rest of the day searching Redhat bugs based on numerous error msgs in various log files. Found tons of stuff, but most of them didn't seem to be relevant and helpful. One thing caught my eyes is a case submitted back in 2010 and updated early this year. The KVM error message in instance console log is similar to the one in my case. Based on the description, seems like the KVM crash is caused by defective CPU or unsupported CPU model. I will try different machine for better luck. 
https://bugzilla.redhat.com/show_bug.cgi?id=639208 Shixiong, Is it a kvm crash during operation or does it fail to start at all? Just as a sanity check, can you run virsh capabilities and give us the output? And also: lsmod | grep kvm Will keep you posted if I can root cause the problem. In the meanwhile, if anything pops to your mind, please let me know. Thanks for the update. Which RHEL version are you using? I am using RHEL 6.4. I do the following: - install from disk - subscripion registration - i then make sure that the RHEL 6.4 beta repo is added: [rhel64beta] name=RHEL64 BETA #baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/source baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/Server/x86_64/os/ enabled=1 gpgcheck=0 (please do not do a yum update after this - it causes conflicts with packstack) - then i install the latest packstack - run packstack and everything works (bar the minor iptable issue we have discussed) One hiccup in the above... Because the RHEL 6.4 GA release is imminent, the RHEL 6.4 Beta Repos have been emptied. It's part of the normal release engineering process. So for a short time, you might be stuck with getting updated packages or reinstalling from scratch on a new host. This issue should clear up in the next week as we get GA packages out the door. I'll send a more general note to the list about it. Perry -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Fri Feb 15 15:09:40 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 15 Feb 2013 10:09:40 -0500 Subject: [rhos-list] Nova-network v.s. 
Quantum in Openstack preview In-Reply-To: <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> <511E165A.9040104@redhat.com> <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> Message-ID: <511E4FB4.4020503@redhat.com> On 02/15/2013 10:06 AM, Shixiong Shang (shshang) wrote: > Hi, Perry: > > Thanks a lot for chiming in! KVM crashed during operation. VM stayed in > "starting" mode for about 10 secs and then went straight to "paused" mode. > > Here is the output you are looking for and appreciate the head up of > RHEL 6.4 GA release! I cannot wait! Thanks for the input below. I don't see anything obvious. We might need to bring in a member of the kvm team to help out here. Karen, is there someone that could take a look at this backtrace? https://www.redhat.com/archives/rhos-list/2013-February/msg00046.html Cheers, Perry > Shixiong > > > > > Last login: Thu Feb 14 21:03:17 2013 from 13.23.225.252 > [dmd at as-cmp1 ~]$ virsh capabilities > > > > ad4b57cf-bf15-06ef-a735-0be06abefc82 > > x86_64 > Nehalem > Intel > > > > > > > > > > > > > > > > tcp > > > > > > > > > > > > > > > > > > > > selinux > 0 > > > > > hvm > > 32 > /usr/libexec/qemu-kvm > rhel6.4.0 > pc > rhel6.3.0 > rhel6.2.0 > rhel6.1.0 > rhel6.0.0 > rhel5.5.0 > rhel5.4.4 > rhel5.4.0 > > > > /usr/libexec/qemu-kvm > > > > > > > > > > > > > > hvm > > 64 > /usr/libexec/qemu-kvm > rhel6.4.0 > pc > rhel6.3.0 > rhel6.2.0 > rhel6.1.0 > rhel6.0.0 > rhel5.5.0 > rhel5.4.4 > rhel5.4.0 > > > > /usr/libexec/qemu-kvm > > > > > > > > > > > > > > > [dmd at as-cmp1 ~]$ lsmod | grep kvm > kvm_intel 53484 0 > kvm 315450 1 kvm_intel > [dmd at as-cmp1 ~]$ > > > > > > > > > > > > On Feb 15, 2013, at 6:04 AM, Perry Myers > > wrote: > >> On 02/15/2013 04:31 AM, Gary Kotton wrote: >>> On 02/15/2013 04:54 AM, 
Shixiong Shang (shshang) wrote: >>>> Hi, Gary: >>>> >>>> I tried to put SELinux in PERMISSIVE mode and spawned up new VM using >>>> cirros image today, but still no luck. I spent the rest of the day >>>> searching Redhat bugs based on numerous error msgs in various log >>>> files. Found tons of stuff, but most of them didn't seem to be >>>> relevant and helpful. >>>> >>>> One thing caught my eyes is a case submitted back in 2010 and updated >>>> early this year. The KVM error message in instance console log is >>>> similar to the one in my case. Based on the description, seems like >>>> the KVM crash is caused by defective CPU or unsupported CPU model. I >>>> will try different machine for better luck. >>>> >>>> https://bugzilla.redhat.com/show_bug.cgi?id=639208 >> >> Shixiong, >> >> Is it a kvm crash during operation or does it fail to start at all? >> >> Just as a sanity check, can you run virsh capabilities and give us the >> output? >> >> And also: >> lsmod | grep kvm >> >>>> Will keep you posted if I can root cause the problem. In the >>>> meanwhile, if anything pops to your mind, please let me know. >>> >>> Thanks for the update. Which RHEL version are you using? I am using RHEL >>> 6.4. I do the following: >>> - install from disk >>> - subscripion registration >>> - i then make sure that the RHEL 6.4 beta repo is added: >>> >>> [rhel64beta] >>> name=RHEL64 BETA >>> #baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/source >>> baseurl=http://download.eng.tlv.redhat.com/pub/rhel/rel-eng/RHEL6.4-20130130.0/6.4/Server/x86_64/os/ >>> enabled=1 >>> gpgcheck=0 >>> (please do not do a yum update after this - it causes conflicts with >>> packstack) >>> - then i install the latest packstack >>> - run packstack and everything works (bar the minor iptable issue we >>> have discussed) >> >> One hiccup in the above... >> >> Because the RHEL 6.4 GA release is imminent, the RHEL 6.4 Beta Repos >> have been emptied. 
It's part of the normal release engineering process. >> So for a short time, you might be stuck with getting updated packages >> or reinstalling from scratch on a new host. >> >> This issue should clear up in the next week as we get GA packages out >> the door. I'll send a more general note to the list about it. >> >> Perry > From christopher.cobb at nesassociates.com Fri Feb 15 15:42:00 2013 From: christopher.cobb at nesassociates.com (Christopher Cobb) Date: Fri, 15 Feb 2013 15:42:00 +0000 Subject: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 Message-ID: <566CCFB492940B4DA07EB97FE99C9D7F0C786B@nes-exdb-01.nesassociates.com> Good day, All! I just did a fresh install of RHES 6.3 and am working through the Getting Started Guide. I'm on page 16 where it asks me to do: yum repolist. I get the following results: # yum repolist Loaded plugins: product-id, subscription-manager Updating certificate-based repositories. rhel-6-server-beta-rpms | 3.4 kB 00:00 rhel-6-server-beta-rpms/primary_db | 1.2 MB 00:02 rhel-6-server-cf-tools-1-rpms | 2.8 kB 00:00 rhel-6-server-cf-tools-1-rpms/primary_db | 18 kB 00:00 rhel-6-server-rhev-agent-rpms | 2.8 kB 00:00 rhel-6-server-rhev-agent-rpms/primary_db | 11 kB 00:00 rhel-6-server-rpms | 3.7 kB 00:00 rhel-6-server-rpms/primary_db | 16 MB 00:41 https://cdn.redhat.com/content/dist/rhel/server/6/6Server/i386/openstack/folsom/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" Trying other mirror. repo id repo name status rhel-6-server-beta-rpms Red Hat Enterprise Linux 6 Server Beta (RPMs) 831 rhel-6-server-cf-tools-1-rpms Red Hat CloudForms Tools for RHEL 6 (RPMs) 30 rhel-6-server-rhev-agent-rpms Red Hat Enterprise Virtualization Agents for RHEL 6 Server (RPMs) 16 rhel-6-server-rpms Red Hat Enterprise Linux 6 Server (RPMs) 6,812 rhel-server-ost-6-folsom-rpms Red Hat OpenStack Folsom Preview (RPMs) 0 repolist: 7,689 I've tried yum clean metadata and yum clean all. 
They execute without error but I still get the same results with yum repolist. I can't figure out what I've got wrong. Any suggestions? cc From bkearney at redhat.com Fri Feb 15 15:49:45 2013 From: bkearney at redhat.com (Bryan Kearney) Date: Fri, 15 Feb 2013 10:49:45 -0500 Subject: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 In-Reply-To: <566CCFB492940B4DA07EB97FE99C9D7F0C786B@nes-exdb-01.nesassociates.com> References: <566CCFB492940B4DA07EB97FE99C9D7F0C786B@nes-exdb-01.nesassociates.com> Message-ID: <511E5919.40509@redhat.com> On 02/15/2013 10:42 AM, Christopher Cobb wrote: > Good day, All! > > I just did a fresh install of RHES 6.3 and am working through the Getting Started Guide. I'm on page 16 where it asks me to do: yum repolist. I get the following results: > > # yum repolist > Loaded plugins: product-id, subscription-manager > Updating certificate-based repositories. > rhel-6-server-beta-rpms | 3.4 kB 00:00 > rhel-6-server-beta-rpms/primary_db | 1.2 MB 00:02 > rhel-6-server-cf-tools-1-rpms | 2.8 kB 00:00 > rhel-6-server-cf-tools-1-rpms/primary_db | 18 kB 00:00 > rhel-6-server-rhev-agent-rpms | 2.8 kB 00:00 > rhel-6-server-rhev-agent-rpms/primary_db | 11 kB 00:00 > rhel-6-server-rpms | 3.7 kB 00:00 > rhel-6-server-rpms/primary_db | 16 MB 00:41 > https://cdn.redhat.com/content/dist/rhel/server/6/6Server/i386/openstack/folsom/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" > Trying other mirror. > repo id repo name status > rhel-6-server-beta-rpms Red Hat Enterprise Linux 6 Server Beta (RPMs) 831 > rhel-6-server-cf-tools-1-rpms Red Hat CloudForms Tools for RHEL 6 (RPMs) 30 > rhel-6-server-rhev-agent-rpms Red Hat Enterprise Virtualization Agents for RHEL 6 Server (RPMs) 16 > rhel-6-server-rpms Red Hat Enterprise Linux 6 Server (RPMs) 6,812 > rhel-server-ost-6-folsom-rpms Red Hat OpenStack Folsom Preview (RPMs) 0 > repolist: 7,689 > > I've tried yum clean metadata and yum clean all. 
They execute without error but I still get the same results with yum repolist. > > I can't figure out what I've got wrong. Any suggestions? > > cc Are you installed on an i386 machine? if so, the bits are not available there (I think). You need to deploy on x86_64. -- bk From pmyers at redhat.com Fri Feb 15 15:54:27 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 15 Feb 2013 10:54:27 -0500 Subject: [rhos-list] clustering Cinder with Gluster In-Reply-To: <511E29F4.80000@cern.ch> References: <511E1A3C.6030207@redhat.com> <511E29F4.80000@cern.ch> Message-ID: <511E5A33.5090902@redhat.com> On 02/15/2013 07:28 AM, Thomas Oulevey wrote: > Hello, > > On 02/14/2013 08:19 PM, Paul Robert Marino wrote: >> >> In the future, this Cinder Gluster Driver can be updated to utilize the >> qemu native support for Gluster, which should make things perform >> better. But that functionality is not yet in RHEL 6, so we wouldn't be >> able to use it quite yet. Hopefully by RHEL 6.5 it will be there. > > Is there a bugzilla ticket where we can track qemu>= 1.3 inclusion for > RHEL 6.5 ? Well, the bug isn't specifically qemu>=1.3 It could be that they're going to backport the qemu/gluster functionality... I don't know. But the bug specific to qemu/gluster interop for 6.5 is here: https://bugzilla.redhat.com/show_bug.cgi?id=848070 From christopher.cobb at nesassociates.com Fri Feb 15 15:56:47 2013 From: christopher.cobb at nesassociates.com (Christopher Cobb) Date: Fri, 15 Feb 2013 15:56:47 +0000 Subject: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 In-Reply-To: <511E5919.40509@redhat.com> References: <566CCFB492940B4DA07EB97FE99C9D7F0C786B@nes-exdb-01.nesassociates.com> <511E5919.40509@redhat.com> Message-ID: <566CCFB492940B4DA07EB97FE99C9D7F0C788E@nes-exdb-01.nesassociates.com> Thank you. It's an Atom N2600. According to this spec sheet: http://ark.intel.com/products/58916/Intel-Atom-Processor-N2600-%281M-Cache-1_6-GHz%29 it should have "Intel 64" architecture. 
I guess not everyone agrees... :( -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Bryan Kearney Sent: Friday, February 15, 2013 10:50 AM To: rhos-list at redhat.com Subject: Re: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 On 02/15/2013 10:42 AM, Christopher Cobb wrote: > Good day, All! > > I just did a fresh install of RHES 6.3 and am working through the Getting Started Guide. I'm on page 16 where it asks me to do: yum repolist. I get the following results: > > # yum repolist > Loaded plugins: product-id, subscription-manager Updating > certificate-based repositories. > rhel-6-server-beta-rpms | 3.4 kB 00:00 > rhel-6-server-beta-rpms/primary_db | 1.2 MB 00:02 > rhel-6-server-cf-tools-1-rpms | 2.8 kB 00:00 > rhel-6-server-cf-tools-1-rpms/primary_db | 18 kB 00:00 > rhel-6-server-rhev-agent-rpms | 2.8 kB 00:00 > rhel-6-server-rhev-agent-rpms/primary_db | 11 kB 00:00 > rhel-6-server-rpms | 3.7 kB 00:00 > rhel-6-server-rpms/primary_db | 16 MB 00:41 > https://cdn.redhat.com/content/dist/rhel/server/6/6Server/i386/openstack/folsom/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" > Trying other mirror. > repo id repo name status > rhel-6-server-beta-rpms Red Hat Enterprise Linux 6 Server Beta (RPMs) 831 > rhel-6-server-cf-tools-1-rpms Red Hat CloudForms Tools for RHEL 6 (RPMs) 30 > rhel-6-server-rhev-agent-rpms Red Hat Enterprise Virtualization Agents for RHEL 6 Server (RPMs) 16 > rhel-6-server-rpms Red Hat Enterprise Linux 6 Server (RPMs) 6,812 > rhel-server-ost-6-folsom-rpms Red Hat OpenStack Folsom Preview (RPMs) 0 > repolist: 7,689 > > I've tried yum clean metadata and yum clean all. They execute without error but I still get the same results with yum repolist. > > I can't figure out what I've got wrong. Any suggestions? > > cc Are you installed on an i386 machine? if so, the bits are not available there (I think). You need to deploy on x86_64. 
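For anyone hitting the same 404: the failing URL embeds the installed architecture (note the i386 path segment), so a quick sanity check is to compare what was actually installed against what the CPU can do. A minimal sketch, assuming a Linux host with /proc/cpuinfo:

```shell
# What architecture is the installed system? "i686"/"i386" here means a
# 32-bit RHEL install, even on 64-bit-capable hardware.
installed_arch=$(uname -m)

# Does the CPU advertise long mode (the "lm" flag == Intel 64 / x86_64)?
if grep -qw lm /proc/cpuinfo; then
    cpu_capability="x86_64-capable"
else
    cpu_capability="32-bit-only"
fi

echo "installed: $installed_arch, cpu: $cpu_capability"
```

If this prints installed: i686 on an x86_64-capable CPU, reinstalling with x86_64 media is the fix the thread converges on.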
-- bk _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list From pmyers at redhat.com Fri Feb 15 16:14:47 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 15 Feb 2013 11:14:47 -0500 Subject: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 In-Reply-To: <566CCFB492940B4DA07EB97FE99C9D7F0C788E@nes-exdb-01.nesassociates.com> References: <566CCFB492940B4DA07EB97FE99C9D7F0C786B@nes-exdb-01.nesassociates.com> <511E5919.40509@redhat.com> <566CCFB492940B4DA07EB97FE99C9D7F0C788E@nes-exdb-01.nesassociates.com> Message-ID: <511E5EF7.205@redhat.com> On 02/15/2013 10:56 AM, Christopher Cobb wrote: > Thank you. It's an Atom N2600. According to this spec sheet: > > http://ark.intel.com/products/58916/Intel-Atom-Processor-N2600-%281M-Cache-1_6-GHz%29 > > it should have "Intel 64" architecture. I guess not everyone agrees... :( Even if the system is x86_64 capable, you need to install the x86_64 version of RHEL to utilize that. It's possible to install the i686 version of RHEL on top of a x86_64 machine, and it appears that is what you may have done in this case > -----Original Message----- > From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Bryan Kearney > Sent: Friday, February 15, 2013 10:50 AM > To: rhos-list at redhat.com > Subject: Re: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 > > On 02/15/2013 10:42 AM, Christopher Cobb wrote: >> Good day, All! >> >> I just did a fresh install of RHES 6.3 and am working through the Getting Started Guide. I'm on page 16 where it asks me to do: yum repolist. I get the following results: >> >> # yum repolist >> Loaded plugins: product-id, subscription-manager Updating >> certificate-based repositories. 
>> rhel-6-server-beta-rpms | 3.4 kB 00:00 >> rhel-6-server-beta-rpms/primary_db | 1.2 MB 00:02 >> rhel-6-server-cf-tools-1-rpms | 2.8 kB 00:00 >> rhel-6-server-cf-tools-1-rpms/primary_db | 18 kB 00:00 >> rhel-6-server-rhev-agent-rpms | 2.8 kB 00:00 >> rhel-6-server-rhev-agent-rpms/primary_db | 11 kB 00:00 >> rhel-6-server-rpms | 3.7 kB 00:00 >> rhel-6-server-rpms/primary_db | 16 MB 00:41 >> https://cdn.redhat.com/content/dist/rhel/server/6/6Server/i386/openstack/folsom/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" >> Trying other mirror. >> repo id repo name status >> rhel-6-server-beta-rpms Red Hat Enterprise Linux 6 Server Beta (RPMs) 831 >> rhel-6-server-cf-tools-1-rpms Red Hat CloudForms Tools for RHEL 6 (RPMs) 30 >> rhel-6-server-rhev-agent-rpms Red Hat Enterprise Virtualization Agents for RHEL 6 Server (RPMs) 16 >> rhel-6-server-rpms Red Hat Enterprise Linux 6 Server (RPMs) 6,812 >> rhel-server-ost-6-folsom-rpms Red Hat OpenStack Folsom Preview (RPMs) 0 >> repolist: 7,689 >> >> I've tried yum clean metadata and yum clean all. They execute without error but I still get the same results with yum repolist. >> >> I can't figure out what I've got wrong. Any suggestions? >> >> cc > > Are you installed on an i386 machine? if so, the bits are not available there (I think). You need to deploy on x86_64. 
> > -- bk > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From prmarino1 at gmail.com Fri Feb 15 18:00:14 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 15 Feb 2013 13:00:14 -0500 Subject: [rhos-list] clustering Cinder with Gluster In-Reply-To: <511E1A3C.6030207@redhat.com> References: <511E1A3C.6030207@redhat.com> Message-ID: On Fri, Feb 15, 2013 at 6:21 AM, Perry Myers wrote: > On 02/14/2013 08:19 PM, Paul Robert Marino wrote: >> Hello >> >> Ive been thinking of ways to make cinder redundant without shared >> storage and I think I have two possible quick answer using Gluster, >> but before I go down this rabbit hole in my test environment I wanted >> to see if any one has tried this before or if any one could point out >> any obvious problems. >> Now I know that support for exporting ISCSI block devices natively is >> in the Gluster road map but it doesn't look like it will happen soon. >> here is what I'm thinking > > Using iSCSI from Gluster or even falling back to NFS in Gluster isn't > strictly necessary. > > The right thing to do is to have a native gluster driver for Cinder, > which our engineers are busy working on. I've cc'd Eric Harney from my > team who has been pushing forward on that effort with help from the Red > Hat Storage (Gluster) team. > > We're tracking inclusion of this Cinder Gluster driver here for RHOS 2.1: > https://bugzilla.redhat.com/show_bug.cgi?id=892686 > > It's not guaranteed it'll get into 2.1, because it still needs to get in > upstream by G-3 in order for that to happen. But that's the target. > > In the future, this Cinder Gluster Driver can be updated to utilize the > qemu native support for Gluster, which should make things perform > better. 
But that functionality is not yet in RHEL 6, so we wouldn't be > able to use it quite yet. Hopefully by RHEL 6.5 it will be there. > >> Scenario 1 >> similar to the examples in the guide I'm thinking of creating a a disk >> image created with the truncate command. >> The big difference is I'm planing to create it on a Gluster share and >> creating a clustered LVM volume and managing it with the HA addon. > > What does clvm provide here? Well, as I said, this is a workaround in the meantime until Gluster can natively or even indirectly be used by Cinder. What clvm would be doing is ensuring that the volume group and the logical volumes it contains on the disk image would be accessible by both the primary and backup cinder hosts at the same time without having to worry too much about race conditions; but also, as I said, it is complicated and does add significant risks of its own. The one good thing I can say about this method is that it could theoretically allow multiple hosts running cinder to export the same logical volumes via iSCSI at the same time, thus allowing a load balancer to handle failover and distribution of the traffic. Also keep in mind that even after cinder gets native support for Gluster this would be useful for shared physical disks such as Fibre Channel SANs and external drive chassis that support being connected to multiple hosts concurrently. With a standard logical volume, locking prevents you from accessing the same logical volume on multiple hosts concurrently, so it's really not possible without clvm. > > Ideally, use the Cinder Driver mentioned above, but in the absence of > that, if you had a Gluster storage environment set up and had that > glusterfs mounted on every Compute Node, I don't see why clvm would be > necessary, and in fact clvm would create a lot of architectural > complexity here. > > The redundancy would be in the Gluster cluster itself (i.e.
once you > point your compute node at one of the gluster bricks to mount the fs, my > understanding is that if a single brick fails, as long as the data you > need is accessible via replication on another brick, things will just > fail over w/o you needing add'l HA software like RHEL HA/CLVM) Well I sort of agree Gluster does provide redundancy via replicated bricks but it doesn't provide process fail over. So either way something will have to manage that. > > I've cc'd a Gluster expert (Vijay) who can correct me if I have that > horribly wrong :) > >> it should be fairly simple for me to create an init script to create >> and remove loop devices via losetup. >> in this scenario the thing that concerns me is the possibility of a >> system getting fenced on boot before the the Gluster volume is ready. >> >> Scenario 2 >> This one is a little simpler since I'm very familiar with keepalived I >> could create a VRRP instance with a floating VIP. >> when a node becomes primary it could initiate a script to start the >> loop device then start the Cinder service. >> on fault or if the node becomes backup I could have it ensure cinder >> has been stopped then remove the loop device. >> there are two things that I'm worried with this scenario >> 1) Since keepalived doesn't understand the concept of a quorum if they >> went into split brain mode this could possibly cause a significant >> problem. I can mitigate this risk by connecting the cinder nodes with >> a pair of dedicated cross over cables (preferably run via separate >> cable trays) but it can never be absolutely eliminate the possibility. >> I can also add a secondary check script that does a file based >> secondary heartbeat but that would be a little more complicated and >> wouldn't help if Gluster was split brained as well. 
>> 2) When a fault happens in keepalived there is either a lag before the >> backup notices and takes over based on the time in the heartbeat >> interval (approximately interval x 3) so there will be a 3 second or >> more delay before the second node attempts to take over. there are >> several patches for sub second intervals some of which I'm familiar >> with (I wrote one of them :-) ) but they add their own issue because >> they can make the system try to react too fast and may not allow >> sufficient time for the failed node to cleanly detach from the volume. >> >> >> scenario 2 is the easiest to implement and despite the concerns its >> the one i think is the safest mostly because I don't like to fence >> nodes because a single process or volume has an issue. my personal >> experiences with fencing is it usually causes more problems than it >> solves although admittedly my opinion of fencing has been tainted by a >> Oracle stretch cluster I use to support which liked to fence nodes any >> time someone half way around the world sneezed. >> >> So does any one have any opinions or comments? > > I think given how Gluster works and provides redundancy by having > multiple storage bricks and replication of the data, the above doesn't > seem like it is necessary and would provide a lot of > overhead/complication to the configuration. Again, there are two different redundancies here. Yes, Gluster provides data replication, but it doesn't handle process failover for Cinder itself. > > But I'll let the Cinder/Gluster folks on the thread here weigh in and > let me know if that's not correct > > Perry From shshang at cisco.com Sat Feb 16 02:31:24 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Sat, 16 Feb 2013 02:31:24 +0000 Subject: [rhos-list] Nova-network v.s.
Quantum in Openstack preview In-Reply-To: <511E4FB4.4020503@redhat.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> <511E165A.9040104@redhat.com> <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> <511E4FB4.4020503@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F75124609BA@xmb-aln-x13.cisco.com> Hi, Perry and Karen: I did some further investigation tonight. The VM instance was initiated with a lot of parameters, among which, here is one line related to the CPU model: -cpu Nehalem,+rdtscp,+vmx,+ht,+ss,+acpi,+ds,+vme -enable-kvm Based on the qemu-kvm command and the cpu_map.xml file, Nehalem and all of the flags are supported. However, when I tried to perform the CPU check, KVM crashed again. The backtrace is identical to the ones I saw in the failed VM instance log: [root at as-cmp1 libvirt]# /usr/libexec/qemu-kvm -cpu Nehalem,check VNC server running on `::1:5900' KVM internal error.
Suberror: 2 extra data[0]: 80000003 extra data[1]: 80000603 rax 00000000000003c3 rbx 00000000000008f2 rcx 000000000000013f rdx 000000000000ffdf rsi 0000000000000006 rdi 000000000000c993 rsp 00000000000003aa rbp 000000000000f000 r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 rip 00000000000010e2 rflags 00000286 cs c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) ds c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) gdt fc558/37 idt 0/3ff cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 FYI, I am using this qemu-kvm version: qemu-kvm-0.12.1.2-2.335.el6.x86_64 The potential workaround is to use a generic CPU model, such as KVM64, with a performance penalty. I will give it a try and keep you posted. In the meanwhile, if you can think of anything else, please let me know at your earliest convenience. Thanks for your help! Shixiong On Feb 15, 2013, at 10:09 AM, Perry Myers > wrote: On 02/15/2013 10:06 AM, Shixiong Shang (shshang) wrote: Hi, Perry: Thanks a lot for chiming in! KVM crashed during operation. VM stayed in "starting" mode for about 10 secs and then went straight to "paused" mode. Here is the output you are looking for, and appreciate the heads up on the RHEL 6.4 GA release! I cannot wait! Thanks for the input below. I don't see anything obvious. We might need to bring in a member of the kvm team to help out here. Karen, is there someone that could take a look at this backtrace?
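When a named CPU model misbehaves like this, one cheap host-side sanity check is to compare the flags the model needs against what the host advertises in /proc/cpuinfo. A minimal sketch — the needed list below is only an illustrative subset, not the authoritative Nehalem definition (that lives in libvirt's cpu_map.xml):

```shell
# Illustrative subset of flags the Nehalem model depends on (assumption;
# consult /usr/share/libvirt/cpu_map.xml for the real definition).
needed="sse4_2 sse4_1 ssse3 popcnt"

# The first "flags" line of /proc/cpuinfo lists what the host advertises;
# pad with spaces so whole-word matching works in the case patterns below.
have=" $(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2) "

report=""
for f in $needed; do
    case "$have" in
        *" $f "*) report="${report}${f}: present
" ;;
        *)        report="${report}${f}: MISSING
" ;;
    esac
done
printf '%s' "$report"
```

Any MISSING line would point at a mismatch between the model and the hardware; all-present output (as expected on a real Nehalem) pushes the suspicion back toward the hardware or kvm itself, as the thread goes on to explore.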
https://www.redhat.com/archives/rhos-list/2013-February/msg00046.html Cheers, Perry Shixiong Last login: Thu Feb 14 21:03:17 2013 from 13.23.225.252 [dmd at as-cmp1 ~]$ virsh capabilities [XML output; tags were stripped by the archive — recoverable details: host UUID ad4b57cf-bf15-06ef-a735-0be06abefc82, arch x86_64, CPU model Nehalem, vendor Intel, migration transport tcp, selinux security model, and hvm guest support for both 32-bit and 64-bit via /usr/libexec/qemu-kvm with machine types rhel6.4.0 (pc), rhel6.3.0, rhel6.2.0, rhel6.1.0, rhel6.0.0, rhel5.5.0, rhel5.4.4, and rhel5.4.0] [dmd at as-cmp1 ~]$ lsmod | grep kvm kvm_intel 53484 0 kvm 315450 1 kvm_intel [dmd at as-cmp1 ~]$ On Feb 15, 2013, at 6:04 AM, Perry Myers > wrote: On 02/15/2013 04:31 AM, Gary Kotton wrote: On 02/15/2013 04:54 AM, Shixiong Shang (shshang) wrote: Hi, Gary: I tried to put SELinux in PERMISSIVE mode and spawned up new VM using cirros image today, but still no luck. I spent the rest of the day searching Redhat bugs based on numerous error msgs in various log files. Found tons of stuff, but most of them didn't seem to be relevant and helpful. One thing caught my eyes is a case submitted back in 2010 and updated early this year. The KVM error message in instance console log is similar to the one in my case. Based on the description, seems like the KVM crash is caused by defective CPU or unsupported CPU model. I will try different machine for better luck. https://bugzilla.redhat.com/show_bug.cgi?id=639208 Shixiong, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehabkost at redhat.com Sat Feb 16 03:52:52 2013 From: ehabkost at redhat.com (Eduardo Habkost) Date: Sat, 16 Feb 2013 01:52:52 -0200 Subject: [rhos-list] Nova-network v.s.
Quantum in Openstack preview In-Reply-To: <6190AA83EB69374DABAE074D7E900F75124609BA@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> <511E165A.9040104@redhat.com> <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> <511E4FB4.4020503@redhat.com> <6190AA83EB69374DABAE074D7E900F75124609BA@xmb-aln-x13.cisco.com> Message-ID: <20130216035252.GA15512@otherpad.lan.raisama.net> Hi, all, I'm Eduardo from the KVM team. Some comments and questions below: On Sat, Feb 16, 2013 at 02:31:24AM +0000, Shixiong Shang (shshang) wrote: > Hi, Perry and Karen: > > I did some further investigation tonight. The VM instance was > initiated with lot of parameters, among which, here is one line > related to CPU model: > > -cpu Nehalem,+rdtscp,+vmx,+ht,+ss,+acpi,+ds,+vme -enable-kvm > > > Based on qemu-kvm command and cpu_map.xml file, Nehalem and all of the > flags are supported. However, when I tried to perform CPU check, KVM > crashed again. The backtrace is identical to the ones I saw in failed > VM instance log: The "check" parameter asks QEMU to print warnings if some CPU features are not supported by the host CPU, but QEMU will start the guest normally after that. So, if you got to the "VNC server running" stage, it means all CPU features from the QEMU "Nehalem" CPU model should be supported by your host CPU + kernel, and the crash happened while the guest was already running, not during the CPU feature check. > > > [root at as-cmp1 libvirt]# /usr/libexec/qemu-kvm -cpu Nehalem,check I am assuming you used just the above command with no extra parameters (meaning you don't even need a disk image to reproduce the bug), right? > VNC server running on `::1:5900' > KVM internal error. Suberror: 2 How long does the error message take to appear, after starting qemu-kvm? 
> extra data[0]: 80000003 > extra data[1]: 80000603 The data above is weird: the CPU is reporting that it was trying to deliver an int3 (but with the interrupt type bits set to "external interrupt", which doesn't make sense), and got another int3 interrupt generated when trying to deliver it. It doesn't look right (the codes don't seem to make sense), and even if it was right, simply running qemu-kvm with no arguments shouldn't end up generating int3 interrupts at all. I would test this in other machines, to make sure this is really not a hardware defect. Could you send the contents of /proc/cpuinfo? If you are able to install the x86info package, the output of 'x86info -v -a' would be useful, too. > rax 00000000000003c3 rbx 00000000000008f2 rcx 000000000000013f rdx 000000000000ffdf > rsi 0000000000000006 rdi 000000000000c993 rsp 00000000000003aa rbp 000000000000f000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 00000000000010e2 rflags 00000286 Interesting, RIP is different from your previous report. Does the value change if you run "/usr/libexec/qemu-kvm -cpu Nehalem,check" again? > cs c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > FYI, I am using this qemu-kvm version: > qemu-kvm-0.12.1.2-2.335.el6.x86_64 Thanks. 
What are the versions of the kernel, seabios, vgabios, and gpxe packages? > > > The potential workaround is to use generic CPU model, such as KVM64, > with performance penalty. I will give it a try and keep you posted. In > the meanwhile, if you can think of anything else, please let me at > your early convenience. If other CPU models work, it may simply indicate that some feature bit enabled by the Nehalem CPU model may be triggering the problem. If that's the case, one way to find out which feature is causing the problem is to try: $ /usr/libexec/qemu-kvm -cpu qemu64,+sse2,+sse,+fxsr,+mmx,+clflush,+pse36,+pat,+cmov,+mca,+pge,+mtrr,+sep,+apic,+cx8,+mce,+pae,+msr,+tsc,+pse,+de,+fpu,+popcnt,+x2apic,+sse4.2,+sse4.1,+cx16,+ssse3,+sse3,+i64,+syscall,+xd,+lahf_lm,model=26 I expect the bug to be reproduced easily using the above command-line. After that, you can gradually remove features from the command-line, until we find which one is triggering the problem. > > Thanks for your help! > > Shixiong > > -- Eduardo From atganesan at paypal.com Sat Feb 16 07:01:39 2013 From: atganesan at paypal.com (Ganesan, Athiyaman) Date: Sat, 16 Feb 2013 07:01:39 +0000 Subject: [rhos-list] Case #00784007 (libvirtd libvirt-0.9.10-21) Message-ID: <70A392E37C052E4FBB00010C0E0A9F084CCE39B0@RHV-EXRDA-S21.corp.ebay.com> Hi Team, We are facing a few issues with libvirtd; the daemon sporadically dies. We have already filed Case #00784007 with Red Hat and have been redirected to this list for support. Could you please check and help us with this? | Case Information | --------------------------------------- https://access.redhat.com/support/cases/00784007 Case Title : libvirtd libvirt-0.9.10-21 sporadically dies Case Number : 00784007 Case Open Date : 2013-01-31 11:56:51 Most recent comment: On 2013-02-04 13:17:28, Miles, Hannah commented: "Ed, We would be unable to directly assist with a custom build of this nature.
I can direct you to this email list that may be of some assistance: rhos-list at redhat.com --------error details--- libvirt-0.9.10-21.el6.x86_64 is sporadically dying across multiple hosts in production: [root at ILOUSE237EBV6 ~]# /etc/init.d/libvirtd status libvirtd dead but pid file exists Please advise on the issue and troubleshooting further. -------- Best Regards Athiyaman. G Cloud Coverage PayPal an ebay Company Ph: +914466346880 email : atganesan at paypal.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From atganesan at paypal.com Sat Feb 16 13:38:36 2013 From: atganesan at paypal.com (Ganesan, Athiyaman) Date: Sat, 16 Feb 2013 13:38:36 +0000 Subject: [rhos-list] Case #00784007 (libvirtd libvirt-0.9.10-21) In-Reply-To: <70A392E37C052E4FBB00010C0E0A9F084CCE39B0@RHV-EXRDA-S21.corp.ebay.com> Message-ID: <70A392E37C052E4FBB00010C0E0A9F084CCE3AC5@RHV-EXRDA-S21.corp.ebay.com> Hi Ken, As discussed, resending the mail with redhat techsupport team. Please treat this as high priority. Best Regards Athiyaman. G Cloud Coverage PayPal an ebay Company Ph: +914466346880 Mob: +919500125652 email : atganesan at paypal.com From: , "Ganesan, Athiyaman" > Date: Saturday, February 16, 2013 12:31 PM To: "rhos-list at redhat.com" >, "support at redhat.com" > Cc: "Vasantha, Rajesh" >, "Pabbisetty, Aravind Kumar" >, "Pillai, Vinu" >, "Sexton,Ed" >, "Hoo, Louis" > Subject: Case #00784007 (libvirtd libvirt-0.9.10-21) Hi Team, We are facing few issues with the libvirtd, it seems the daemon sporadically dies, We have already filed Case #00784007 with red-hat and we have been redirected to this dl to support. 
Could you please check and help us with this? | Case Information | --------------------------------------- https://access.redhat.com/support/cases/00784007 Case Title : libvirtd libvirt-0.9.10-21 sporadically dies Case Number : 00784007 Case Open Date : 2013-01-31 11:56:51 Most recent comment: On 2013-02-04 13:17:28, Miles, Hannah commented: "Ed, We would be unable to directly assist with a custom build of this nature. I can direct you to this email list that may be of some assistance: rhos-list at redhat.com --------error details--- libvirt-0.9.10-21.el6.x86_64 is sporadically dying across multiple hosts in production: [root at ILOUSE237EBV6 ~]# /etc/init.d/libvirtd status libvirtd dead but pid file exists Please advise on the issue and troubleshooting further. -------- Best Regards Athiyaman. G Cloud Coverage PayPal an ebay Company Ph: +914466346880 email : atganesan at paypal.com From atganesan at paypal.com Sat Feb 16 17:21:11 2013 From: atganesan at paypal.com (Ganesan, Athiyaman) Date: Sat, 16 Feb 2013 17:21:11 +0000 Subject: [rhos-list] [techsupport] Case #00784007 (libvirtd libvirt-0.9.10-21) In-Reply-To: <511F9E7F.5050505@redhat.com> Message-ID: <70A392E37C052E4FBB00010C0E0A9F084CCE3BE2@RHV-EXRDA-S21.corp.ebay.com> Hi Hannah, Please update us when you start the analysis on this case. Best Regards Athiyaman.
G Cloud Coverage PayPal an ebay Company Ph: +914466346880 email : atganesan at paypal.com From: Red Hat Support > Reply-To: "techsupport at redhat.com" > Date: Saturday, February 16, 2013 8:28 PM To: "Ganesan, Athiyaman" > Cc: "rhos-list at redhat.com" >, "support at redhat.com" >, "Sexton,Ed" >, "Vasantha, Rajesh" >, "Pabbisetty, Aravind Kumar" >, "Pillai, Vinu" >, "Hoo, Louis" >, Hannah Miles > Subject: Re: [techsupport] Case #00784007 (libvirtd libvirt-0.9.10-21) Hi, Further to our conversation earlier I wasn't aware at the time that you were using openstack software which has not yet been released by Red Hat. As such, all unsupported software calls are treated as severity 4 and do not qualify for 24x7 support. You are lucky that Hannah is working today, so she might be able to give you some help before the end of the weekend. Otherwise our normal support terms have no time limit for unsupported software. Regards, Ken. On 16/02/13 13:38, Ganesan, Athiyaman wrote: Hi Ken, As discussed, resending the mail with redhat techsupport team. Please treat this as high priority. Best Regards Athiyaman. G Cloud Coverage PayPal an ebay Company Ph: +914466346880 Mob: +919500125652 email : atganesan at paypal.com From: , "Ganesan, Athiyaman" > Date: Saturday, February 16, 2013 12:31 PM To: "rhos-list at redhat.com" >, "support at redhat.com" > Cc: "Vasantha, Rajesh" >, "Pabbisetty, Aravind Kumar" >, "Pillai, Vinu" >, "Sexton,Ed" >, "Hoo, Louis" > Subject: Case #00784007 (libvirtd libvirt-0.9.10-21) Hi Team, We are facing few issues with the libvirtd, it seems the daemon sporadically dies, We have already filed Case #00784007 with red-hat and we have been redirected to this dl to support. 
Could you please check and us help on this | Case Information | --------------------------------------- https://access.redhat.com/support/cases/00784007 Case Title : libvirtd libvirt-0.9.10-21 sporadically dies Case Number : 00784007 Case Open Date : 2013-01-31 11:56:51 Most recent comment: On 2013-02-04 13:17:28, Miles, Hannah commented: "Ed, We would be unable to directly assist with a custom build of this nature. I can direct you to this email list that may be of some assistance: rhos-list at redhat.com --------error details--- libvirt-0.9.10-21.el6.x86_64 is sporadically dying across multiple hosts in production: [root at ILOUSE237EBV6 ~]# /etc/init.d/libvirtd status libvirtd dead but pid file exists Please advise on the issue and troubleshooting further. -------- Best Regards Athiyaman. G Cloud Coverage PayPal an ebay Company Ph: +914466346880 email : atganesan at paypal.com -- Regards, Ken Booth Red Hat UK Ltd Senior Technical Support Engineer 200 Fowler Avenue Tel: +44 1252 362710 Farnborough Business Park email: techsupport at redhat.com Farnborough, Hampshire GU14 7JP Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham (USA), Brendan Lane (Ireland), Matt Parson (USA), Charlie Peters (USA) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shshang at cisco.com Sat Feb 16 18:55:56 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Sat, 16 Feb 2013 18:55:56 +0000 Subject: [rhos-list] Nova-network v.s. 
Quantum in Openstack preview In-Reply-To: <20130216035252.GA15512@otherpad.lan.raisama.net> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> <511E165A.9040104@redhat.com> <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> <511E4FB4.4020503@redhat.com> <6190AA83EB69374DABAE074D7E900F75124609BA@xmb-aln-x13.cisco.com> <20130216035252.GA15512@otherpad.lan.raisama.net> Message-ID: <6190AA83EB69374DABAE074D7E900F75124615D2@xmb-aln-x13.cisco.com> Hi, Eduardo: Thanks a lot for the comments! It is really helpful! Based on your suggestion, I did a quick verification on the CPU flags, and the result is very ugly... KVM crashes for most of the flags I tested: No crash: fxsr Crash: sse2, sse, mmx, clflush, pse36, pat, cmov, mca The test was conducted using qemu-kvm-rhev-0.12.1.2-2.351.el6.x86_64. I did a yum update last night before going to bed, and qemu-kvm-0.12.1.2-2.335.el6.x86_64 was obsoleted by qemu-kvm-rhev-0.12.1.2-2.351.el6.x86_64. I didn't exhaust the list, but all of these flags should be supported by Nehalem. At this moment, do you think we may have a CPU defect? Please see the attached TXT file for details. Thanks! Shixiong On Feb 15, 2013, at 10:52 PM, Eduardo Habkost > wrote: Hi, all, I'm Eduardo from the KVM team. Some comments and questions below: On Sat, Feb 16, 2013 at 02:31:24AM +0000, Shixiong Shang (shshang) wrote: Hi, Perry and Karen: I did some further investigation tonight. The VM instance was initiated with a lot of parameters, among which here is the one line related to the CPU model: -cpu Nehalem,+rdtscp,+vmx,+ht,+ss,+acpi,+ds,+vme -enable-kvm Based on the qemu-kvm command and the cpu_map.xml file, Nehalem and all of the flags are supported. However, when I tried to perform the CPU check, KVM crashed again.
The backtrace is identical to the ones I saw in failed VM instance log: The "check" parameter asks QEMU to print warnings if some CPU features are not supported by the host CPU, but QEMU will start the guest normally after that. So, if you got to the "VNC server running" stage, it means all CPU features from the QEMU "Nehalem" CPU model should be supported by your host CPU + kernel, and the crash happened while the guest was already running, not during the CPU feature check. [root at as-cmp1 libvirt]# /usr/libexec/qemu-kvm -cpu Nehalem,check I am assuming you used just the above command with no extra parameters (meaning you don't even need a disk image to reproduce the bug), right? VNC server running on `::1:5900' KVM internal error. Suberror: 2 How long does the error message take to appear, after starting qemu-kvm? extra data[0]: 80000003 extra data[1]: 80000603 The data above is weird: the CPU is reporting that it was trying to deliver an int3 (but with the interrupt type bits set to "external interrupt", which doesn't make sense), and got another int3 interrupt generated when trying to deliver it. It doesn't look right (the codes don't seem to make sense), and even if it was right, simply running qemu-kvm with no arguments shouldn't end up generating int3 interrupts at all. I would test this in other machines, to make sure this is really not a hardware defect. Could you send the contents of /proc/cpuinfo? If you are able to install the x86info package, the output of 'x86info -v -a' would be useful, too. rax 00000000000003c3 rbx 00000000000008f2 rcx 000000000000013f rdx 000000000000ffdf rsi 0000000000000006 rdi 000000000000c993 rsp 00000000000003aa rbp 000000000000f000 r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 rip 00000000000010e2 rflags 00000286 Interesting, RIP is different from your previous report. 
Does the value change if you run "/usr/libexec/qemu-kvm -cpu Nehalem,check" again? cs c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) ds c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) gdt fc558/37 idt 0/3ff cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 FYI, I am using this qemu-kvm version: qemu-kvm-0.12.1.2-2.335.el6.x86_64 Thanks. What are the versions of the kernel, seabios, vgabios, and gpxe packages? The potential workaround is to use a generic CPU model, such as KVM64, with a performance penalty. I will give it a try and keep you posted. In the meanwhile, if you can think of anything else, please let me know at your earliest convenience. If other CPU models work, it may simply indicate that some feature bit enabled by the Nehalem CPU model may be triggering the problem. If that's the case, one way to find out which feature is causing the problem is to try: $ /usr/libexec/qemu-kvm -cpu qemu64,+sse2,+sse,+fxsr,+mmx,+clflush,+pse36,+pat,+cmov,+mca,+pge,+mtrr,+sep,+apic,+cx8,+mce,+pae,+msr,+tsc,+pse,+de,+fpu,+popcnt,+x2apic,+sse4.2,+sse4.1,+cx16,+ssse3,+sse3,+i64,+syscall,+xd,+lahf_lm,model=26 I expect the bug to be reproduced easily using the above command line. After that, you can gradually remove features from the command line until we find which one is triggering the problem. Thanks for your help! Shixiong -- Eduardo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: qemu-kvm cpu checks.txt URL: From atganesan at paypal.com Sun Feb 17 09:30:31 2013 From: atganesan at paypal.com (Ganesan, Athiyaman) Date: Sun, 17 Feb 2013 09:30:31 +0000 Subject: [rhos-list] [techsupport] Case #00784007 (libvirtd libvirt-0.9.10-21) In-Reply-To: Message-ID: <70A392E37C052E4FBB00010C0E0A9F084CCE404D@RHV-EXRDA-S21.corp.ebay.com> Hi Redhat support team, I have uploaded the libvirtd debug error log and the core dump of libvirtd to "https://access.redhat.com/support/cases/00784007". Please analyze the log and update us on the status. Note: the case is now Sev 1; we need updates based on the Sev 1 support timeline. Best Regards Athiyaman. G Cloud Coverage PayPal an ebay Company Ph: +914466346880 email : atganesan at paypal.com From: , Louis > Date: Sunday, February 17, 2013 5:25 AM To: "Ganesan, Athiyaman" >, "techsupport at redhat.com" >, Hannah Miles > Cc: "rhos-list at redhat.com" >, "support at redhat.com" >, "Sexton,Ed" >, "Vasantha, Rajesh" >, "Pabbisetty, Aravind Kumar" >, "Pillai, Vinu" >, "Brodskiy, Yuriy" > Subject: Re: [techsupport] Case #00784007 (libvirtd libvirt-0.9.10-21) Redhat Support: Openstack has nothing to do with libvirt. What is it going to take to escalate this to a sev 1? Thanks, -Louis. From: , Athiyaman > Date: Saturday, February 16, 2013 10:21 AM To: "techsupport at redhat.com" >, Hannah Miles > Cc: "rhos-list at redhat.com" >, "support at redhat.com" >, "Sexton,Ed" >, "Vasantha, Rajesh" >, Aravind Kumar Pabbisetty >, "Pillai, Vinu" >, "Hoo, Louis" > Subject: Re: [techsupport] Case #00784007 (libvirtd libvirt-0.9.10-21) Hi Hannah, Please update us when you start analysis on this case. Best Regards Athiyaman.
From atganesan at paypal.com Sun Feb 17 13:10:38 2013 From: atganesan at paypal.com (Ganesan, Athiyaman) Date: Sun, 17 Feb 2013 13:10:38 +0000 Subject: [rhos-list] [techsupport] Case #00784007 (libvirtd libvirt-0.9.10-21) In-Reply-To: <5120B416.6090806@redhat.com> Message-ID: <70A392E37C052E4FBB00010C0E0A9F084CCE4126@RHV-EXRDA-S21.corp.ebay.com> Hi Ken, The core file is not being generated per "https://access.redhat.com/knowledge/node/5352". Is there another way to get it?
[root at slcbcn29use235db0e kernel]# ls -l core* -rw-r--r-- 1 root root 0 Feb 17 03:51 core_pattern -rw-r--r-- 1 root root 0 Feb 17 03:05 core_pipe_limit -rw-r--r-- 1 root root 0 Feb 17 05:03 core_uses_pid [root at slcbcn29use235db0e kernel]# Best Regards Athiyaman. G Cloud Coverage PayPal an ebay Company Ph: +914466346880 email : atganesan at paypal.com From: Red Hat Support > Reply-To: "techsupport at redhat.com" > Date: Sunday, February 17, 2013 4:12 PM To: "Ganesan, Athiyaman" > Cc: Hannah Miles >, "Vasantha, Rajesh" >, "Pabbisetty, Aravind Kumar" >, "Brodskiy, Yuriy" >, "rhos-list at redhat.com" >, "Sexton,Ed" >, "Pillai, Vinu" >, "support at redhat.com" >, "Hoo, Louis" > Subject: Re: [techsupport] Case #00784007 (libvirtd libvirt-0.9.10-21) Hi, I've checked through the libvirt log and not seen anything obvious to identify an error, not even a message to indicate that it aborted and created a corefile. Also, the corefile is not present on the ticket. Please can you re-upload the corefile and provide details about exactly what you were doing and when, so I can match those times to the times in the log? Regards, Ken. On 17/02/13 09:30, Ganesan, Athiyaman wrote: Hi Redhat support team, I have uploaded the libvirtd debug error log and the core dump of libvirtd to "https://access.redhat.com/support/cases/00784007". Please analyze the log and update us on the status. Note: the case is now Sev 1; we need updates based on the Sev 1 support timeline. Best Regards Athiyaman.
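An aside on the missing core file: the `ls -l core*` output above lists the sysctl entries under /proc/sys/kernel, and files in procfs always report size 0, so those zero-byte entries do not by themselves indicate a problem. Whether a daemon crash actually produces a core file depends on both the kernel core_pattern and the process's RLIMIT_CORE. A minimal diagnostic sketch (the helper name is ours, not from the Red Hat article):

```python
import os
import resource

def core_dump_settings():
    """Report the two settings that usually explain a missing core file:
    the process RLIMIT_CORE (0 means cores are suppressed) and the
    kernel core_pattern (where/how a dump is written)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    pattern = ""
    path = "/proc/sys/kernel/core_pattern"
    if os.path.exists(path):  # procfs is Linux-only; guard for portability
        with open(path) as f:
            pattern = f.read().strip()
    return {"rlimit_core_soft": soft,
            "rlimit_core_hard": hard,
            "core_pattern": pattern}
```

A soft RLIMIT_CORE of 0 inherited by libvirtd is a common culprit; raising it (e.g. via `ulimit -c unlimited` before starting the daemon) is usually the first thing to check.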
-- Regards, Ken Booth Red Hat UK Ltd Senior Technical Support Engineer 200 Fowler Avenue Tel: +44 1252 362710 Farnborough Business Park email: techsupport at redhat.com Farnborough, Hampshire GU14 7JP Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham (USA), Brendan Lane (Ireland), Matt Parson (USA), Charlie Peters (USA) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shshang at cisco.com Mon Feb 18 04:48:05 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Mon, 18 Feb 2013 04:48:05 +0000 Subject: [rhos-list] dnsmasq cannot start properly Message-ID: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> Hi, guys: I am using dnsmasq as the DHCP server to assign IP addresses to VMs. The "dnsmasq" process seemed to start OK. nobody 2919 1 0 23:16 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts However, I noticed that all three config files referenced by the dnsmasq process were empty. Based on the dhcp_agent.ini file, dnsmasq should look in /var/lib/quantum for config files... Why did it load files from /var/lib/libvirt/dnsmasq? [root at as-net1 bin]# cd /var/lib/libvirt/dnsmasq/ [root at as-net1 dnsmasq]# ls -lh total 0 -rw-r--r--. 1 root root 0 Feb 17 23:14 default.addnhosts -rw-r--r--. 1 root root 0 Feb 17 23:14 default.hostsfile -rw-r--r--.
1 root root 0 Feb 4 10:03 default.leases In addition, the system log threw the following error messages at the time I restarted the DHCP agent: Feb 17 23:37:19 as-net1 kernel: type=1400 audit(1361162239.626:560): avc: denied { read } for pid=13252 comm="dnsmasq" name="sh" dev=dm-0 ino=1572867 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=lnk_file Feb 17 23:37:19 as-net1 dnsmasq[13251]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory Feb 17 23:37:19 as-net1 dnsmasq[13251]: FAILED to start up Feb 17 23:37:22 as-net1 dnsmasq[13297]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory Feb 17 23:37:22 as-net1 dnsmasq[13297]: FAILED to start up When I tried to execute the script manually, it gave me this traceback... [dmd at as-net1 bin]$ /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update Traceback (most recent call last): File "/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update", line 20, in dhcp.Dnsmasq.lease_update() File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 341, in lease_update action = sys.argv[1] IndexError: list index out of range Would you please shed some light here? Thank you! Shixiong Shixiong Shang Solution Architect WWSP Digital Media Solution Architect Cisco Services CCIE R&S - #17235 shshang at cisco.com Phone: +1 919 392 5192 Mobile: +1 919 272 1358 Cisco Systems, Inc. 7200-4 Kit Creek Road RTP, NC 27709-4987 United States Cisco.com !--- Stay Hungry Stay Foolish ---! This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message.
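A note on the traceback above: lease_update() reads sys.argv[1] unconditionally, because dnsmasq normally invokes the lease-update script with an action argument (add/del/init). Running the script by hand with no arguments therefore indexes past the end of argv. A minimal sketch of the failure mode and an argument guard (our illustration, not the quantum source):

```python
import sys

def lease_update(argv):
    # quantum's dhcp.py does the equivalent of `action = sys.argv[1]`
    # with no length check, hence the IndexError when run manually.
    if len(argv) < 2:
        raise SystemExit(
            "usage: lease-update <action> [args...] "
            "(dnsmasq supplies the action when it invokes the script)")
    return argv[1]

if __name__ == "__main__":
    print(lease_update(sys.argv))
```

So the manual invocation failing with IndexError is expected; the real problem in the log is that dnsmasq could not find the script at /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update at all.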
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 9461 bytes Desc: image001.png URL: From ehabkost at redhat.com Mon Feb 18 13:34:46 2013 From: ehabkost at redhat.com (Eduardo Habkost) Date: Mon, 18 Feb 2013 10:34:46 -0300 Subject: [rhos-list] Nova-network v.s. Quantum in Openstack preview In-Reply-To: <6190AA83EB69374DABAE074D7E900F75124615D2@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> <511E165A.9040104@redhat.com> <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> <511E4FB4.4020503@redhat.com> <6190AA83EB69374DABAE074D7E900F75124609BA@xmb-aln-x13.cisco.com> <20130216035252.GA15512@otherpad.lan.raisama.net> <6190AA83EB69374DABAE074D7E900F75124615D2@xmb-aln-x13.cisco.com> Message-ID: <20130218133446.GM8494@otherpad.lan.raisama.net> Thanks for the information. There's no need to exhaustively test each CPU flag; I just wanted to find a minimal test case where the bug could be reproduced by simply enabling one feature bit. Note that the bug may be completely unrelated to support for feature flags on Nehalem or your host CPU. The problem is happening when KVM is already running guest code, not during CPU feature check/initialization. I don't know yet what could be causing the issue, but I don't rule out the possibility of a CPU defect. In any case, this bug is not reproducible on any of the machines where we have run our tests, so we need more details about your host CPU: the contents of /proc/cpuinfo and (in case you can install x86info) the output of 'x86info -v -a'. Also, I still need to know the versions of the kernel, seabios, vgabios, and gpxe packages.
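Eduardo's bisection procedure (start from the full feature list and drop one flag at a time until the crash disappears) can be scripted rather than edited by hand. A sketch using the flag list from the thread; the helper names are ours, and the qemu-kvm path matches the one used elsewhere in these emails:

```python
# Flag list taken from the full-feature qemu-kvm command line in the thread.
FLAGS = ["sse2", "sse", "fxsr", "mmx", "clflush", "pse36", "pat", "cmov",
         "mca", "pge", "mtrr", "sep", "apic", "cx8", "mce", "pae", "msr",
         "tsc", "pse", "de", "fpu", "popcnt", "x2apic", "sse4.2", "sse4.1",
         "cx16", "ssse3", "sse3", "i64", "syscall", "xd", "lahf_lm"]

def cmdline(flags, model=26):
    """Build a qemu-kvm command line enabling exactly `flags` on qemu64."""
    feats = ",".join("+%s" % f for f in flags)
    return "/usr/libexec/qemu-kvm -cpu qemu64,%s,model=%d" % (feats, model)

def bisection_candidates(flags):
    """One candidate command line per flag, with that single flag removed."""
    return [(dropped, cmdline([f for f in flags if f != dropped]))
            for dropped in flags]
```

Each generated variant can then be run in turn; a variant that no longer triggers "KVM internal error" points at the dropped flag as the trigger.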
> > Shixiong > > > > -- > Eduardo > > # > # Full Feature List > # > > /usr/libexec/qemu-kvm -cpu qemu64,+sse2,+sse,+fxsr,+mmx,+clflush,+pse36,+pat,+cmov,+mca,+pge,+mtrr,+sep,+apic,+cx8,+mce,+pae,+msr,+tsc,+pse,+de,+fpu,+popcnt,+x2apic,+sse4.2,+sse4.1,+cx16,+ssse3,+sse3,+i64,+syscall,+xd,+lahf_lm,model=26 > VNC server running on `::1:5900' > > KVM internal error. Suberror: 2 > extra data[0]: 80000006 > extra data[1]: 80000306 > rax 000000000000f043 rbx 000000000000049a rcx 0000000000000003 rdx 000000000000f000 > rsi 0000000000000003 rdi 000000000000c992 rsp 00000000000003ca rbp 000000000000f000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000001038 rflags 00010286 > cs c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > > > > > # > # Bad - sse2 > # > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+sse2 > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 2 > extra data[0]: 80000003 > extra data[1]: 80000603 > rax 000000000000ffc3 rbx 000000000000045e rcx 00000000000018ff rdx 000000000000f0ff > rsi 0000000000000003 rdi 000000000000c993 rsp 000000000000038c rbp 000000000000f000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 00000000000010e2 rflags 00000286 > cs c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > # > # Bad - sse > # > > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+sse > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 2 > extra data[0]: 80000006 > extra data[1]: 80000306 > rax 000000000000df7f rbx 0000000000000052 rcx 000000000000f4ff rdx 000000000000f07f > rsi 0000000000000003 rdi 000000000000c992 rsp 00000000000003c8 rbp 000000000000f000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000001140 rflags 00010213 > cs c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+fxsr > VNC server running on `::1:5900' > > > > # > # Bad - mmx > # > > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+fxsr,+mmx, > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 2 > extra data[0]: 80000003 > extra data[1]: 80000603 > rax 0000000000009f7f rbx 0000000000000042 rcx 000000000000f4ff rdx 000000000000f07f > rsi 0000000000000003 rdi 000000000000c992 rsp 00000000000003c8 rbp 000000000000f000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000001138 rflags 00000213 > cs c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es c000 (000c0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > # > # Bad clflush > # > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+fxsr,+clflush > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 2 > extra data[0]: 80000003 > extra data[1]: 80000603 > rax 0000000000000023 rbx 00000000000000fd rcx 0000000000000000 rdx 00000000000003d1 > rsi 0000000000000003 rdi 000000000000c992 rsp 000000000000be76 rbp 0000000000000000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000006f05 rflags 00000016 > cs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > > # > # Bad pse36 > # > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+fxsr,+pse36 > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 2 > extra data[0]: 80000006 > extra data[1]: 80000306 > rax 000000003c710000 rbx 0000000000000000 rcx 00000000f000be74 rdx 000000000cfe0000 > rsi 000000006ef00000 rdi 000000007e208000 rsp 0000000000006ea2 rbp 000000006f2f0000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000000040 rflags 00010446 > cs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es 1800 (00018000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > # > # Bad pat > # > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+fxsr,+pat > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 2 > extra data[0]: 80000003 > extra data[1]: 80000603 > rax 0000000000000023 rbx 00000000000000fd rcx 0000000000000000 rdx 00000000000003d1 > rsi 0000000000000003 rdi 000000000000c992 rsp 000000000000be76 rbp 0000000000000000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000006f05 rflags 00000016 > cs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > > # > # Bad cmov > # > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+fxsr,+cmov > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 1 > rax 000000003c71ffff rbx 00000000000000ef rcx 000000000000ffff rdx 000000000cfea962 > rsi 000000006ef07ae4 rdi 000000007e20fffc rsp 0000000000009101 rbp 000000006f2ffff0 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000004586 rflags 00010006 > cs f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss a961 (000a9610/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > emulation failure, check dmesg for details > > > > # > # Bad mca > # > [dmd at as-cmp1 ~]$ /usr/libexec/qemu-kvm -cpu qemu64,+fxsr,+mca > VNC server running on `::1:5900' > KVM internal error. 
Suberror: 2 > extra data[0]: 80000003 > extra data[1]: 80000603 > rax 0000000000000023 rbx 00000000000000fd rcx 0000000000000000 rdx 00000000000003d1 > rsi 0000000000000003 rdi 000000000000c992 rsp 000000000000be76 rbp 0000000000000000 > r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000 > r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000 > rip 0000000000006f05 rflags 00000016 > cs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > es f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0) > tr 0000 (feffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) > ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0) > gdt fc558/37 > idt 0/3ff > cr0 10 cr2 0 cr3 0 cr4 0 cr8 0 efer 0 > > > > > > > > > > > > > > > > > > > > > > > > > > -- Eduardo From pmyers at redhat.com Mon Feb 18 13:38:01 2013 From: pmyers at redhat.com (Perry Myers) Date: Mon, 18 Feb 2013 08:38:01 -0500 Subject: [rhos-list] Nova-network v.s. 
Quantum in Openstack preview In-Reply-To: <20130218133446.GM8494@otherpad.lan.raisama.net> References: <6190AA83EB69374DABAE074D7E900F75124592B7@xmb-aln-x13.cisco.com> <511CFF01.4080909@redhat.com> <6190AA83EB69374DABAE074D7E900F751245D241@xmb-aln-x13.cisco.com> <511E0077.9060001@redhat.com> <511E165A.9040104@redhat.com> <6190AA83EB69374DABAE074D7E900F751245DD6C@xmb-aln-x13.cisco.com> <511E4FB4.4020503@redhat.com> <6190AA83EB69374DABAE074D7E900F75124609BA@xmb-aln-x13.cisco.com> <20130216035252.GA15512@otherpad.lan.raisama.net> <6190AA83EB69374DABAE074D7E900F75124615D2@xmb-aln-x13.cisco.com> <20130218133446.GM8494@otherpad.lan.raisama.net> Message-ID: <51222EB9.4040805@redhat.com> On 02/18/2013 08:34 AM, Eduardo Habkost wrote: > > Thanks for the information. There's no need to test exhaustively each > CPU flag, I just wanted to find a minimal test case where the bug could > be reproduced by simply enabling one feature bit. > > Note that the bug may be completely unrelated to support of feature > flags on Nehalem or your host CPU. The problem is happening when KVM is > already running guest code, not during CPU feature check/initialization. > > I don't know yet what can be causing the issue, but I don't discard the > possibility of a CPU defect. In either case, this bug is not > reproducible on any of the machines where we have run our tests, so we > need more details about your host CPU: the contents of /proc/cpuinfo and > (in case you can install x86info) the output of 'x86info -v -a' If it's a cpu defect... Shixiong, can you at least reproduce this on multiple physical machines or is it only reproducible on a single server thus far? > Also, I still need to know the version of the kernel, seabios, vgabios, > and gpxe packages. 
From alvaro at redhat.com Mon Feb 18 16:09:15 2013 From: alvaro at redhat.com (Alvaro Lopez Ortega) Date: Mon, 18 Feb 2013 17:09:15 +0100 Subject: [rhos-list] Introducing Packstack Message-ID: Hi everybody, Packstack is a utility that can be used to set up an OpenStack deployment in a distributed environment. Packstack is part of the RHOS, EPEL and Fedora 18 repositories. Our current target distributions are RHEL 6.4 and Fedora 18. Using Packstack you define remote hosts on which you would like to install various OpenStack components. Packstack should first be run to generate an answerfile; the user then edits this answerfile and reruns packstack with it. Below is a brief outline of how a user can use Packstack to install a single OpenStack controller with 2 compute nodes: $ yum install -y openstack-packstack $ packstack --gen-answer-file=ans.txt The file ans.txt has been created and some of the default values should be edited to represent the deployment you want to install; in most cases the values you will want to edit are: - These should be set to match the public and private interfaces for nova-network (flatdhcp): CONFIG_NOVA_NETWORK_PUBIF=eth0 CONFIG_NOVA_COMPUTE_PRIVIF=eth1 CONFIG_NOVA_NETWORK_PRIVIF=eth1 - If your hosts have not been subscribed to Red Hat with subscription-manager then packstack can do this; simply provide your credentials here: CONFIG_RH_USERNAME= CONFIG_RH_PASSWORD= - All of the hostnames are set by default to the current host; in our case this will be the OpenStack controller, but we need to edit the IP addresses for the nova-compute nodes: CONFIG_NOVA_COMPUTE_HOSTS=1.2.3.4,1.2.3.5 Then, the deployment would be launched by executing: $ packstack --answer-file=ans.txt Packstack will now ssh to each host and install OpenStack by applying a series of Puppet manifests; depending on your setup this process can take a little time. 
There are a lot more options that can be changed in the answerfile, all of which are documented in the answerfile. It is also possible to enable / disable some modules in order to install just what's needed. Please, find more information about Packstack at: https://wiki.openstack.org/wiki/Packstack Kind regards, Alvaro From chrisw at redhat.com Mon Feb 18 20:17:30 2013 From: chrisw at redhat.com (Chris Wright) Date: Mon, 18 Feb 2013 12:17:30 -0800 Subject: [rhos-list] dnsmasq cannot start properly In-Reply-To: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> Message-ID: <20130218201730.GE26800@x200.localdomain> * Shixiong Shang (shshang) (shshang at cisco.com) wrote: > Hi, guys: > > I am using dnsmasq as DHCP server to assign IP address to VMs. The "dnsmasq" process seemed to start ok. > > nobody 2919 1 0 23:16 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts This is the libvirt dnsmasq for running locally (un)managed libvirt based VMs, NAT'd to the host interface. > > However, I noticed that, all three config files referred by dnsmasq process were all empty. Based on dhcp_agent.ini file, dnsmasq should go to /var/lib/quantum for config files?.Why did they load files from /var/lib/libvirt/dnsmasq? > [root at as-net1 bin]# cd /var/lib/libvirt/dnsmasq/ > [root at as-net1 dnsmasq]# ls -lh > total 0 > -rw-r--r--. 1 root root 0 Feb 17 23:14 default.addnhosts > -rw-r--r--. 1 root root 0 Feb 17 23:14 default.hostsfile > -rw-r--r--. 
1 root root 0 Feb 4 10:03 default.leases > > > In addition, system log threw the following error message at the time when I restarted dhcp agent: > > Feb 17 23:37:19 as-net1 kernel: type=1400 audit(1361162239.626:560): avc: denied { read } for pid=13252 comm="dnsmasq" name="sh" dev=dm-0 ino=1572867 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=lnk_file This is SELinux...you can make sure selinux is in permissive mode > Feb 17 23:37:19 as-net1 dnsmasq[13251]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory > Feb 17 23:37:19 as-net1 dnsmasq[13251]: FAILED to start up > Feb 17 23:37:22 as-net1 dnsmasq[13297]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory > Feb 17 23:37:22 as-net1 dnsmasq[13297]: FAILED to start up > > When I tried to execute the script manually, it gave me this traceback?.. > > [dmd at as-net1 bin]$ /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update > Traceback (most recent call last): > File "/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update", line 20, in > dhcp.Dnsmasq.lease_update() > File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 341, in lease_update > action = sys.argv[1] > IndexError: list index out of range This may be related to the above, can take a deeper look shortly. thanks, -chris From nux at li.nux.ro Mon Feb 18 21:40:25 2013 From: nux at li.nux.ro (Nux!) Date: Mon, 18 Feb 2013 21:40:25 +0000 Subject: [rhos-list] Introducing Packstack In-Reply-To: References: Message-ID: <252e63471760e7ca55a4d65b46703f67@li.nux.ro> On 18.02.2013 16:09, Alvaro Lopez Ortega wrote: > - These should be set to match the public and private interfaces for > nova-network (flatdhcp): Hello, Great to hear about packstack maturing, but a quick question. Since you are targeting EL 6.4 which finally supports openvswitch based setups, why not go straight for quantum? 
It sounds like it's the future of openstack networking. Any idea for how many more versions nova-network will be present in openstack? Lucian -- Sent from the Delta quadrant using Borg technology! Nux! www.nux.ro From rbryant at redhat.com Mon Feb 18 21:42:36 2013 From: rbryant at redhat.com (Russell Bryant) Date: Mon, 18 Feb 2013 16:42:36 -0500 Subject: [rhos-list] Introducing Packstack In-Reply-To: <252e63471760e7ca55a4d65b46703f67@li.nux.ro> References: <252e63471760e7ca55a4d65b46703f67@li.nux.ro> Message-ID: <5122A04C.3050100@redhat.com> On 02/18/2013 04:40 PM, Nux! wrote: > On 18.02.2013 16:09, Alvaro Lopez Ortega wrote: >> - These should be set to match the public and private interfaces for >> nova-network (flatdhcp): > > Hello, > > Great to hear about packstack maturing, but a quick question. Since you > are targeting EL 6.4 which finally supports openvswitch based setups, > why not go straight for quantum? It sounds like it's the future of > openstack networking. > Any idea for how many more versions nova-network will be present in > openstack? nova-network isn't going away until there is complete feature parity in Quantum. Packstack supports Folsom, where quantum and nova-network do not have feature parity. The Grizzly release will be closer, but I'm not sure that all of the gaps have been closed yet. -- Russell Bryant From shshang at cisco.com Mon Feb 18 22:09:58 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Mon, 18 Feb 2013 22:09:58 +0000 Subject: [rhos-list] dnsmasq cannot start properly In-Reply-To: <20130218201730.GE26800@x200.localdomain> References: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> <20130218201730.GE26800@x200.localdomain> Message-ID: <6190AA83EB69374DABAE074D7E900F7512466564@xmb-aln-x13.cisco.com> Hi, Chris: Thanks a lot for the quick response! I turned SELinux to "Permissive" mode and now the error messages are different. 
Seems like the "dnsmasq" process still has hard time to access some files. But the good news is, at least the process started and loaded the right files from "/var/lib/quantum/dhcp" directory. dmd at as-net1 ~]$ sudo tail -n 200 /var/log/messages | grep dnsmasq Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:42): avc: denied { read } for pid=16963 comm="dnsmasq" name="sh" dev=dm-0 ino=1572867 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=lnk_file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:43): avc: denied { execute } for pid=16963 comm="dnsmasq" name="bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:44): avc: denied { read open } for pid=16963 comm="dnsmasq" name="bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:45): avc: denied { execute_no_trans } for pid=16963 comm="dnsmasq" path="/bin/bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:46): avc: denied { getattr } for pid=16963 comm="sh" path="/bin/bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.262:47): avc: denied { execute } for pid=16963 comm="sh" name="quantum-dhcp-agent-dnsmasq-lease-update" dev=dm-0 ino=2102246 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.262:48): avc: denied { read open } for pid=16963 comm="sh" name="quantum-dhcp-agent-dnsmasq-lease-update" dev=dm-0 ino=2102246 
scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.262:49): avc: denied { execute_no_trans } for pid=16963 comm="sh" path="/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update" dev=dm-0 ino=2102246 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=file Feb 18 12:41:31 as-net1 dnsmasq[16966]: started, version 2.48 cachesize 150 Feb 18 12:41:31 as-net1 dnsmasq[16966]: compile time options: IPv6 GNU-getopt DBus no-I18N DHCP TFTP Feb 18 12:41:31 as-net1 dnsmasq[16966]: warning: no upstream servers configured Feb 18 12:41:31 as-net1 dnsmasq-dhcp[16966]: DHCP, static leases only on 192.168.178.0, lease time 2m Feb 18 12:41:31 as-net1 dnsmasq[16966]: cleared cache Feb 18 12:41:31 as-net1 dnsmasq[16966]: read /var/lib/quantum/dhcp/6462a2a6-28cc-4472-907e-34bf02c9e81e/host Feb 18 12:41:31 as-net1 dnsmasq[16966]: read /var/lib/quantum/dhcp/6462a2a6-28cc-4472-907e-34bf02c9e81e/opts [dmd at as-net1 ~]$ Shixiong On Feb 18, 2013, at 3:17 PM, Chris Wright > wrote: * Shixiong Shang (shshang) (shshang at cisco.com) wrote: Hi, guys: I am using dnsmasq as DHCP server to assign IP address to VMs. The "dnsmasq" process seemed to start ok. nobody 2919 1 0 23:16 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts This is the libvirt dnsmasq for running locally (un)managed libvirt based VMs, NAT'd to the host interface. However, I noticed that, all three config files referred by dnsmasq process were all empty. 
Based on dhcp_agent.ini file, dnsmasq should go to /var/lib/quantum for config files?.Why did they load files from /var/lib/libvirt/dnsmasq? [root at as-net1 bin]# cd /var/lib/libvirt/dnsmasq/ [root at as-net1 dnsmasq]# ls -lh total 0 -rw-r--r--. 1 root root 0 Feb 17 23:14 default.addnhosts -rw-r--r--. 1 root root 0 Feb 17 23:14 default.hostsfile -rw-r--r--. 1 root root 0 Feb 4 10:03 default.leases In addition, system log threw the following error message at the time when I restarted dhcp agent: Feb 17 23:37:19 as-net1 kernel: type=1400 audit(1361162239.626:560): avc: denied { read } for pid=13252 comm="dnsmasq" name="sh" dev=dm-0 ino=1572867 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=lnk_file This is SELinux...you can make sure selinux is in permissive mode Feb 17 23:37:19 as-net1 dnsmasq[13251]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory Feb 17 23:37:19 as-net1 dnsmasq[13251]: FAILED to start up Feb 17 23:37:22 as-net1 dnsmasq[13297]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory Feb 17 23:37:22 as-net1 dnsmasq[13297]: FAILED to start up When I tried to execute the script manually, it gave me this traceback?.. [dmd at as-net1 bin]$ /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update Traceback (most recent call last): File "/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update", line 20, in dhcp.Dnsmasq.lease_update() File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 341, in lease_update action = sys.argv[1] IndexError: list index out of range This may be related to the above, can take a deeper look shortly. thanks, -chris -------------- next part -------------- An HTML attachment was scrubbed... 
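[Editor's note: the SELinux mode change discussed in this thread is normally done with setenforce plus an edit to /etc/selinux/config. A hedged sketch; the live commands need a host with SELinux, so they are commented and the config edit is shown on a local copy of the file.]

```shell
# getenforce            # prints Enforcing / Permissive / Disabled
# setenforce 0          # permissive until the next reboot (RHEL 6)

# Persisting the mode means editing /etc/selinux/config; simulated here
# on a local copy so the edit itself can be demonstrated:
cat > selinux.conf <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
sed -i 's/^SELINUX=.*/SELINUX=permissive/' selinux.conf
grep '^SELINUX=' selinux.conf

# A longer-term fix is a local policy module built from the AVC denials
# (assumes the audit and policycoreutils-python packages):
# grep dnsmasq /var/log/audit/audit.log | audit2allow -M dnsmasq_local
# semodule -i dnsmasq_local.pp
```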
URL: From shshang at cisco.com Tue Feb 19 00:50:20 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Tue, 19 Feb 2013 00:50:20 +0000 Subject: [rhos-list] dnsmasq cannot start properly In-Reply-To: <6190AA83EB69374DABAE074D7E900F7512466564@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> <20130218201730.GE26800@x200.localdomain> <6190AA83EB69374DABAE074D7E900F7512466564@xmb-aln-x13.cisco.com> Message-ID: <6190AA83EB69374DABAE074D7E900F7512466AF5@xmb-aln-x13.cisco.com> Hi, Chris: If I completely disabled SELinux, then dnsmasq process could start without any error. However, my VM still couldn't obtain IP address from the DHCP server. Based on tcpdump right on tap interface to which DHCP server was hooked up, I saw inbound DHCP Discover message from VM. But I never saw DHCP OFFER message returned back to VM. I checked dnsmasq host files and it contains the right mapping between VM MAC and IP address?..Anything could go wrong? 
Feb 18 19:26:21 as-net1 dnsmasq[2888]: started, version 2.48 cachesize 150 Feb 18 19:26:21 as-net1 dnsmasq[2888]: compile time options: IPv6 GNU-getopt DBus no-I18N DHCP TFTP Feb 18 19:26:21 as-net1 dnsmasq-dhcp[2888]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h Feb 18 19:26:21 as-net1 dnsmasq[2888]: using local addresses only for unqualified names Feb 18 19:26:21 as-net1 dnsmasq[2888]: reading /etc/resolv.conf Feb 18 19:26:21 as-net1 dnsmasq[2888]: using nameserver 10.122.88.11#53 Feb 18 19:26:21 as-net1 dnsmasq[2888]: using nameserver 7.10.177.251#53 Feb 18 19:26:21 as-net1 dnsmasq[2888]: using local addresses only for unqualified names Feb 18 19:26:21 as-net1 dnsmasq[2888]: read /etc/hosts - 2 addresses Feb 18 19:26:21 as-net1 dnsmasq[2888]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses Feb 18 19:26:21 as-net1 dnsmasq[2888]: read /var/lib/libvirt/dnsmasq/default.hostsfile Feb 18 19:27:53 as-net1 dnsmasq[3596]: started, version 2.48 cachesize 150 Feb 18 19:27:53 as-net1 dnsmasq[3596]: compile time options: IPv6 GNU-getopt DBus no-I18N DHCP TFTP Feb 18 19:27:53 as-net1 dnsmasq[3596]: warning: no upstream servers configured Feb 18 19:27:53 as-net1 dnsmasq-dhcp[3596]: DHCP, static leases only on 192.168.178.0, lease time 2m Feb 18 19:27:53 as-net1 dnsmasq[3596]: cleared cache Feb 18 19:27:53 as-net1 dnsmasq[3596]: read /var/lib/quantum/dhcp/6462a2a6-28cc-4472-907e-34bf02c9e81e/host Feb 18 19:27:53 as-net1 dnsmasq[3596]: read /var/lib/quantum/dhcp/6462a2a6-28cc-4472-907e-34bf02c9e81e/opts Thanks! Shixiong On Feb 18, 2013, at 5:09 PM, "Shixiong Shang (shshang)" > wrote: Hi, Chris: Thanks a lot for the quick response! I turned the SELinux to "Permissive" mode and now the error messages are different. Seems like the "dnsmasq" process still has hard time to access some files. But the good news is, at least the process started and loaded the right files from "/var/lib/quantum/dhcp" directory. 
dmd at as-net1 ~]$ sudo tail -n 200 /var/log/messages | grep dnsmasq Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:42): avc: denied { read } for pid=16963 comm="dnsmasq" name="sh" dev=dm-0 ino=1572867 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=lnk_file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:43): avc: denied { execute } for pid=16963 comm="dnsmasq" name="bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:44): avc: denied { read open } for pid=16963 comm="dnsmasq" name="bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:45): avc: denied { execute_no_trans } for pid=16963 comm="dnsmasq" path="/bin/bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.261:46): avc: denied { getattr } for pid=16963 comm="sh" path="/bin/bash" dev=dm-0 ino=1572905 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:shell_exec_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.262:47): avc: denied { execute } for pid=16963 comm="sh" name="quantum-dhcp-agent-dnsmasq-lease-update" dev=dm-0 ino=2102246 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.262:48): avc: denied { read open } for pid=16963 comm="sh" name="quantum-dhcp-agent-dnsmasq-lease-update" dev=dm-0 ino=2102246 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=file Feb 18 12:41:31 as-net1 kernel: type=1400 audit(1361209291.262:49): avc: denied { execute_no_trans } 
for pid=16963 comm="sh" path="/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update" dev=dm-0 ino=2102246 scontext=unconfined_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=file Feb 18 12:41:31 as-net1 dnsmasq[16966]: started, version 2.48 cachesize 150 Feb 18 12:41:31 as-net1 dnsmasq[16966]: compile time options: IPv6 GNU-getopt DBus no-I18N DHCP TFTP Feb 18 12:41:31 as-net1 dnsmasq[16966]: warning: no upstream servers configured Feb 18 12:41:31 as-net1 dnsmasq-dhcp[16966]: DHCP, static leases only on 192.168.178.0, lease time 2m Feb 18 12:41:31 as-net1 dnsmasq[16966]: cleared cache Feb 18 12:41:31 as-net1 dnsmasq[16966]: read /var/lib/quantum/dhcp/6462a2a6-28cc-4472-907e-34bf02c9e81e/host Feb 18 12:41:31 as-net1 dnsmasq[16966]: read /var/lib/quantum/dhcp/6462a2a6-28cc-4472-907e-34bf02c9e81e/opts [dmd at as-net1 ~]$ Shixiong On Feb 18, 2013, at 3:17 PM, Chris Wright > wrote: * Shixiong Shang (shshang) (shshang at cisco.com) wrote: Hi, guys: I am using dnsmasq as DHCP server to assign IP address to VMs. The "dnsmasq" process seemed to start ok. nobody 2919 1 0 23:16 ? 00:00:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts This is the libvirt dnsmasq for running locally (un)managed libvirt based VMs, NAT'd to the host interface. However, I noticed that, all three config files referred by dnsmasq process were all empty. Based on dhcp_agent.ini file, dnsmasq should go to /var/lib/quantum for config files?.Why did they load files from /var/lib/libvirt/dnsmasq? 
[root at as-net1 bin]# cd /var/lib/libvirt/dnsmasq/ [root at as-net1 dnsmasq]# ls -lh total 0 -rw-r--r--. 1 root root 0 Feb 17 23:14 default.addnhosts -rw-r--r--. 1 root root 0 Feb 17 23:14 default.hostsfile -rw-r--r--. 1 root root 0 Feb 4 10:03 default.leases In addition, system log threw the following error message at the time when I restarted dhcp agent: Feb 17 23:37:19 as-net1 kernel: type=1400 audit(1361162239.626:560): avc: denied { read } for pid=13252 comm="dnsmasq" name="sh" dev=dm-0 ino=1572867 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:bin_t:s0 tclass=lnk_file This is SELinux...you can make sure selinux is in permissive mode Feb 17 23:37:19 as-net1 dnsmasq[13251]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory Feb 17 23:37:19 as-net1 dnsmasq[13251]: FAILED to start up Feb 17 23:37:22 as-net1 dnsmasq[13297]: cannot run lease-init script /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update: No such file or directory Feb 17 23:37:22 as-net1 dnsmasq[13297]: FAILED to start up When I tried to execute the script manually, it gave me this traceback?.. [dmd at as-net1 bin]$ /usr/bin/quantum-dhcp-agent-dnsmasq-lease-update Traceback (most recent call last): File "/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update", line 20, in dhcp.Dnsmasq.lease_update() File "/usr/lib/python2.6/site-packages/quantum/agent/linux/dhcp.py", line 341, in lease_update action = sys.argv[1] IndexError: list index out of range This may be related to the above, can take a deeper look shortly. thanks, -chris _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pmyers at redhat.com Tue Feb 19 00:58:14 2013 From: pmyers at redhat.com (Perry Myers) Date: Mon, 18 Feb 2013 19:58:14 -0500 Subject: [rhos-list] dnsmasq cannot start properly In-Reply-To: <6190AA83EB69374DABAE074D7E900F7512466AF5@xmb-aln-x13.cisco.com> References: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> <20130218201730.GE26800@x200.localdomain> <6190AA83EB69374DABAE074D7E900F7512466564@xmb-aln-x13.cisco.com> <6190AA83EB69374DABAE074D7E900F7512466AF5@xmb-aln-x13.cisco.com> Message-ID: <5122CE26.7070904@redhat.com> On 02/18/2013 07:50 PM, Shixiong Shang (shshang) wrote: > Hi, Chris: > > If I completely disabled SELinux, then dnsmasq process could start > without any error. However, my VM still couldn't obtain IP address from > the DHCP server. Based on tcpdump right on tap interface to which DHCP > server was hooked up, I saw inbound DHCP Discover message from VM. But I > never saw DHCP OFFER message returned back to VM. I checked dnsmasq host > files and it contains the right mapping between VM MAC and IP > address?..Anything could go wrong? Maybe you're hitting this error? https://bugzilla.redhat.com/show_bug.cgi?id=889868 Gary is working on that one, and we hope to have a fix in the next build of quantum to put out on RHN/CDN. There's an upstream patch in review: https://review.openstack.org/#/c/22183/ From shshang at cisco.com Tue Feb 19 01:14:15 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Tue, 19 Feb 2013 01:14:15 +0000 Subject: [rhos-list] dnsmasq cannot start properly In-Reply-To: <5122CE26.7070904@redhat.com> References: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> <20130218201730.GE26800@x200.localdomain> <6190AA83EB69374DABAE074D7E900F7512466564@xmb-aln-x13.cisco.com> <6190AA83EB69374DABAE074D7E900F7512466AF5@xmb-aln-x13.cisco.com> <5122CE26.7070904@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F7512466C77@xmb-aln-x13.cisco.com> Hi, Perry: Thanks a bunch for the pointer. 
I exchanged a couple of emails with Gary on this issue, which occurs on the
compute node. We have a workaround/solution in place right now, so it is not
a road blocker for us. I will continue working with Gary until it is
resolved and verified.

In my configuration, I separated all quantum functions from compute onto a
separate machine, i.e. as-net1. The default iptables rules happen to allow
DHCP packets in the inbound direction. Beyond that point, I verified at the
OVS bridges (both br-eth1 and br-int) that the DHCP DISCOVER message was
passed upstream all the way to the tap interface. As a result, I expected
dnsmasq to return the IP assignment conveyed by a DHCP OFFER message.
However, it never happened. I would like to make sure there is no
misconfiguration on my side.

Thanks again!

Shixiong

On Feb 18, 2013, at 7:58 PM, Perry Myers > wrote:

On 02/18/2013 07:50 PM, Shixiong Shang (shshang) wrote:
Hi, Chris:

If I completely disabled SELinux, then dnsmasq process could start
without any error. However, my VM still couldn't obtain IP address from
the DHCP server. Based on tcpdump right on tap interface to which DHCP
server was hooked up, I saw inbound DHCP Discover message from VM. But I
never saw DHCP OFFER message returned back to VM. I checked dnsmasq host
files and it contains the right mapping between VM MAC and IP
address... Anything could go wrong?

Maybe you're hitting this error?
https://bugzilla.redhat.com/show_bug.cgi?id=889868

Gary is working on that one, and we hope to have a fix in the next build
of quantum to put out on RHN/CDN.

There's an upstream patch in review:
https://review.openstack.org/#/c/22183/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shshang at cisco.com Tue Feb 19 04:43:44 2013
From: shshang at cisco.com (Shixiong Shang (shshang))
Date: Tue, 19 Feb 2013 04:43:44 +0000
Subject: [rhos-list] dnsmasq cannot start properly
In-Reply-To: <5122CE26.7070904@redhat.com>
References: <6190AA83EB69374DABAE074D7E900F75124647E7@xmb-aln-x13.cisco.com> <20130218201730.GE26800@x200.localdomain> <6190AA83EB69374DABAE074D7E900F7512466564@xmb-aln-x13.cisco.com> <6190AA83EB69374DABAE074D7E900F7512466AF5@xmb-aln-x13.cisco.com> <5122CE26.7070904@redhat.com>
Message-ID: <6190AA83EB69374DABAE074D7E900F75124673EC@xmb-aln-x13.cisco.com>

Hi, Perry:

I guess I said "the iptables rules allow DHCP" too early. :) I read the
iptables entries one more time, and here is the problem on the quantum node
running the DHCP agent.

[dmd at as-net1 ~]$ sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source      destination
quantum-l3-agent-INPUT  all  --  0.0.0.0/0  0.0.0.0/0
ACCEPT     udp  --  0.0.0.0/0   0.0.0.0/0   udp dpt:53
ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:53
ACCEPT     udp  --  0.0.0.0/0   0.0.0.0/0   udp dpt:67
ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:67
ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0   0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:22
ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:9696
REJECT     all  --  0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

Chain quantum-l3-agent-INPUT (1 references)
target     prot opt source      destination
ACCEPT     tcp  --  0.0.0.0/0   7.10.177.2  tcp dpt:8775

By this default setting, a DHCP request from a remote host (i.e.
nova-compute) destined to UDP/67 is ACCEPTED, as we saw previously.
However, the DHCP OFFER message, which is sent to UDP/68, is actually
blocked. Considering this bidirectional communication, I added the
following line to the "quantum-l3-agent-INPUT" chain, and now my VM got an
IP address!!!!
iptables -A quantum-l3-agent-INPUT -p udp --dport 67:68 --sport 67:68 -j ACCEPT

[dmd at as-net1 ~]$ sudo tcpdump -vvn -i tap335cc0a4-c7 | grep BOOTP/DHCP
tcpdump: listening on tap335cc0a4-c7, link-type EN10MB (Ethernet), capture size 65535 bytes
0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from fa:16:3e:98:7d:98, length 300, xid 0xa7c6d351, secs 3, Flags [none] (0x0000)
0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from fa:16:3e:98:7d:98, length 300, xid 0xa7c6d351, secs 3, Flags [none] (0x0000)
192.168.178.2.bootps > 192.168.178.3.bootpc: [udp sum ok] BOOTP/DHCP, Reply, length 323, xid 0xa7c6d351, secs 3, Flags [none] (0x0000)
192.168.178.2.bootps > 192.168.178.3.bootpc: [udp sum ok] BOOTP/DHCP, Reply, length 323, xid 0xa7c6d351, secs 3, Flags [none] (0x0000)

Would you please fix this problem in the bug you referred to, or maybe
create a new one for tracking purposes? In the meantime, we still need to
understand the SELinux problem mentioned earlier in the email thread. Right
now, my SELinux is disabled, which is not desired.

Thank you for the pointer, which truly forced me to think about it twice!
Good night!

Shixiong

On Feb 18, 2013, at 7:58 PM, Perry Myers > wrote:

On 02/18/2013 07:50 PM, Shixiong Shang (shshang) wrote:
Hi, Chris:

If I completely disabled SELinux, then dnsmasq process could start
without any error. However, my VM still couldn't obtain IP address from
the DHCP server. Based on tcpdump right on tap interface to which DHCP
server was hooked up, I saw inbound DHCP Discover message from VM. But I
never saw DHCP OFFER message returned back to VM. I checked dnsmasq host
files and it contains the right mapping between VM MAC and IP
address... Anything could go wrong?

Maybe you're hitting this error?
https://bugzilla.redhat.com/show_bug.cgi?id=889868

Gary is working on that one, and we hope to have a fix in the next build
of quantum to put out on RHN/CDN.
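The port asymmetry behind the iptables fix discussed above is easy to lose track of: DHCP clients send from UDP 68 to UDP 67, and servers reply from 67 to 68, so a chain that only accepts dport 67 passes the DISCOVER but rejects the OFFER. A tiny classifier (purely illustrative, not part of any OpenStack tool) makes the two directions explicit:

```shell
# Classify a UDP packet's DHCP direction from its source and destination
# ports. An INPUT chain that accepts only dpt:67 handles the first case
# and drops the second -- exactly the failure described in this thread.
dhcp_direction() {
    case "$1:$2" in
        68:67) echo "client->server (DISCOVER/REQUEST, matched by dpt:67)" ;;
        67:68) echo "server->client (OFFER/ACK, needs dpt:68 accepted too)" ;;
        *)     echo "not DHCP" ;;
    esac
}

dhcp_direction 68 67   # the inbound request the default rules accept
dhcp_direction 67 68   # the reply that was hitting the final REJECT
```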
There's an upstream patch in review: https://review.openstack.org/#/c/22183/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Tue Feb 19 05:38:34 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Tue, 19 Feb 2013 00:38:34 -0500 Subject: [rhos-list] dnsmasq cannot start properly In-Reply-To: <6190AA83EB69374DABAE074D7E900F75124673EC@xmb-aln-x13.cisco.com> Message-ID: <51230fdc.88c7ec0a.4ad4.0fd8@mx.google.com> An HTML attachment was scrubbed... URL: From christopher.cobb at nesassociates.com Tue Feb 19 17:27:07 2013 From: christopher.cobb at nesassociates.com (Christopher Cobb) Date: Tue, 19 Feb 2013 17:27:07 +0000 Subject: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 In-Reply-To: <511E5EF7.205@redhat.com> References: <566CCFB492940B4DA07EB97FE99C9D7F0C786B@nes-exdb-01.nesassociates.com> <511E5919.40509@redhat.com> <566CCFB492940B4DA07EB97FE99C9D7F0C788E@nes-exdb-01.nesassociates.com> <511E5EF7.205@redhat.com> Message-ID: <566CCFB492940B4DA07EB97FE99C9D7F0C8A78@nes-exdb-01.nesassociates.com> You're right, I had installed the 32-bit version of RHEL. I have installed the 64-bit version and the problem has gone away. Thank you! -----Original Message----- From: Perry Myers [mailto:pmyers at redhat.com] Sent: Friday, February 15, 2013 11:15 AM To: Christopher Cobb Cc: Bryan Kearney; rhos-list at redhat.com Subject: Re: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22 On 02/15/2013 10:56 AM, Christopher Cobb wrote: > Thank you. It's an Atom N2600. According to this spec sheet: > > http://ark.intel.com/products/58916/Intel-Atom-Processor-N2600-%281M-C > ache-1_6-GHz%29 > > it should have "Intel 64" architecture. I guess not everyone > agrees... :( Even if the system is x86_64 capable, you need to install the x86_64 version of RHEL to utilize that. 
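A quick way to confirm which situation you are in (64-bit-capable hardware vs. the architecture actually installed) is to check what the running kernel reports; this is a hedged sketch using standard commands, and the release-package name is an assumption that may differ between RHEL variants:

```shell
# The running kernel's architecture: i686 here means a 32-bit install,
# even if the CPU itself advertises Intel 64 / x86_64 support.
uname -m

# Architecture of the installed release package (assumes an RPM-based
# system; the package name "redhat-release-server" is an assumption and
# may differ on your variant). Suppress errors on non-RPM systems.
rpm -q --qf '%{ARCH}\n' redhat-release-server 2>/dev/null || true
```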
It's possible to install the i686 version of RHEL on top of an x86_64
machine, and it appears that is what you may have done in this case.

> -----Original Message-----
> From: rhos-list-bounces at redhat.com
> [mailto:rhos-list-bounces at redhat.com] On Behalf Of Bryan Kearney
> Sent: Friday, February 15, 2013 10:50 AM
> To: rhos-list at redhat.com
> Subject: Re: [rhos-list] yum repolist: [Errno 14] PYCURL ERROR 22
>
> On 02/15/2013 10:42 AM, Christopher Cobb wrote:
>> Good day, All!
>>
>> I just did a fresh install of RHEL 6.3 and am working through the Getting Started Guide. I'm on page 16, where it asks me to do: yum repolist. I get the following results:
>>
>> # yum repolist
>> Loaded plugins: product-id, subscription-manager
>> Updating certificate-based repositories.
>> rhel-6-server-beta-rpms                    | 3.4 kB  00:00
>> rhel-6-server-beta-rpms/primary_db         | 1.2 MB  00:02
>> rhel-6-server-cf-tools-1-rpms              | 2.8 kB  00:00
>> rhel-6-server-cf-tools-1-rpms/primary_db   |  18 kB  00:00
>> rhel-6-server-rhev-agent-rpms              | 2.8 kB  00:00
>> rhel-6-server-rhev-agent-rpms/primary_db   |  11 kB  00:00
>> rhel-6-server-rpms                         | 3.7 kB  00:00
>> rhel-6-server-rpms/primary_db              |  16 MB  00:41
>> https://cdn.redhat.com/content/dist/rhel/server/6/6Server/i386/openstack/folsom/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404"
>> Trying other mirror.
>> repo id                        repo name                                                           status
>> rhel-6-server-beta-rpms        Red Hat Enterprise Linux 6 Server Beta (RPMs)                       831
>> rhel-6-server-cf-tools-1-rpms  Red Hat CloudForms Tools for RHEL 6 (RPMs)                          30
>> rhel-6-server-rhev-agent-rpms  Red Hat Enterprise Virtualization Agents for RHEL 6 Server (RPMs)  16
>> rhel-6-server-rpms             Red Hat Enterprise Linux 6 Server (RPMs)                            6,812
>> rhel-server-ost-6-folsom-rpms  Red Hat OpenStack Folsom Preview (RPMs)                             0
>> repolist: 7,689
>>
>> I've tried yum clean metadata and yum clean all. They execute without error but I still get the same results with yum repolist.
>>
>> I can't figure out what I've got wrong. Any suggestions?
>>
>> cc
>
> Are you installed on an i386 machine? If so, the bits are not available there (I think). You need to deploy on x86_64.
>
> -- bk
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list
>

From k.mohammad1 at physics.ox.ac.uk Fri Feb 22 00:01:22 2013
From: k.mohammad1 at physics.ox.ac.uk (Kashif Mohammad)
Date: Fri, 22 Feb 2013 00:01:22 +0000
Subject: [rhos-list] starting openstack folsom preview
Message-ID: <88B17E26E0A9F94381C67535AEF2BB5E798DD7F9@EXCHNG13.physics.ox.ac.uk>

Hi

I am trying to install OpenStack Folsom with quantum and openvswitch. I have a few questions.

We have an institutional subscription with Red Hat, and I requested the OpenStack Folsom preview subscription.
I got a mail saying that my subscription is in process, but I didn't get any mail confirming the subscription.

Where is the openvswitch RPM? I can't see it in RHEL 6.4. Is it in the OpenStack Folsom repo?

Do RPMs from the OpenStack Folsom repo move to the EPEL repo at a later stage?

Thanks

Kashif

From pmyers at redhat.com Fri Feb 22 00:29:55 2013
From: pmyers at redhat.com (Perry Myers)
Date: Thu, 21 Feb 2013 19:29:55 -0500
Subject: [rhos-list] starting openstack folsom preview
In-Reply-To: <88B17E26E0A9F94381C67535AEF2BB5E798DD7F9@EXCHNG13.physics.ox.ac.uk>
References: <88B17E26E0A9F94381C67535AEF2BB5E798DD7F9@EXCHNG13.physics.ox.ac.uk>
Message-ID: <5126BC03.6010303@redhat.com>

On 02/21/2013 07:01 PM, Kashif Mohammad wrote:
>
> Hi
>
> I am trying to install openstack folsom with quantum and openvswitch. I have a few questions.
>
> We have an institutional subscription with Red Hat, and I requested the OpenStack Folsom preview subscription. I got a mail saying that my subscription is in process, but I didn't get any mail confirming the subscription.
> Where is the openvswitch RPM? I can't see it in RHEL 6.4. Is it in the OpenStack Folsom repo?
> Do RPMs from the OpenStack Folsom repo move to the EPEL repo at a later stage?

What you've run into is a known issue with the system that Red Hat is
using to process submissions for software evaluations. (The issue, fwiw,
is not Red Hat OpenStack specific.)

I've cc'd some folks from our business team that can assist.

Thanks,

Perry

From pmyers at redhat.com Fri Feb 22 02:03:54 2013
From: pmyers at redhat.com (Perry Myers)
Date: Thu, 21 Feb 2013 21:03:54 -0500
Subject: [rhos-list] Discussion about quantum + iptables and libvirt-vif
In-Reply-To: <6190AA83EB69374DABAE074D7E900F751246E687@xmb-aln-x13.cisco.com>
References: <6190AA83EB69374DABAE074D7E900F751246E687@xmb-aln-x13.cisco.com>
Message-ID: <5126D20A.4000604@redhat.com>

Bringing a useful thread from off-list to the list, in case the info is
helpful for others

>> After I updated the system to the latest release today, I immediately
>> noticed a change on the nova-compute side. With the new release, now if I
>> try to spawn up a VM, the libvirt.xml contains a tap interface as
>> "target dev", which never happened before. I can also see this
>> interface by the ifconfig command in Linux.
>>
>>
>>
>>
>>
> filter="nova-instance-instance-0000000f-fa163ef679d7">
>>
>>
>>
>>
>
> Towards the end of the Folsom release we discovered a number of
> problems with the OVS VIF driver and the Nova security groups. In
> short, the traffic was bypassing the security groups. The solution was
> to add in a bridge that would enforce the security groups (this is
> done by the OVSHybrid driver).
> There is a tap device from the VM which is connected to a bridge,
> which is connected via a veth pair to the OVS bridge.
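The device names in this hybrid bridge chain are all derived from the Quantum port UUID; Nova truncates the ID to 11 characters so the names fit the Linux interface-name length limit. A small helper (purely illustrative, not part of any OpenStack tool) makes the mapping explicit; the port ID below is the one from the example in this thread:

```shell
# Given a Quantum port ID, print the hybrid-VIF chain for that port:
# tap (VM side) -> qbr (Linux bridge enforcing security groups) ->
# qvb/qvo veth pair -> br-int (the OVS integration bridge).
# The 11-character truncation mirrors Nova's interface naming.
vif_chain() {
    id=$(printf '%s' "$1" | cut -c1-11)
    echo "tap${id} -> qbr${id} -> qvb${id}<->qvo${id} -> br-int"
}

vif_chain 3bdb52c4-e9
```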
> So for example one would have:
>
> qbr3bdb52c4-e9    8000.96fcd9e833df    no    qvb3bdb52c4-e9
>                                              tap3bdb52c4-e9
>
> And:
>
> [root at dhcp-4-83 ~(keystone_joe)]$ ovs-vsctl show
> 1a06b182-8419-4612-9a6e-3e4b7c75570d
>     Bridge br-int
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "tap77830a12-b6"
>             tag: 1
>             Interface "tap77830a12-b6"
>                 type: internal
>         Port "qvo3bdb52c4-e9"
>             tag: 1
>             Interface "qvo3bdb52c4-e9"
>
> This is for:
>
>
>
>
>
>> I am wondering what's the main reason to add this tap interface?
>> Furthermore, what's the implication on the traffic flow in and out of
>> the VM? For example, does an iptables rule work on this tap interface?
>> If so, which chain, INPUT/FORWARD/OUTPUT? I realized that now DHCP,
>> which used to work last night, stopped working. The DHCP request from
>> the VM is not handed over from the Linux bridge to the OVS bridge. No
>> iptables rule is helpful... :(
>> Do you think it is correlated to this change?
>
> To be honest, I think this is one of the problems that we have with
> the iptables rules at the moment. This is still under investigation.
> The mail I sent you yesterday may have been incorrect when it comes
> to the iptables rules.
>
> With the Linux bridge it was working for me, and I have problems with
> the OVS with today's installation.
> One thing for sure is that there is a performance penalty for the
> additional bridges.
>
> Hope that the explanation helps.

From llange at redhat.com Fri Feb 22 11:18:01 2013
From: llange at redhat.com (Lutz Lange)
Date: Fri, 22 Feb 2013 12:18:01 +0100
Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories?
Message-ID: <512753E9.7020704@redhat.com>

Hi list,

i'm trying to get OpenStack installed on RHEL 6.4.
I did register my systems with subscription-manager and have the entitlements:

[root at hv01 ~]# subscription-manager list --consumed
+-------------------------------------------+
    Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Employee Subscription
Provides:          Red Hat Enterprise Linux Server
                   Red Hat Enterprise Linux Resilient Storage (for RHEL Server)
                   Red Hat Enterprise Linux Load Balancer (for RHEL Server)
                   Red Hat Enterprise Linux High Availability (for RHEL Server)
                   Red Hat Enterprise Linux Workstation
SKU:               SYS0395
Contract:          3230762
Account:           1495773
Serial Number:     4011105903681551707
Active:            True
Quantity Used:     1
Service Level:     None
Service Type:      None
Starts:            28/09/12
Ends:              28/09/13

Subscription Name: Red Hat OpenStack Tech Preview
Provides:          Red Hat OpenStack
                   Red Hat Enterprise Linux Server
SKU:               SER0406
Contract:          10116941
Account:           1495773
Serial Number:     3278350936972371219
Active:            True
Quantity Used:     1
Service Level:     None
Service Type:      None
Starts:            21/02/13
Ends:              22/05/13

But I can't access any OpenStack repositories:

[root at hv01 ~]# yum-config-manager | grep '^name ='
name = Red Hat CloudForms Tools for RHEL 6 (RPMs)
name = Red Hat Enterprise Virtualization Agents for RHEL 6 Server (RPMs)
name = Red Hat Enterprise Linux 6 Server (RPMs)
name = Red Hat Enterprise Linux High Availability (for RHEL 6 Server) (RPMs)
name = Red Hat Enterprise Linux Load Balancer (for RHEL 6 Server) (RPMs)
name = Red Hat Enterprise Linux Resilient Storage (for RHEL 6 Server) (RPMs)

What did I miss?

Cheers & Tx

Lutz

-- 
Lutz Lange, RHCA                 lutz at redhat.com
Solution Architect               Red Hat GmbH
Wankelstrasse 5                  Cell : +49 172 75 285 17
D-70563 Stuttgart
____________________________________________________________________
Reg.
Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 11-15 D-85630 Grasbrunn Handelsregister: Amtsgericht Muenchen HRB 153243 Geschaeftsfuehrer: Mark Hegarty, Charlie Peters, Michael Cunningham, Charles Cachera -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 263 bytes Desc: OpenPGP digital signature URL: From sgordon at redhat.com Fri Feb 22 12:31:09 2013 From: sgordon at redhat.com (Steve Gordon) Date: Fri, 22 Feb 2013 07:31:09 -0500 (EST) Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? In-Reply-To: <512753E9.7020704@redhat.com> Message-ID: <1801628107.6854511.1361536269000.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Lutz Lange" > To: rhos-list at redhat.com > Sent: Friday, February 22, 2013 6:18:01 AM > Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? > > Hi list, > > i'm trying to get OpenStack installed on RHEL 6.4. 
> > I did register my systems with system-manager and have the > entitlements : > > *root at hv01 ~]# subscription-manager list --consumed* > +-------------------------------------------+ > Consumed Subscriptions > +-------------------------------------------+ > Subscription Name: Red Hat Employee Subscription > Provides: Red Hat Enterprise Linux Server > Red Hat Enterprise Linux Resilient Storage > (for RHEL Server) > Red Hat Enterprise Linux Load Balancer (for > RHEL Server) > Red Hat Enterprise Linux High Availability > (for RHEL Server) > Red Hat Enterprise Linux Workstation > SKU: SYS0395 > Contract: 3230762 > Account: 1495773 > Serial Number: 4011105903681551707 > Active: True > Quantity Used: 1 > Service Level: None > Service Type: None > Starts: 28/09/12 > Ends: 28/09/13 > > Subscription Name: Red Hat OpenStack Tech Preview > Provides: Red Hat OpenStack > Red Hat Enterprise Linux Server > SKU: SER0406 > Contract: 10116941 > Account: 1495773 > Serial Number: 3278350936972371219 > Active: True > Quantity Used: 1 > Service Level: None > Service Type: None > Starts: 21/02/13 > Ends: 22/05/13 > > But i can't access any OpenStack repositories :* > * > > *[root at hv01 ~]# yum-config-manager | grep '^name ='* > name = Red Hat CloudForms Tools for RHEL 6 (RPMs) > name = Red Hat Enterprise Virtualization Agents for RHEL 6 Server > (RPMs) > name = Red Hat Enterprise Linux 6 Server (RPMs) > name = Red Hat Enterprise Linux High Availability (for RHEL 6 Server) > (RPMs) > name = Red Hat Enterprise Linux Load Balancer (for RHEL 6 Server) > (RPMs) > name = Red Hat Enterprise Linux Resilient Storage (for RHEL 6 Server) > (RPMs) > > What did i miss? > > Cheers & Tx > Lutz Hi Lutz, Have you tried running a 'yum repolist' since adding the subscriptions using subscription manager? 
Thanks, Steve From llange at redhat.com Fri Feb 22 12:45:10 2013 From: llange at redhat.com (Lutz Lange) Date: Fri, 22 Feb 2013 13:45:10 +0100 Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? In-Reply-To: <1801628107.6854511.1361536269000.JavaMail.root@redhat.com> References: <1801628107.6854511.1361536269000.JavaMail.root@redhat.com> Message-ID: <51276856.7010000@redhat.com> Hi Steve, i did a # yum clean all and # yum repolist both with no help, the folsom repo is not available ... Cheers Lutz On 22/02/13 13:31, Steve Gordon wrote: > ----- Original Message ----- >> From: "Lutz Lange" >> To: rhos-list at redhat.com >> Sent: Friday, February 22, 2013 6:18:01 AM >> Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? >> >> Hi list, >> >> i'm trying to get OpenStack installed on RHEL 6.4. >> >> I did register my systems with system-manager and have the >> entitlements : >> >> *root at hv01 ~]# subscription-manager list --consumed* >> +-------------------------------------------+ >> Consumed Subscriptions >> +-------------------------------------------+ >> Subscription Name: Red Hat Employee Subscription >> Provides: Red Hat Enterprise Linux Server >> Red Hat Enterprise Linux Resilient Storage >> (for RHEL Server) >> Red Hat Enterprise Linux Load Balancer (for >> RHEL Server) >> Red Hat Enterprise Linux High Availability >> (for RHEL Server) >> Red Hat Enterprise Linux Workstation >> SKU: SYS0395 >> Contract: 3230762 >> Account: 1495773 >> Serial Number: 4011105903681551707 >> Active: True >> Quantity Used: 1 >> Service Level: None >> Service Type: None >> Starts: 28/09/12 >> Ends: 28/09/13 >> >> Subscription Name: Red Hat OpenStack Tech Preview >> Provides: Red Hat OpenStack >> Red Hat Enterprise Linux Server >> SKU: SER0406 >> Contract: 10116941 >> Account: 1495773 >> Serial Number: 3278350936972371219 >> Active: True >> Quantity Used: 1 >> Service Level: None >> 
Service Type: None >> Starts: 21/02/13 >> Ends: 22/05/13 >> >> But i can't access any OpenStack repositories :* >> * >> >> *[root at hv01 ~]# yum-config-manager | grep '^name ='* >> name = Red Hat CloudForms Tools for RHEL 6 (RPMs) >> name = Red Hat Enterprise Virtualization Agents for RHEL 6 Server >> (RPMs) >> name = Red Hat Enterprise Linux 6 Server (RPMs) >> name = Red Hat Enterprise Linux High Availability (for RHEL 6 Server) >> (RPMs) >> name = Red Hat Enterprise Linux Load Balancer (for RHEL 6 Server) >> (RPMs) >> name = Red Hat Enterprise Linux Resilient Storage (for RHEL 6 Server) >> (RPMs) >> >> What did i miss? >> >> Cheers & Tx >> Lutz > Hi Lutz, > > Have you tried running a 'yum repolist' since adding the subscriptions using subscription manager? > > Thanks, > > Steve -- Lutz Lange, RHCA lutz at redhat.com Solution Architect Red Hat GmbH Wankelstrasse 5 Cell : +49 172 75 285 17 D-70563 Stuttgart ____________________________________________________________________ Reg. Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 11-15 D-85630 Grasbrunn Handelsregister: Amtsgericht Muenchen HRB 153243 Geschaeftsfuehrer: Mark Hegarty, Charlie Peters, Michael Cunningham, Charles Cachera -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 263 bytes Desc: OpenPGP digital signature URL: From llange at redhat.com Fri Feb 22 15:04:48 2013 From: llange at redhat.com (Lutz Lange) Date: Fri, 22 Feb 2013 16:04:48 +0100 Subject: [rhos-list] Red Hat OpenStack Guide? Message-ID: <51278910.1070901@redhat.com> Hi there, i started playing with our Folsom packages and used our quickstart guide for that. What i found strange was, that we did not separate cloud controler & compute nodes. Do we have a guide on how to setup more that one node that does everything? 
Cheers Lutz -- Lutz Lange, RHCA lutz at redhat.com Solution Architect Red Hat GmbH Wankelstrasse 5 Cell : +49 172 75 285 17 D-70563 Stuttgart ____________________________________________________________________ Reg. Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 11-15 D-85630 Grasbrunn Handelsregister: Amtsgericht Muenchen HRB 153243 Geschaeftsfuehrer: Mark Hegarty, Charlie Peters, Michael Cunningham, Charles Cachera -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 263 bytes Desc: OpenPGP digital signature URL: From pmyers at redhat.com Fri Feb 22 15:51:02 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 22 Feb 2013 10:51:02 -0500 Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? In-Reply-To: <51276856.7010000@redhat.com> References: <1801628107.6854511.1361536269000.JavaMail.root@redhat.com> <51276856.7010000@redhat.com> Message-ID: <512793E6.8000509@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/22/2013 07:45 AM, Lutz Lange wrote: > Hi Steve, > > i did a # yum clean all and # yum repolist > > both with no help, the folsom repo is not available ... > > Cheers Lutz > > On 22/02/13 13:31, Steve Gordon wrote: >> ----- Original Message ----- >>> From: "Lutz Lange" To: >>> rhos-list at redhat.com Sent: Friday, February 22, 2013 6:18:01 >>> AM Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 >>> with Subscription Manager - no Repositories? >>> >>> Hi list, >>> >>> i'm trying to get OpenStack installed on RHEL 6.4. 
>>> >>> I did register my systems with system-manager and have the >>> entitlements : >>> >>> *root at hv01 ~]# subscription-manager list --consumed* >>> +-------------------------------------------+ Consumed >>> Subscriptions +-------------------------------------------+ >>> Subscription Name: Red Hat Employee Subscription Provides: >>> Red Hat Enterprise Linux Server Red Hat Enterprise Linux >>> Resilient Storage (for RHEL Server) Red Hat Enterprise Linux >>> Load Balancer (for RHEL Server) Red Hat Enterprise Linux High >>> Availability (for RHEL Server) Red Hat Enterprise Linux >>> Workstation SKU: SYS0395 Contract: >>> 3230762 Account: 1495773 Serial Number: >>> 4011105903681551707 Active: True Quantity Used: >>> 1 Service Level: None Service Type: None >>> Starts: 28/09/12 Ends: >>> 28/09/13 >>> >>> Subscription Name: Red Hat OpenStack Tech Preview >>> Provides: Red Hat OpenStack Red Hat Enterprise >>> Linux Server SKU: SER0406 Contract: >>> 10116941 Account: 1495773 Serial Number: >>> 3278350936972371219 Active: True Quantity Used: >>> 1 Service Level: None Service Type: None >>> Starts: 21/02/13 Ends: >>> 22/05/13 >>> >>> But i can't access any OpenStack repositories :* * >>> >>> *[root at hv01 ~]# yum-config-manager | grep '^name ='* name = Red >>> Hat CloudForms Tools for RHEL 6 (RPMs) name = Red Hat >>> Enterprise Virtualization Agents for RHEL 6 Server (RPMs) name >>> = Red Hat Enterprise Linux 6 Server (RPMs) name = Red Hat >>> Enterprise Linux High Availability (for RHEL 6 Server) (RPMs) >>> name = Red Hat Enterprise Linux Load Balancer (for RHEL 6 >>> Server) (RPMs) name = Red Hat Enterprise Linux Resilient >>> Storage (for RHEL 6 Server) (RPMs) >>> >>> What did i miss? >>> >>> Cheers & Tx Lutz >> Hi Lutz, >> >> Have you tried running a 'yum repolist' since adding the >> subscriptions using subscription manager? Did you follow the instructions here, to enable the repos? 
Note, they are disabled by default: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html Specifically look for the lines: > yum-config-manager --enable rhel-server-ost-6-folsom-rpms > --setopt="rhel-server-ost-6-folsom-rpms.priority=1" If you ran this command and still don't see the repos via yum repolist, then can you provide a copy of your /etc/yum.repos.d/redhat.repo file? Perry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.13 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEYEARECAAYFAlEnk+YACgkQxdKLkeZeTz283wCeOUwmAMhyn+R7PtT+Xi7Usafz SMwAn2UOp8CFUO5s1fiaY0j7ugxtPn2Q =9V5Y -----END PGP SIGNATURE----- From pmyers at redhat.com Fri Feb 22 15:52:20 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 22 Feb 2013 10:52:20 -0500 Subject: [rhos-list] Red Hat OpenStack Guide? In-Reply-To: <51278910.1070901@redhat.com> References: <51278910.1070901@redhat.com> Message-ID: <51279434.1090301@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/22/2013 10:04 AM, Lutz Lange wrote: > Hi there, > > i started playing with our Folsom packages and used our quickstart > guide for that. > > What i found strange was, that we did not separate cloud controler > & compute nodes. Do we have a guide on how to setup more that one > node that does everything? If you use the openstack-packstack tool, it allows you to separate the controller and compute nodes. 
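As a hedged sketch of what that controller/compute split can look like: the host IPs below are placeholders, and the exact flag spelling should be checked against `man packstack` on the installed build, since it varied between early releases.

```shell
# Install packstack, then deploy across several hosts. With the
# --install-hosts form, the first host acts as the controller and the
# remaining hosts become compute nodes. Placeholder TEST-NET addresses;
# verify the flag against your packstack version before running.
yum install -y openstack-packstack
packstack --install-hosts=192.0.2.10,192.0.2.11,192.0.2.12
```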
Try: yum install openstack-packstack man packstack And it should provide you with some help Also: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/pt02.html Perry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.13 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEYEARECAAYFAlEnlDQACgkQxdKLkeZeTz0tgQCfWiZv/l8F7ZC0OAqnk2Mi/FIr j44Anip4dZIgyMem3CnOCRHel4gY3/lR =/qzb -----END PGP SIGNATURE----- From bkearney at redhat.com Fri Feb 22 15:55:48 2013 From: bkearney at redhat.com (Bryan Kearney) Date: Fri, 22 Feb 2013 10:55:48 -0500 Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? In-Reply-To: <512793E6.8000509@redhat.com> References: <1801628107.6854511.1361536269000.JavaMail.root@redhat.com> <51276856.7010000@redhat.com> <512793E6.8000509@redhat.com> Message-ID: <51279504.3030705@redhat.com> On 02/22/2013 10:51 AM, Perry Myers wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 02/22/2013 07:45 AM, Lutz Lange wrote: >> Hi Steve, >> >> i did a # yum clean all and # yum repolist >> >> both with no help, the folsom repo is not available ... >> >> Cheers Lutz >> >> On 22/02/13 13:31, Steve Gordon wrote: >>> ----- Original Message ----- >>>> From: "Lutz Lange" To: >>>> rhos-list at redhat.com Sent: Friday, February 22, 2013 6:18:01 >>>> AM Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 >>>> with Subscription Manager - no Repositories? >>>> >>>> Hi list, >>>> >>>> i'm trying to get OpenStack installed on RHEL 6.4. 
>>>> >>>> I did register my systems with system-manager and have the >>>> entitlements : >>>> >>>> *root at hv01 ~]# subscription-manager list --consumed* >>>> +-------------------------------------------+ Consumed >>>> Subscriptions +-------------------------------------------+ >>>> Subscription Name: Red Hat Employee Subscription Provides: >>>> Red Hat Enterprise Linux Server Red Hat Enterprise Linux >>>> Resilient Storage (for RHEL Server) Red Hat Enterprise Linux >>>> Load Balancer (for RHEL Server) Red Hat Enterprise Linux High >>>> Availability (for RHEL Server) Red Hat Enterprise Linux >>>> Workstation SKU: SYS0395 Contract: >>>> 3230762 Account: 1495773 Serial Number: >>>> 4011105903681551707 Active: True Quantity Used: >>>> 1 Service Level: None Service Type: None >>>> Starts: 28/09/12 Ends: >>>> 28/09/13 >>>> >>>> Subscription Name: Red Hat OpenStack Tech Preview >>>> Provides: Red Hat OpenStack Red Hat Enterprise >>>> Linux Server SKU: SER0406 Contract: >>>> 10116941 Account: 1495773 Serial Number: >>>> 3278350936972371219 Active: True Quantity Used: >>>> 1 Service Level: None Service Type: None >>>> Starts: 21/02/13 Ends: >>>> 22/05/13 >>>> >>>> But i can't access any OpenStack repositories :* * >>>> >>>> *[root at hv01 ~]# yum-config-manager | grep '^name ='* name = Red >>>> Hat CloudForms Tools for RHEL 6 (RPMs) name = Red Hat >>>> Enterprise Virtualization Agents for RHEL 6 Server (RPMs) name >>>> = Red Hat Enterprise Linux 6 Server (RPMs) name = Red Hat >>>> Enterprise Linux High Availability (for RHEL 6 Server) (RPMs) >>>> name = Red Hat Enterprise Linux Load Balancer (for RHEL 6 >>>> Server) (RPMs) name = Red Hat Enterprise Linux Resilient >>>> Storage (for RHEL 6 Server) (RPMs) >>>> >>>> What did i miss? >>>> >>>> Cheers & Tx Lutz >>> Hi Lutz, >>> >>> Have you tried running a 'yum repolist' since adding the >>> subscriptions using subscription manager? > > Did you follow the instructions here, to enable the repos? 
Note, they > are disabled by default: > > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html Why are they disabled by default? > > Specifically look for the lines: > >> yum-config-manager --enable rhel-server-ost-6-folsom-rpms >> --setopt="rhel-server-ost-6-folsom-rpms.priority=1" > > If you ran this command and still don't see the repos via yum > repolist, then can you provide a copy of your > /etc/yum.repos.d/redhat.repo file? > If it helps, you can see the repos which come from your subscriptions by doing: [root at bkearney ~]# subscription-manager repos --list +----------------------------------------------------------+ Available Repositories in /etc/yum.repos.d/redhat.repo +----------------------------------------------------------+ Repo ID: rhel-6-server-sam-source-rpms Repo Name: Red Hat Subscription Asset Manager (for RHEL 6 Server) (Source RPMs) Repo URL: https://cdn.redhat.com/content/dist/rhel/server/6/$releasever/$basearch/subscription-asset-manager/1/source/SRPMS Enabled: 0 What Perry said is accurate... the repo commands just give you tooling into that redhat.repo file. -- bk From rbryant at redhat.com Fri Feb 22 16:16:52 2013 From: rbryant at redhat.com (Russell Bryant) Date: Fri, 22 Feb 2013 11:16:52 -0500 Subject: [rhos-list] Red Hat OpenStack Guide? In-Reply-To: <51279434.1090301@redhat.com> References: <51278910.1070901@redhat.com> <51279434.1090301@redhat.com> Message-ID: <512799F4.8060501@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/22/2013 10:52 AM, Perry Myers wrote: > On 02/22/2013 10:04 AM, Lutz Lange wrote: >> Hi there, > >> i started playing with our Folsom packages and used our >> quickstart guide for that. > >> What i found strange was, that we did not separate cloud >> controller & compute nodes. Do we have a guide on how to set up >> more than one node that does everything?
> > If you use the openstack-packstack tool, it allows you to separate > the controller and compute nodes. > > Try: > > yum install openstack-packstack man packstack > > And it should provide you with some help > > Also: > > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/pt02.html Installation > using packstack is easiest even if you want to install it all on one node. The other instructions in the getting started guide are useful from an educational perspective since you see a bit more of how all of the components get hooked together by doing it manually. Beyond that, I would be using packstack. - -- Russell Bryant -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.13 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEYEARECAAYFAlEnmfQACgkQFg9ft4s9SAYO6wCghphclwkdTZMAUL1jdnHvr6r6 Dg4AnjC5Gi/bGr6xHEy9rSbaxj5SsDrE =GTCo -----END PGP SIGNATURE----- From prmarino1 at gmail.com Fri Feb 22 16:19:34 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 22 Feb 2013 11:19:34 -0500 Subject: [rhos-list] Red Hat OpenStack Guide? In-Reply-To: <51279434.1090301@redhat.com> References: <51278910.1070901@redhat.com> <51279434.1090301@redhat.com> Message-ID: You can also look at this guide: http://d.hatena.ne.jp/enakai00/20121118/1353226066 It's not perfect, but it should give you some idea of how to do it. I'm also planning to release an alpha of my scripts in a few days; I'm just doing final debugging. My scripts aren't quite as friendly as packstack, but they are geared more towards consistent scaling than ease of initial install. On Fri, Feb 22, 2013 at 10:52 AM, Perry Myers wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 02/22/2013 10:04 AM, Lutz Lange wrote: >> Hi there, >> >> i started playing with our Folsom packages and used our quickstart >> guide for that. >> >> What i found strange was, that we did not separate cloud controller >> & compute nodes.
Do we have a guide on how to setup more that one >> node that does everything? > > If you use the openstack-packstack tool, it allows you to separate the > controller and compute nodes. > > Try: > > yum install openstack-packstack > man packstack > > And it should provide you with some help > > Also: > > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/pt02.html > > Perry > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.13 (GNU/Linux) > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ > > iEYEARECAAYFAlEnlDQACgkQxdKLkeZeTz0tgQCfWiZv/l8F7ZC0OAqnk2Mi/FIr > j44Anip4dZIgyMem3CnOCRHel4gY3/lR > =/qzb > -----END PGP SIGNATURE----- > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From pmyers at redhat.com Fri Feb 22 16:27:17 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 22 Feb 2013 11:27:17 -0500 Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? In-Reply-To: <51279504.3030705@redhat.com> References: <1801628107.6854511.1361536269000.JavaMail.root@redhat.com> <51276856.7010000@redhat.com> <512793E6.8000509@redhat.com> <51279504.3030705@redhat.com> Message-ID: <51279C65.4040107@redhat.com> On 02/22/2013 10:55 AM, Bryan Kearney wrote: > On 02/22/2013 10:51 AM, Perry Myers wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> On 02/22/2013 07:45 AM, Lutz Lange wrote: >>> Hi Steve, >>> >>> i did a # yum clean all and # yum repolist >>> >>> both with no help, the folsom repo is not available ... >>> >>> Cheers Lutz >>> >>> On 22/02/13 13:31, Steve Gordon wrote: >>>> ----- Original Message ----- >>>>> From: "Lutz Lange" To: >>>>> rhos-list at redhat.com Sent: Friday, February 22, 2013 6:18:01 >>>>> AM Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 >>>>> with Subscription Manager - no Repositories? 
>>>>> >>>>> Hi list, >>>>> >>>>> i'm trying to get OpenStack installed on RHEL 6.4. >>>>> >>>>> I did register my systems with system-manager and have the >>>>> entitlements : >>>>> >>>>> *root at hv01 ~]# subscription-manager list --consumed* >>>>> +-------------------------------------------+ Consumed >>>>> Subscriptions +-------------------------------------------+ >>>>> Subscription Name: Red Hat Employee Subscription Provides: >>>>> Red Hat Enterprise Linux Server Red Hat Enterprise Linux >>>>> Resilient Storage (for RHEL Server) Red Hat Enterprise Linux >>>>> Load Balancer (for RHEL Server) Red Hat Enterprise Linux High >>>>> Availability (for RHEL Server) Red Hat Enterprise Linux >>>>> Workstation SKU: SYS0395 Contract: >>>>> 3230762 Account: 1495773 Serial Number: >>>>> 4011105903681551707 Active: True Quantity Used: >>>>> 1 Service Level: None Service Type: None >>>>> Starts: 28/09/12 Ends: >>>>> 28/09/13 >>>>> >>>>> Subscription Name: Red Hat OpenStack Tech Preview >>>>> Provides: Red Hat OpenStack Red Hat Enterprise >>>>> Linux Server SKU: SER0406 Contract: >>>>> 10116941 Account: 1495773 Serial Number: >>>>> 3278350936972371219 Active: True Quantity Used: >>>>> 1 Service Level: None Service Type: None >>>>> Starts: 21/02/13 Ends: >>>>> 22/05/13 >>>>> >>>>> But i can't access any OpenStack repositories :* * >>>>> >>>>> *[root at hv01 ~]# yum-config-manager | grep '^name ='* name = Red >>>>> Hat CloudForms Tools for RHEL 6 (RPMs) name = Red Hat >>>>> Enterprise Virtualization Agents for RHEL 6 Server (RPMs) name >>>>> = Red Hat Enterprise Linux 6 Server (RPMs) name = Red Hat >>>>> Enterprise Linux High Availability (for RHEL 6 Server) (RPMs) >>>>> name = Red Hat Enterprise Linux Load Balancer (for RHEL 6 >>>>> Server) (RPMs) name = Red Hat Enterprise Linux Resilient >>>>> Storage (for RHEL 6 Server) (RPMs) >>>>> >>>>> What did i miss? 
>>>>> >>>>> Cheers & Tx Lutz >>>> Hi Lutz, >>>> >>>> Have you tried running a 'yum repolist' since adding the >>>> subscriptions using subscription manager? >> >> Did you follow the instructions here, to enable the repos? Note, they >> are disabled by default: >> >> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html >> > > Why are they disabled by default? Because the Essex and Folsom repos can't be enabled at the same time (they'd be conflicting), and because a single subscription (the Preview) provides access to both repo channels. So it was either disable them both to start with, or create two separate Preview SKUs, and the latter was not an option with the product folks. From gfidente at fedoraproject.org Fri Feb 22 16:27:38 2013 From: gfidente at fedoraproject.org (Giulio Fidente) Date: Fri, 22 Feb 2013 17:27:38 +0100 Subject: [rhos-list] Red Hat OpenStack Guide? In-Reply-To: <51278910.1070901@redhat.com> References: <51278910.1070901@redhat.com> Message-ID: <51279C7A.4060200@fedoraproject.org> On 02/22/2013 04:04 PM, Lutz Lange wrote: > Hi there, > > i started playing with our Folsom packages and used our quickstart guide > for that. > > What i found strange was, that we did not separate cloud controller & > compute nodes. Do we have a guide on how to set up more than one node > that does everything? I've done this before on Fedora, installing the components each on a separate host, hacking around some upstream guide: http://docs.openstack.org/folsom/openstack-compute/install/yum/content/ You can also share a single qpid and mysql instance across the different services. As far as the nova-network service is concerned, you only need to install it on a single node and configure the network_host on the other compute instances. If you're going to use quantum instead, there is a clear distinction between the steps needed to install the quantum service and the agents.
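The layout Giulio describes (shared qpid and mysql, with nova-network on a single host and network_host pointed at it) boils down to a few nova.conf settings on each additional compute node. A minimal sketch, with the caveat that the host name "controller" and the mysql credentials are placeholders, not values from this thread:

```shell
# Sketch only: the nova.conf fragment a second compute node could use to
# point at shared services. "controller" and the credentials are placeholders.
cat > compute-nova.conf <<'EOF'
[DEFAULT]
# shared infrastructure lives on the controller
sql_connection = mysql://nova:nova@controller/nova
rpc_backend = nova.rpc.impl_qpid
qpid_hostname = controller
glance_api_servers = controller:9292
# nova-network runs on one node only; point the other nodes at it
network_host = controller
EOF

# sanity check: every non-comment, non-section line is a "key = value" pair
grep -vE '^(\[|#|$)' compute-nova.conf \
  | awk -F' = ' 'NF != 2 { bad = 1 } END { exit bad }' \
  && echo "fragment looks sane"
```

Whether these exact option names apply depends on the release; the Folsom-era nova.conf posted later in this thread uses several of the same names for the shared services.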
-- Giulio Fidente IRC: giulivo From bkearney at redhat.com Fri Feb 22 16:28:20 2013 From: bkearney at redhat.com (Bryan Kearney) Date: Fri, 22 Feb 2013 11:28:20 -0500 Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 with Subscription Manager - no Repositories? In-Reply-To: <51279C65.4040107@redhat.com> References: <1801628107.6854511.1361536269000.JavaMail.root@redhat.com> <51276856.7010000@redhat.com> <512793E6.8000509@redhat.com> <51279504.3030705@redhat.com> <51279C65.4040107@redhat.com> Message-ID: <51279CA4.4060300@redhat.com> On 02/22/2013 11:27 AM, Perry Myers wrote: > On 02/22/2013 10:55 AM, Bryan Kearney wrote: >> On 02/22/2013 10:51 AM, Perry Myers wrote: >>> -----BEGIN PGP SIGNED MESSAGE----- >>> Hash: SHA1 >>> >>> On 02/22/2013 07:45 AM, Lutz Lange wrote: >>>> Hi Steve, >>>> >>>> i did a # yum clean all and # yum repolist >>>> >>>> both with no help, the folsom repo is not available ... >>>> >>>> Cheers Lutz >>>> >>>> On 22/02/13 13:31, Steve Gordon wrote: >>>>> ----- Original Message ----- >>>>>> From: "Lutz Lange" To: >>>>>> rhos-list at redhat.com Sent: Friday, February 22, 2013 6:18:01 >>>>>> AM Subject: [rhos-list] Red Hat OpenStack Folsom on RHEV 6.4 >>>>>> with Subscription Manager - no Repositories? >>>>>> >>>>>> Hi list, >>>>>> >>>>>> i'm trying to get OpenStack installed on RHEL 6.4. 
>>>>>> >>>>>> I did register my systems with system-manager and have the >>>>>> entitlements : >>>>>> >>>>>> *root at hv01 ~]# subscription-manager list --consumed* >>>>>> +-------------------------------------------+ Consumed >>>>>> Subscriptions +-------------------------------------------+ >>>>>> Subscription Name: Red Hat Employee Subscription Provides: >>>>>> Red Hat Enterprise Linux Server Red Hat Enterprise Linux >>>>>> Resilient Storage (for RHEL Server) Red Hat Enterprise Linux >>>>>> Load Balancer (for RHEL Server) Red Hat Enterprise Linux High >>>>>> Availability (for RHEL Server) Red Hat Enterprise Linux >>>>>> Workstation SKU: SYS0395 Contract: >>>>>> 3230762 Account: 1495773 Serial Number: >>>>>> 4011105903681551707 Active: True Quantity Used: >>>>>> 1 Service Level: None Service Type: None >>>>>> Starts: 28/09/12 Ends: >>>>>> 28/09/13 >>>>>> >>>>>> Subscription Name: Red Hat OpenStack Tech Preview >>>>>> Provides: Red Hat OpenStack Red Hat Enterprise >>>>>> Linux Server SKU: SER0406 Contract: >>>>>> 10116941 Account: 1495773 Serial Number: >>>>>> 3278350936972371219 Active: True Quantity Used: >>>>>> 1 Service Level: None Service Type: None >>>>>> Starts: 21/02/13 Ends: >>>>>> 22/05/13 >>>>>> >>>>>> But i can't access any OpenStack repositories :* * >>>>>> >>>>>> *[root at hv01 ~]# yum-config-manager | grep '^name ='* name = Red >>>>>> Hat CloudForms Tools for RHEL 6 (RPMs) name = Red Hat >>>>>> Enterprise Virtualization Agents for RHEL 6 Server (RPMs) name >>>>>> = Red Hat Enterprise Linux 6 Server (RPMs) name = Red Hat >>>>>> Enterprise Linux High Availability (for RHEL 6 Server) (RPMs) >>>>>> name = Red Hat Enterprise Linux Load Balancer (for RHEL 6 >>>>>> Server) (RPMs) name = Red Hat Enterprise Linux Resilient >>>>>> Storage (for RHEL 6 Server) (RPMs) >>>>>> >>>>>> What did i miss? >>>>>> >>>>>> Cheers & Tx Lutz >>>>> Hi Lutz, >>>>> >>>>> Have you tried running a 'yum repolist' since adding the >>>>> subscriptions using subscription manager? 
>>> >>> Did you follow the instructions here, to enable the repos? Note, they >>> are disabled by default: >>> >>> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html/Getting_Started_Guide/ch02.html >>> >> >> Why are they disabled by default? > > Because the Essex and Folsom repos can't be enabled at the same time > (they'd be conflicting) and because a single subscription (The Preview) > provides access to both repo channels > > So it was either disable them both to start with, or have to create two > separate Preview SKUs, and the latter was not an option with the product > folks > thanks! -- bk From prmarino1 at gmail.com Fri Feb 22 17:10:34 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 22 Feb 2013 12:10:34 -0500 Subject: [rhos-list] Red Hat OpenStack Guide? In-Reply-To: <51279C7A.4060200@fedoraproject.org> References: <51278910.1070901@redhat.com> <51279C7A.4060200@fedoraproject.org> Message-ID: I've been working on that with my scripts. Essentially, just about everything in quantum can be set up in a live-live (active-active) mode except the L3 agent (the one that manages the gateways for the VLANs); see here: http://docs.openstack.org/trunk/openstack-network/admin/content/ch_high_avail.html The openvswitch or Linux bridge agent should be running on each compute node, and I also run the dhcp agent on each compute node as well. For the L3 agent, since you can only have one running at a time, you need an HA management service to handle the failover. The endpoint can be a standard http(s) load balancer with two or more instances of the API server running on your controller nodes. I haven't gotten into ryu yet, so I have no information for you on that. On Fri, Feb 22, 2013 at 11:27 AM, Giulio Fidente wrote: > On 02/22/2013 04:04 PM, Lutz Lange wrote: >> Hi there, >> >> i started playing with our Folsom packages and used our quickstart guide >> for that.
>> >> What i found strange was, that we did not separate cloud controller & >> compute nodes. Do we have a guide on how to set up more than one node >> that does everything? > > I've done this before on Fedora, installing the components each on a > separate host, hacking around some upstream guide: > > http://docs.openstack.org/folsom/openstack-compute/install/yum/content/ > > You can also share a single qpid and mysql instance across the different > services. > > As far as the nova-network service is concerned, you only need to install it > on a single node and configure the network_host on the other compute > instances. > > If you're going to use quantum instead, there is a clear distinction > between the steps needed to install the quantum service and the agents. > -- > Giulio Fidente > IRC: giulivo > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From pbrady at redhat.com Tue Feb 26 01:43:45 2013 From: pbrady at redhat.com (Pádraig Brady) Date: Tue, 26 Feb 2013 01:43:45 +0000 Subject: [rhos-list] openstack-db --init fails In-Reply-To: <5113F905.5010803@redhat.com> References: <6190AA83EB69374DABAE074D7E900F750F6AB652@xmb-aln-x13.cisco.com> <5113F905.5010803@redhat.com> Message-ID: <512C1351.6010303@redhat.com> On 02/07/2013 06:57 PM, Pádraig Brady wrote: > On 02/07/2013 06:41 PM, Shixiong Shang (shshang) wrote: >> Hi, experts: >> >> I am trying to issue the following command on nova node to sync with db, but it returns with error: >> >> 2013-02-07 13:36:25 19362 TRACE nova ImportError: No module named kombu Some extra info about this issue, which is also included in the release notes. Note that if you're using packstack to install, none of these issues will occur. While this is not the particular reason for the error above, errors like it can happen due to the rearrangement of config variables in recent package updates.
These changes may also induce an error when trying to use `openstack-db --init` to initialize the database. To avoid this, please ensure that, for the current and about-to-be-released nova, cinder and glance packages, the 'rpc_backend' and 'sql_connection' variables are uncommented in /etc/nova/nova.conf, /etc/cinder/cinder.conf and /etc/glance/glance-registry.conf. thanks, Pádraig. From aydinp at destek.as Wed Feb 27 16:32:32 2013 From: aydinp at destek.as (Aydin PAYKOC) Date: Wed, 27 Feb 2013 16:32:32 +0000 Subject: [rhos-list] two node configuration with packstack Message-ID: <8B91F67B8509F34A8588E7EFD4005BECEDC20B@ANKPHEXC.destekas.local> Hi, I am trying to set up a 2 node system with OpenStack. The 1st server will be the cloud control node, running all the services except nova-compute. The 2nd server will run only nova-compute. I am following the "getting started guide" and everything went smoothly until now. I am about to start the packstack interactive installation (chapter 6). I have installed RHEL 6.3 on the 2nd server but have not installed OpenStack. That system only has standard RHEL. Will it be enough, or do I have to install everything on the 2nd server as well? Regards, Aydin ________________________________ This E-mail is confidential. It may also be legally privileged. If you are not the addressee you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return E-mail. Internet communications cannot be guaranteed to be timely, secure, error or virus-free. The sender does not accept liability for any errors or omissions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Wed Feb 27 16:53:27 2013 From: pmyers at redhat.com (Perry Myers) Date: Wed, 27 Feb 2013 11:53:27 -0500 Subject: [rhos-list] two node configuration with packstack In-Reply-To: <8B91F67B8509F34A8588E7EFD4005BECEDC20B@ANKPHEXC.destekas.local> References: <8B91F67B8509F34A8588E7EFD4005BECEDC20B@ANKPHEXC.destekas.local> Message-ID: <512E3A07.4080104@redhat.com> On 02/27/2013 11:32 AM, Aydin PAYKOC wrote: > Hi, > > I am trying to set up a 2 node system with OpenStack. > > The 1st server will be the cloud control node, running all the services > except nova-compute. The 2nd server will run only nova-compute. > > I am following the "getting started guide" and everything > went smoothly until now. I am about to start the packstack > interactive installation (chapter 6). > > I have installed RHEL 6.3 on the 2nd server but have not > installed OpenStack. That system only has standard RHEL. > > Will it be enough, or do I have to install everything on the 2nd > server as well? To run packstack, you just need a base RHEL install on each machine. No additional OpenStack packages are required. Then on only one of those machines, you just need to install the openstack-packstack RPM, create the config file, run packstack and it takes care of the rest. One note... RHOS Folsom Preview and Packstack will only work on RHEL 6.4. There are dependencies that cannot be met by RHEL 6.3,
so we had to go with 6.4. Perry From brian.saunders1 at navy.mil Wed Feb 27 17:42:42 2013 From: brian.saunders1 at navy.mil (Saunders, Brian CIV NAWCWD, 452000D) Date: Wed, 27 Feb 2013 09:42:42 -0800 Subject: [rhos-list] Aqueduct and RHEL 6 etc In-Reply-To: References: Message-ID: <1E00D8EA4175074B878D912AC2D10E684FC3B8@nawechlkez05v.nadsuswe.nads.navy.mil> I have started using Puppet in my labs. I would love to have the opportunity to contribute to updating the RHEL5 modules for RHEL6. Brian Saunders Systems Administrator 452000D Desk: 760.939.1732 Mobile: 760.264.5750 -----Original Message----- From: aqueduct-bounces at lists.fedorahosted.org [mailto:aqueduct-bounces at lists.fedorahosted.org] On Behalf Of Vincent Passaro Sent: Friday, February 01, 2013 8:31 To: aqueduct at lists.fedorahosted.org Subject: Re: Aqueduct and RHEL 6 etc No worries :) There is a lot of data on that page! The RHEL 6 STIG is coming, the best part being that it's being written in the Open Source Community rather than coming out of the bunker of DISA! R/ Vince From: , Christopher M Reply-To: "aqueduct at lists.fedorahosted.org" Date: Wednesday, January 30, 2013 4:07 PM To: "aqueduct at lists.fedorahosted.org" Subject: RE: Aqueduct and RHEL 6 etc Hey thanks for the quick answers! I don't know how I missed that RHEL5 wiki and I guess the RHEL6 was wishful thinking :) From: aqueduct-bounces at lists.fedorahosted.org [mailto:aqueduct-bounces at lists.fedorahosted.org] On Behalf Of Vincent Passaro Sent: Wednesday, January 30, 2013 4:03 PM To: aqueduct at lists.fedorahosted.org Subject: Re: Aqueduct and RHEL 6 etc Chris, Welcome to the list. I have embedded my comments below: From: , Christopher M Reply-To: "aqueduct at lists.fedorahosted.org" Date: Wednesday, January 30, 2013 3:52 PM To: "aqueduct at lists.fedorahosted.org" Subject: Aqueduct and RHEL 6 etc Hi, I've been checking out the project.
I think it's great since I see a lot of people around the community writing their own scripts to do the same exact thing over and over again (even within my own company). We are putting together a project hosted on RHEL 6.x. These days the vast majority of our project costs are related to accreditors scrutinizing scan results and managing paperwork to document deviations, so my group is trying to get smarter at scripting security hardening, and we were thinking that using aqueduct might be a good approach. I started playing around with aqueduct the other day and I have a couple of questions. The short version is I'm trying to determine if aqueduct can be applied to RHEL6. I know just enough Linux to be dangerous so bear with me: 1) Is the RHEL5 STIG process under the "road map" that uses puppet still accurate? It worked fine for me in general, but the reason I ask is I noticed more recent development on the trunk while the procedure says to grab puppet scripts from an "archive" path. Puppet is not on the roadmap for either the RHEL 5 or RHEL 6 STIG. If you are interested in contributing code for puppet, we can work that out! 2) Is there a documented way to run through the bash scripts? I saw the aqueduct script at the root of the trunk. Out of curiosity, I tried pulling down a copy of the entire trunk onto a CentOS load I had on a vm (I didn't have RHEL handy at home). By messing around with the paths in the aqueduct.conf file and copying some files to /etc I actually got it to run through the STIG bash scripts by invoking the high-level "aqueduct" script. I think I had to hack around the fact that the only things currently under the /profiles/DISA folder were "firefox" and "rhel-5-beta", while I was actually trying to run the "rhel-5" bash scripts at the time. I didn't see a walkthrough on the site so I don't know if I'm on the right path here. Check the Wiki, we have how-to guides.
https://fedorahosted.org/aqueduct/wiki/Rhel5DraftStigGettingStarted 3) Is there a preferred language for running the lockdown (e.g. puppet over bash)? Just wondering if one is considered more mature or less buggy since I see both being worked on. Right now it's Bash. If the community starts really adopting Puppet and wants to contribute code, we can look at shifting from Bash to Puppet (or having both). 4) What's the state of the RHEL 6 STIG support? I'm guessing a lot of the RHEL 5 work will port directly over to RHEL 6 when the STIG gets officially published, but have you guys tried this yet? Or is there already an established way to apply aqueduct to RHEL 6 that I missed? As we know, the RHEL 6 STIG isn't final yet. You can check out our sister project SCAP Security Guide https://fedorahosted.org/scap-security-guide/ We have some content right now for the impending RHEL 6 STIG, but no direct mapping has occurred. Once DISA accepts the Draft version and releases it, development will begin. I know the STIG still hasn't been officially released, but my group had already committed to using RHEL6 over a year ago thinking it would be released by now (we are working with a pre-release draft copy). But I was browsing around the source tree and I found a few RHEL 6 bash scripts in the works. Again out of curiosity, I tried running the RHEL 5 STIG scripts against a CentOS 6 load just to see what would happen. I also loaded the latest version of EPEL I could find (6.8 I think). I noticed some of the scripts reference the RHEL version file (/etc/redhat-release) so I faked one to indicate RHEL6 since I was using CentOS 6. I called the aqueduct script to invoke the RHEL 5 STIG bash scripts, then I manually invoked all the RHEL 6 STIG bash scripts. I spot-checked a few STIG-related settings that I happened to remember on my CentOS 6.3 vm and I was pleased to discover they had all been implemented the way I expected despite the OS mismatch (5 vs 6). So that was kind of promising.
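Chris's trick of faking /etc/redhat-release works because hardening scripts of this kind typically gate on a pattern match against that file. A toy sketch of such a gate (the path is real, but the grep pattern is an assumption about how a given script checks, and a local copy is used instead of the real /etc file):

```shell
# Toy version-gate of the kind described above. Writes a local copy
# rather than touching the real /etc/redhat-release.
cat > redhat-release <<'EOF'
CentOS release 6.3 (Final)
EOF

# a RHEL-only script might refuse to run against this file...
grep -q 'Red Hat Enterprise Linux.*release 6' redhat-release \
  || echo "not RHEL 6, refusing"

# ...until the file is faked to claim RHEL 6
echo 'Red Hat Enterprise Linux Server release 6.3 (Santiago)' > redhat-release
grep -q 'Red Hat Enterprise Linux.*release 6' redhat-release \
  && echo "looks like RHEL 6, proceeding"
```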
Outside of the OS differences the RHEL 5 STIG and the RHEL 6 STIG are going to be pretty different, so trying to jackhammer RHEL 5 scripts onto a RHEL 6 box isn't going to go very well. Anyways, I'm looking forward to integrating aqueduct with our work in the future. Thanks, Chris -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 5621 bytes Desc: not available URL: From brian.saunders1 at navy.mil Wed Feb 27 17:43:54 2013 From: brian.saunders1 at navy.mil (Saunders, Brian CIV NAWCWD, 452000D) Date: Wed, 27 Feb 2013 09:43:54 -0800 Subject: [rhos-list] Recall: Aqueduct and RHEL 6 etc Message-ID: <1E00D8EA4175074B878D912AC2D10E684FC3B9@nawechlkez05v.nadsuswe.nads.navy.mil> Saunders, Brian CIV NAWCWD, 452000D would like to recall the message, "Aqueduct and RHEL 6 etc". From rich.minton at lmco.com Wed Feb 27 18:55:14 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Wed, 27 Feb 2013 18:55:14 +0000 Subject: [rhos-list] Setting up Compute Nodes. Message-ID: Hi all, Does anyone know of a check-list type document that can be used to setup a compute node that includes nova-network and cinder? I have a two node openstack deployment-created basically from scratch without the use of packstack. The primary controller node, which also contains nova-compute, works fine. I can launch instances all day, connect to the network, I can do everything I need to do. My second node is compute, network and cinder-volume. I can see it when I run "nova-manage service list" and all is happy. When I try to force an instance to start up on compute1, it starts up fine but it attaches to the controller node. I have included my nova.conf file just in case anyone sees something I've missed. Thanks in advance for any help! 
# nova-manage service list
Binary           Host         Zone   Status    State   Updated_At
nova-cert        controller   nova   enabled   :-)     2013-02-27 18:45:38
nova-scheduler   controller   nova   enabled   :-)     2013-02-27 18:45:41
nova-compute     controller   nova   enabled   :-)     2013-02-27 18:45:41
nova-console     controller   nova   enabled   :-)     2013-02-27 18:45:38
nova-network     controller   nova   enabled   :-)     2013-02-27 18:45:41
nova-consoleauth controller   nova   enabled   :-)     2013-02-27 18:45:38
nova-compute     compute1     nova   enabled   :-)     2013-02-27 18:45:37
nova-network     compute1     nova   enabled   :-)     2013-02-27 18:45:43

nova.conf file:

[DEFAULT]
logdir = /var/log/nova
debug=true
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
volumes_dir = /etc/nova/volumes
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /etc/nova/nova.conf
dhcp_lease_time = 36000
force_dhcp_release = True
injected_network_template = /usr/share/nova/interfaces.template
glance_api_servers = controller:9292
keystone_ec2_url=http://controller:5000/v2.0/ec2tokens

## KVM
libvirt_nonblocking = True
libvirt_inject_partition = -1
libvirt_inject_key = True
compute_driver = nova.virt.libvirt.LibvirtDriver
libvirt_type = kvm

## Network
my_ip = compute1
network_manager = nova.network.manager.VlanManager
iscsi_helper = tgtadm
sql_connection = mysql://nova:nova at controller/nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
rootwrap_config = /etc/nova/rootwrap.conf
auth_strategy = keystone
fixed_range = 10.10.16.0/24
public_interface = br972
use_single_default_gateway = True
multi_host = True
volume_api_class = nova.volume.cinder.API
enabled_apis = ec2,osapi_compute

## QPID
rpc_backend = nova.rpc.impl_qpid
qpid_hostname = controller

## Metadata Service
metadata_host = controller
metadata_port = 8775

## VNC Proxy
novncproxy_base_url = http://controller:6080/vnc_auto.html
xvpvncproxy_base_url = http://controller:6081/console
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = compute1

[keystone_authtoken]
admin_tenant_name = service
admin_user = nova
admin_password = nova
auth_host = controller
auth_port = 35357
auth_protocol = http
signing_dirname = /tmp/keystone-signing-nova

Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbryant at redhat.com Wed Feb 27 19:52:41 2013 From: rbryant at redhat.com (Russell Bryant) Date: Wed, 27 Feb 2013 14:52:41 -0500 Subject: [rhos-list] Setting up Compute Nodes. In-Reply-To: References: Message-ID: <512E6409.4010000@redhat.com> On 02/27/2013 01:55 PM, Minton, Rich wrote: > Hi all, > > > > Does anyone know of a check-list type document that can be used to setup > a compute node that includes nova-network and cinder? I have a two node > openstack deployment-created basically from scratch without the use of > packstack. The primary controller node, which also contains > nova-compute, works fine. I can launch instances all day, connect to the > network, I can do everything I need to do. My second node is compute, > network and cinder-volume. I can see it when I run "nova-manage service > list" and all is happy. When I try to force an instance to start up on > compute1, it starts up fine but it attaches to the controller node. I > have included my nova.conf file just in case anyone sees something I've > missed. When you say "but it attaches to the controller node", do you mean the instance starts on the controller? How are you trying to force the instance to start on compute1? -- Russell Bryant From rich.minton at lmco.com Wed Feb 27 21:20:24 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Wed, 27 Feb 2013 21:20:24 +0000 Subject: [rhos-list] EXTERNAL: Re: Setting up Compute Nodes. In-Reply-To: <512E6409.4010000@redhat.com> References: <512E6409.4010000@redhat.com> Message-ID: Yes, I try to force it to run on compute1 but it runs on controller. From the command line I run...
nova boot --image --flavor 3 --key_name test --availability-zone=nova:compute1 server-name -----Original Message----- From: Russell Bryant [mailto:rbryant at redhat.com] Sent: Wednesday, February 27, 2013 2:53 PM To: Minton, Rich Cc: rhos-list at redhat.com; Andrus, Gregory Subject: EXTERNAL: Re: [rhos-list] Setting up Compute Nodes. On 02/27/2013 01:55 PM, Minton, Rich wrote: > Hi all, > > > > Does anyone know of a check-list type document that can be used to > setup a compute node that includes nova-network and cinder? I have a > two node openstack deployment-created basically from scratch without > the use of packstack. The primary controller node, which also contains > nova-compute, works fine. I can launch instances all day, connect to > the network, I can do everything I need to do. My second node is > compute, network and cinder-volume. I can see it when I run > "nova-manage service list" and all is happy. When I try to force an > instance to start up on compute1, it starts up fine but it attaches to > the controller node. I have included my nova.conf file just in case > anyone sees something I've missed. When you say "but it attaches to the controller node", do you mean the instance starts on the controller? How are you trying to force the instance to start on compute1? -- Russell Bryant From rbryant at redhat.com Wed Feb 27 21:32:43 2013 From: rbryant at redhat.com (Russell Bryant) Date: Wed, 27 Feb 2013 16:32:43 -0500 Subject: [rhos-list] EXTERNAL: Re: Setting up Compute Nodes. In-Reply-To: References: <512E6409.4010000@redhat.com> Message-ID: <512E7B7B.9020205@redhat.com> On 02/27/2013 04:20 PM, Minton, Rich wrote: > Yes, I try to force it to run on compute1 but it runs on controller. > > From the command line I run... > > nova boot --image --flavor 3 --key_name test --availability-zone=nova:compute1 server-name Thanks for the command. This method of forcing a host is only supported if you're an admin. It would be ignored for a regular user. 
Since the instance did actually start, I assume it was a regular user. In that case, the 'compute1' part would be ignored and just the availability zone of 'nova' would be honored. Since both nodes are in the 'nova' zone, it happened to pick your controller. There aren't any ways to force an instance to a specific host that don't require some level of admin interaction that I know of. This really comes down to a core philosophy of this type of system, which is that the user of the cloud shouldn't have to know or care which specific node an instance runs on. This is not to be confused with nodes having capabilities and requesting instances that have requirements for certain capabilities. We have various features in that area. -- Russell Bryant From rich.minton at lmco.com Wed Feb 27 21:41:07 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Wed, 27 Feb 2013 21:41:07 +0000 Subject: [rhos-list] EXTERNAL: Re: Setting up Compute Nodes. In-Reply-To: <512E7B7B.9020205@redhat.com> References: <512E6409.4010000@redhat.com> <512E7B7B.9020205@redhat.com> Message-ID: Thank you very much for the feedback. I didn't know of another way to test the multiple compute node function except to create a bunch of instances and watch the activity on the other compute nodes. -----Original Message----- From: Russell Bryant [mailto:rbryant at redhat.com] Sent: Wednesday, February 27, 2013 4:33 PM To: Minton, Rich Cc: rhos-list at redhat.com; Andrus, Gregory Subject: Re: EXTERNAL: Re: [rhos-list] Setting up Compute Nodes. On 02/27/2013 04:20 PM, Minton, Rich wrote: > Yes, I try to force it to run on compute1 but it runs on controller. > > From the command line I run... > > nova boot --image --flavor 3 --key_name test > --availability-zone=nova:compute1 server-name Thanks for the command. This method of forcing a host is only supported if you're an admin. It would be ignored for a regular user. Since the instance did actually start, I assume it was a regular user. 
In that case, the 'compute1' part would be ignored and just the availability zone of 'nova' would be honored. Since both nodes are in the 'nova' zone, it happened to pick your controller. There aren't any ways to force an instance to a specific host that don't require some level of admin interaction that I know of. This really comes down to a core philosophy of this type of system, which is that the user of the cloud shouldn't have to know or care which specific node an instance runs on. This is not to be confused with nodes having capabilities and requesting instances that have requirements for certain capabilities. We have various features in that area. -- Russell Bryant From rbryant at redhat.com Wed Feb 27 21:43:55 2013 From: rbryant at redhat.com (Russell Bryant) Date: Wed, 27 Feb 2013 16:43:55 -0500 Subject: [rhos-list] EXTERNAL: Re: Setting up Compute Nodes. In-Reply-To: References: <512E6409.4010000@redhat.com> <512E7B7B.9020205@redhat.com> Message-ID: <512E7E1B.2040400@redhat.com> On 02/27/2013 04:41 PM, Minton, Rich wrote: > Thank you very much for the feedback. I didn't know of another way to test the multiple compute node function except to create a bunch of instances and watch the activity on the other compute nodes. You're welcome! Yes, creating a bunch of instances should certainly do the trick. :-) The command you showed should work if you try it as an admin, though. If it doesn't, please file a bug. -- Russell Bryant From jthomas at redhat.com Thu Feb 28 14:06:50 2013 From: jthomas at redhat.com (Jon Thomas) Date: Thu, 28 Feb 2013 09:06:50 -0500 Subject: [rhos-list] EXTERNAL: Re: Setting up Compute Nodes. In-Reply-To: References: <512E6409.4010000@redhat.com> <512E7B7B.9020205@redhat.com> Message-ID: <1362060410.2904.9.camel@basin.redhat.com> On Wed, 2013-02-27 at 21:41 +0000, Minton, Rich wrote: > Thank you very much for the feedback. 
I didn't know of another way to test the multiple compute node function except to create a bunch of instances and watch the activity on the other compute nodes. > If it's just a two node setup and you want to test the compute node, you can stop the openstack-compute service on the controller node. That will force the instance onto the machine running an openstack-compute. From prmarino1 at gmail.com Thu Feb 28 15:26:55 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 28 Feb 2013 10:26:55 -0500 Subject: [rhos-list] EXTERNAL: Re: Setting up Compute Nodes. In-Reply-To: <1362060410.2904.9.camel@basin.redhat.com> References: <512E6409.4010000@redhat.com> <512E7B7B.9020205@redhat.com> <1362060410.2904.9.camel@basin.redhat.com> Message-ID: you can always spin up an instance and migrate it to the node you want to test from the command line. On Thu, Feb 28, 2013 at 9:06 AM, Jon Thomas wrote: > On Wed, 2013-02-27 at 21:41 +0000, Minton, Rich wrote: >> Thank you very much for the feedback. I didn't know of another way to test the multiple compute node function except to create a bunch of instances and watch the activity on the other compute nodes. >> > > If it's just a two node setup and you want to test the compute node, you > can stop the openstack-compute service on the controller node. That will > force the instance onto the machine running an openstack-compute. > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From rich.minton at lmco.com Thu Feb 28 18:01:20 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 28 Feb 2013 18:01:20 +0000 Subject: [rhos-list] EXTERNAL: Re: Setting up Compute Nodes. In-Reply-To: <512E7B7B.9020205@redhat.com> References: <512E6409.4010000@redhat.com> <512E7B7B.9020205@redhat.com> Message-ID: I found a strange occurrence when trying to bring up an instance on the second compute node.
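[Editorial sketch of the two placement-testing approaches discussed in the thread above: Russell's admin-only availability-zone syntax, and Jon's suggestion of stopping the compute service on the controller so the scheduler has only one host left. The `<image>` placeholder and the `openstack-nova-compute` service name are assumptions to adjust for your deployment.]

```shell
# Option 1 (works only with admin credentials): name the target host
# directly using the zone:host form of --availability-zone.
nova boot --image <image> --flavor 3 --key_name test \
    --availability-zone nova:compute1 server-name

# Option 2: on the controller, stop its own compute service so the
# scheduler can only place the new instance on compute1.
service openstack-nova-compute stop
nova boot --image <image> --flavor 3 --key_name test server-name
service openstack-nova-compute start   # restore once the test is done
```

Either way, an admin should be able to confirm placement afterwards with `nova show server-name` (the `OS-EXT-SRV-ATTR:host` field).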
While watching the progress on the dashboard, I could see the instance try to come up on the second compute node then it would jump back to the first. I checked the logs and I was getting an "access denied" error when trying to access the instance directory. I did some snooping and found that the directory for my instances was not being mounted. I have my /var/lib/glance/images and /var/lib/nova/instances mounted to an NFS share on my Isilon storage cluster. The mounts on my controller node work fine and the config files for the automounter are identical on each node. I was finally able to get it to work by stopping the autofs service and running "automount" in the foreground and in debug mode (just to watch what was going on). Then all my mounts work fine and I can get an instance to come up on the second compute node. Not sure what's going on with the autofs service. I ran yum update on both nodes to make sure everything was in sync version wise. Has anyone seen this before? Thank you! Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -----Original Message----- From: Russell Bryant [mailto:rbryant at redhat.com] Sent: Wednesday, February 27, 2013 4:33 PM To: Minton, Rich Cc: rhos-list at redhat.com; Andrus, Gregory Subject: Re: EXTERNAL: Re: [rhos-list] Setting up Compute Nodes. On 02/27/2013 04:20 PM, Minton, Rich wrote: > Yes, I try to force it to run on compute1 but it runs on controller. > > From the command line I run... > > nova boot --image --flavor 3 --key_name test > --availability-zone=nova:compute1 server-name Thanks for the command. This method of forcing a host is only supported if you're an admin. It would be ignored for a regular user. Since the instance did actually start, I assume it was a regular user. In that case, the 'compute1' part would be ignored and just the availability zone of 'nova' would be honored. 
Since both nodes are in the 'nova' zone, it happened to pick your controller. There aren't any ways to force an instance to a specific host that don't require some level of admin interaction that I know of. This really comes down to a core philosophy of this type of system, which is that the user of the cloud shouldn't have to know or care which specific node an instance runs on. This is not to be confused with nodes having capabilities and requesting instances that have requirements for certain capabilities. We have various features in that area. -- Russell Bryant From rich.minton at lmco.com Thu Feb 28 19:12:38 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 28 Feb 2013 19:12:38 +0000 Subject: [rhos-list] Virbr0 Interface. Message-ID: Is the virbr0 interface required, and if not, how do I get rid of it? It seems to spin up another dnsmasq process that interferes with VlanManager networks. I have these libvirt components installed...

libvirt-java-devel-0.4.9-1.el6.noarch
libvirt-0.10.2-18.el6.x86_64
libvirt-client-0.10.2-18.el6.x86_64
virt-what-1.11-1.2.el6.x86_64
libvirt-devel-0.10.2-18.el6.x86_64
libvirt-python-0.10.2-18.el6.x86_64
libvirt-java-0.4.9-1.el6.noarch

Are they all required for kvm to work? Thank you. Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 From rich.minton at lmco.com Thu Feb 28 19:17:54 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 28 Feb 2013 19:17:54 +0000 Subject: [rhos-list] Problems with compute node after update. Message-ID: Well, I think I broke it... After running yum update to try to fix my autofs problem, my compute node can no longer talk to the MySQL server or my AMQP server on my controller node.
==> network.log <==
2013-02-28 14:10:39 15088 ERROR nova.manager [-] Error during VlanManager._disassociate_stale_fixed_ips: (OperationalError) (2003, "Can't connect to MySQL server on '10.10.12.245' (113)") None None

==> compute.log <==
2013-02-28 14:14:55 15113 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 113] EHOSTUNREACH. Sleeping 60 seconds

If I turn off iptables they begin talking again. I'm not quite sure what I need to reapply to get them talking again. Any ideas? Thanks again. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 From berrange at redhat.com Thu Feb 28 19:25:42 2013 From: berrange at redhat.com (Daniel P. Berrange) Date: Thu, 28 Feb 2013 19:25:42 +0000 Subject: [rhos-list] Virbr0 Interface. In-Reply-To: References: Message-ID: <20130228192542.GB12318@redhat.com> On Thu, Feb 28, 2013 at 07:12:38PM +0000, Minton, Rich wrote: > Is the virbr0 interface required, and if not, how do I get rid of it? It seems to spin up another dnsmasq process that interferes with VlanManager networks. Just do

virsh net-destroy default
virsh net-autostart --disable default

> I have these libvirt components installed...
> libvirt-java-devel-0.4.9-1.el6.noarch
> libvirt-0.10.2-18.el6.x86_64
> libvirt-client-0.10.2-18.el6.x86_64
> virt-what-1.11-1.2.el6.x86_64
> libvirt-devel-0.10.2-18.el6.x86_64
> libvirt-python-0.10.2-18.el6.x86_64
> libvirt-java-0.4.9-1.el6.noarch
>
> Are they all required for kvm to work?

The libvirt-java package is not required for RHOS. In future, RHEL-7's RPM split for libvirt will be more granular, allowing you to install only the minimal pieces required for KVM, skipping bits you don't want like the virbr0 config.
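[Editorial sketch: Daniel's two commands above, plus a follow-up check. `virsh net-list --all` is the standard way to confirm that the `default` network is down and will not return; run as root on the host in question.]

```shell
virsh net-destroy default               # tear down virbr0 and its dnsmasq now
virsh net-autostart --disable default   # keep it from returning at libvirtd restart/boot
virsh net-list --all                    # 'default' should now show inactive, autostart 'no'
```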
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

From pmyers at redhat.com Thu Feb 28 19:58:29 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 28 Feb 2013 14:58:29 -0500 Subject: [rhos-list] Virbr0 Interface. In-Reply-To: <20130228192542.GB12318@redhat.com> References: <20130228192542.GB12318@redhat.com> Message-ID: <512FB6E5.1010901@redhat.com> On 02/28/2013 02:25 PM, Daniel P. Berrange wrote: > On Thu, Feb 28, 2013 at 07:12:38PM +0000, Minton, Rich wrote: >> Is the virbr0 interface required, and if not, how do I get rid of it? It seems to spin up another dnsmasq process that interferes with VlanManager networks. > > Just do > > virsh net-destroy default > virsh net-autostart --disable default Derek/Martin, does it make sense to do this as part of packstack installation? If so, a bug for this would be useful in Bugzilla. Perry From jthomas at redhat.com Thu Feb 28 19:58:28 2013 From: jthomas at redhat.com (Jon Thomas) Date: Thu, 28 Feb 2013 14:58:28 -0500 Subject: [rhos-list] Problems with compute node after update. In-Reply-To: References: Message-ID: <1362081508.2904.25.camel@basin.redhat.com> Hi, On which host did you stop iptables? I'm assuming the controller. It depends on how secure you want to be, but these should work. However, they are wide open in terms of source.
iptables -I INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 3306 -j ACCEPT

So, to avoid saving the OpenStack-managed changes to iptables into /etc/sysconfig on the controller:

$ for svc in api objectstore network volume scheduler consoleauth cert novncproxy; do sudo service openstack-nova-$svc stop; done
$ service iptables restart    # ...to clean out nova entries
$ iptables -I INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
$ service iptables save
$ service iptables restart
$ iptables -L    # ...to verify the entries for 5672 and 3306 persist
$ for svc in api objectstore network volume scheduler consoleauth cert novncproxy; do sudo service openstack-nova-$svc start; done

On Thu, 2013-02-28 at 19:17 +0000, Minton, Rich wrote:
> Well, I think I broke it...
>
> After running yum update to try to fix my autofs problem, my compute
> node can no longer talk to the MySQL server or my AMQP server on my
> controller node.
>
> ==> network.log <==
> 2013-02-28 14:10:39 15088 ERROR nova.manager [-] Error during
> VlanManager._disassociate_stale_fixed_ips: (OperationalError) (2003,
> "Can't connect to MySQL server on '10.10.12.245' (113)") None None
>
> ==> compute.log <==
> 2013-02-28 14:14:55 15113 ERROR nova.openstack.common.rpc.impl_qpid
> [-] Unable to connect to AMQP server: [Errno 113] EHOSTUNREACH.
> Sleeping 60 seconds
>
> If I turn off iptables they begin talking again. I'm not quite sure
> what I need to reapply to get them talking again.
>
> Any ideas?
>
> Thanks again.
> > Rick > > > > Richard Minton > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From SAM.W.GABRIEL at saic.com Thu Feb 28 20:35:41 2013 From: SAM.W.GABRIEL at saic.com (SAM GABRIEL) Date: Thu, 28 Feb 2013 12:35:41 -0800 Subject: [rhos-list] Installation Guide Missing In-Reply-To: Message-ID: The installation guide I have been following: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html-single/Getting_Started_Guide/index.html has disappeared from the Red Hat site. Can someone point me to the new location (no luck with search on the RH site) or send me a PDF version? Sam From sgordon at redhat.com Thu Feb 28 20:47:32 2013 From: sgordon at redhat.com (Steve Gordon) Date: Thu, 28 Feb 2013 15:47:32 -0500 (EST) Subject: [rhos-list] Installation Guide Missing In-Reply-To: Message-ID: <109595051.10442750.1362084452165.JavaMail.root@redhat.com> ----- Original Message ----- > From: "SAM GABRIEL" > To: rhos-list at redhat.com > Sent: Thursday, February 28, 2013 3:35:41 PM > Subject: [rhos-list] Installation Guide Missing > > The installation guide I have been following: > > > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack_Preview/2/html-single/Getting_Started_Guide/index.html > > has disappeared from the Red Hat site. Can someone point me to the > new > location (no luck with search on the RH site) or send me a PDF > version? > > Sam Hi Sam, We've moved it slightly, the documentation is now here: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_OpenStack/ Apologies for the inconvenience. Thanks, Steve From derekh at redhat.com Thu Feb 28 20:56:04 2013 From: derekh at redhat.com (Derek Higgins) Date: Thu, 28 Feb 2013 20:56:04 +0000 Subject: [rhos-list] Virbr0 Interface.
In-Reply-To: <512FB6E5.1010901@redhat.com> References: <20130228192542.GB12318@redhat.com> <512FB6E5.1010901@redhat.com> Message-ID: <512FC464.1060107@redhat.com> On 02/28/2013 07:58 PM, Perry Myers wrote: > On 02/28/2013 02:25 PM, Daniel P. Berrange wrote: >> On Thu, Feb 28, 2013 at 07:12:38PM +0000, Minton, Rich wrote: >>> Is the virbr0 interface required, and if not, how do I get rid of it? It seems to spin up another dnsmasq process that interferes with VlanManager networks. >> >> Just do >> >> virsh net-destroy default >> virsh net-autostart --disable default > > Derek/Martin, does it make sense to do this as part of packstack > installation? If so, a bug for this would be useful in Bugzilla. It's certainly something we could add, but I haven't ever seen virbr0 cause problems with the networking that packstack sets up. Wouldn't be any harm to remove it anyway, to avoid confusion. > > Perry >