From sellis at redhat.com Tue Jul 1 04:57:02 2014 From: sellis at redhat.com (Steven Ellis) Date: Tue, 01 Jul 2014 16:57:02 +1200 Subject: [Rdo-list] Nested RDO Icehouse nova-compute KVM / QEMU issues due to -cpu host Message-ID: <53B23F9E.7020001@redhat.com> So I'm having issues nesting RDO on my T440s laptop (Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz), and I'm hoping someone on the list can help My Physical Host (L0) is Fedora 19 running 3.14.4-100.fc19.x86_64 with nesting turned on My OpenStack Host is RHEL 6.5 or RHEL 7 (L1) My Guest is Cirros (L2) I'm installing RDO Icehouse under RHEL via packstack --allinone --os-neutron-install=n I then try to startup a Cirros guest (L2) and the guest never spawns Taking a look at the qemu command line it looks as follows /usr/libexec/qemu-kvm \ -global virtio-blk-pci.scsi=off \ -nodefconfig \ -nodefaults \ -nographic \ -machine accel=kvm:tcg \ -cpu host,+kvmclock \ -m 500 \ -no-reboot \ -kernel /var/tmp/.guestfs-497/kernel.2647 \ -initrd /var/tmp/.guestfs-497/initrd.2647 \ -device virtio-scsi-pci,id=scsi \ -drive file=/var/lib/nova/instances/3ae072b4-f4bf-42cf-b3ea-27d9768bc4df/disk,cache=none,format=qcow2,id=hd0,if=none \ -device scsi-hd,drive=hd0 \ -drive file=/var/tmp/.guestfs-497/root.2647,snapshot=on,id=appliance,if=none,cache=unsafe \ -device scsi-hd,drive=appliance \ -device virtio-serial \ -serial stdio \ -device sga \ -chardev socket,path=/tmp/libguestfsKGbB3D/guestfsd.sock,id=channel0 \ -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \ -append panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=linux The issue appears to be running with "-cpu host" with this nesting combination. Now if I run the qemu command directly on RHEL7 (L1) I get this error KVM: entry failed, hardware error 0x7 Under RHEL 6.5 (L1) it is similar but not identical kvm: unhandled exit 7 In both cases on my Fedora physical host (L0) I see nested_vmx_run: VMCS MSR_{LOAD,STORE} unsupported There does appear to be a Red Hat bugzilla for RHEL7 relating to this but not for RHEL6 - https://bugzilla.redhat.com/show_bug.cgi?id=1038427 I can reproduce this issue using both RHEL 6.5 and RHEL 7 as my OpenStack Host (L1). Has anyone else hit this issue? Next I tried a work around of editing the /etc/nova/nova.conf file and forcing the CPU type for my guests under OpenStack #cpu_mode=none cpu_mode=custom # Set to a named libvirt CPU model (see names listed in # /usr/share/libvirt/cpu_map.xml). Only has effect if # cpu_mode="custom" and virt_type="kvm|qemu" (string value) # Deprecated group;name - DEFAULT;libvirt_cpu_model #cpu_model= cpu_model=Conroe Problem is qemu is still run with "-cpu host,+kvmclock" So am I hitting a secondary bug with nova-compute or is there another way to force OpenStack to select a particular CPU subset for Nova? Steve -- Steven Ellis Solution Architect - Red Hat New Zealand *E:* sellis at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From flavio at redhat.com Tue Jul 1 23:00:12 2014 From: flavio at redhat.com (Flavio Percoco) Date: Wed, 2 Jul 2014 01:00:12 +0200 Subject: [Rdo-list] plan to add a parameter to config glance backend? In-Reply-To: References: Message-ID: <20140701230012.GC11253@redhat.com> On 26/06/14 16:50 +0800, Kun Huang wrote: >Hi all > >Is there such a plan now? Actually it's okay to adjust glance.conf >only. Deploying ceph is not necessary. Hi Kun, I'm not sure I understand your question. 
What config parameter do you need? What do you think is missing in Glance? Cheers, Flavio -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From gareth at openstacker.org Wed Jul 2 01:45:10 2014 From: gareth at openstacker.org (Kun Huang) Date: Wed, 2 Jul 2014 09:45:10 +0800 Subject: [Rdo-list] plan to add a parameter to config glance backend? In-Reply-To: <20140701230012.GC11253@redhat.com> References: <20140701230012.GC11253@redhat.com> Message-ID: I need this: CONFIG_GLANCE_BACKEND=file|rbd|.... On Wed, Jul 2, 2014 at 7:00 AM, Flavio Percoco wrote: > On 26/06/14 16:50 +0800, Kun Huang wrote: >> >> Hi all >> >> Is there such a plan now? Actually it's okay to adjust glance.conf >> only. Deploying ceph is not necessary. > > > Hi Kun, > > I'm not sure I understand your question. What config parameter do you > need? What do you think is missing in Glance? > > Cheers, > Flavio > > -- > @flaper87 > Flavio Percoco From gareth at openstacker.org Wed Jul 2 01:49:13 2014 From: gareth at openstacker.org (Kun Huang) Date: Wed, 2 Jul 2014 09:49:13 +0800 Subject: [Rdo-list] how could I set netmask in iptables chain? Message-ID: In a RDO setup, I could get this in my iptables: -A INPUT -s 192.168.164.63/32 -p tcp ...... But I need /16 instead. When seeing firewall.pp, I find this is no such a parameter about netmask. So is it possible to set my own netmask? -------------- next part -------------- An HTML attachment was scrubbed... URL: From flavio at redhat.com Wed Jul 2 07:54:32 2014 From: flavio at redhat.com (Flavio Percoco) Date: Wed, 2 Jul 2014 09:54:32 +0200 Subject: [Rdo-list] plan to add a parameter to config glance backend? In-Reply-To: References: <20140701230012.GC11253@redhat.com> Message-ID: <20140702075432.GD11253@redhat.com> On 02/07/14 09:45 +0800, Kun Huang wrote: >I need this: CONFIG_GLANCE_BACKEND=file|rbd|.... I'm not sure if you're talking about the client or the server. In glance-api there are 2 config options. The first one `known_stores` is used to enable/disable stores. The second one `default_store` allows you to specify which store should be used as the default one when none is passed to the API. Is there something missing in the above-mentioned options? Cheers, Flavio > >On Wed, Jul 2, 2014 at 7:00 AM, Flavio Percoco wrote: >> On 26/06/14 16:50 +0800, Kun Huang wrote: >>> >>> Hi all >>> >>> Is there such a plan now? Actually it's okay to adjust glance.conf >>> only. Deploying ceph is not necessary. >> >> >> Hi Kun, >> >> I'm not sure I understand your question. What config parameter do you >> need? What do you think is missing in Glance? >> >> Cheers, >> Flavio >> >> -- >> @flaper87 >> Flavio Percoco -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From kchamart at redhat.com Wed Jul 2 09:08:03 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 2 Jul 2014 14:38:03 +0530 Subject: [Rdo-list] Nested RDO Icehouse nova-compute KVM / QEMU issues due to -cpu host In-Reply-To: <53B23F9E.7020001@redhat.com> References: <53B23F9E.7020001@redhat.com> Message-ID: <20140702090803.GB30994@tesla.redhat.com> On Tue, Jul 01, 2014 at 04:57:02PM +1200, Steven Ellis wrote: > So I'm having issues nesting RDO on my T440s laptop (Intel(R) Core(TM) > i7-4600U CPU @ 2.10GHz), and I'm hoping someone on the list can help > > My Physical Host (L0) is Fedora 19 running 3.14.4-100.fc19.x86_64 with > nesting turned on If you can, I'd strongly suggest using the latest F20 kernels (for L0 & L1), as nested KVM issues are frequently fixed upstream and those fixes are available in Fedora Rawhide. The thing with nested virtualization is the explosion of the test matrix (different Kernels + distributions on L0, L1, L2) :-( I'm running F20 (L0) -> F20 (L1) -> F20 (L2), with current Fedora Rawhide Kernels (and -cpu host on for L1 & L2) and I don't see this issue. > My OpenStack Host is RHEL 6.5 or RHEL 7 (L1) > My Guest is Cirros (L2) [. . .] > The issue appears to be running with "-cpu host" with this nesting > combination. > > Now if I run the qemu command directly on RHEL7 (L1) I get this error > KVM: entry failed, hardware error 0x7 > > Under RHEL 6.5 (L1) it is similar but not identical > kvm: unhandled exit 7 > > > In both cases on my Fedora physical host (L0) I see > nested_vmx_run: VMCS MSR_{LOAD,STORE} unsupported IIRC, that's because your CPU just doesn't support VMCS shadowing (unless you're using Intel Haswell or above). I think the below command returns 'N' on your CPU: $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs > There does appear to be a Red Hat bugzilla for RHEL7 relating to this > but not for RHEL6 > - https://bugzilla.redhat.com/show_bug.cgi?id=1038427 I recall that bug. Marcelo's suggestion to not use host-passthrough (-cpu host) for L2 is reasonable for now, I guess. From my testing I haven't seen any significant performance benefit from host-passthrough at both levels; I instead try to expose just the 'vmx' extension (more on it below). > I can reproduce this issue using both RHEL 6.5 and RHEL 7 as my > OpenStack Host (L1). Has anyone else hit this issue? > > > Next I tried a work around of editing the /etc/nova/nova.conf file and > forcing the CPU type for my guests under OpenStack > > #cpu_mode=none > cpu_mode=custom > > # Set to a named libvirt CPU model (see names listed in > # /usr/share/libvirt/cpu_map.xml). Only has effect if > # cpu_mode="custom" and virt_type="kvm|qemu" (string value) > # Deprecated group;name - DEFAULT;libvirt_cpu_model > #cpu_model= > cpu_model=Conroe To see if it's working (only for testing), you can enforce the CPU model in your CirrOS guest XML and see if the guest starts w/ `virsh start instance-foo` > Problem is qemu is still run with "-cpu host,+kvmclock" > > > So am I hitting a secondary bug with nova-compute or is there another > way to force OpenStack to select a particular CPU subset for Nova?
Can you try to edit your L1 guest XML, and ensure you just expose the 'vmx' extension which is necessary for exposing KVM (/dev/kvm character device) inside your L1: SandyBridge Alternatively, you can also try exposing the CPU element values from the below command on your L0 & L1 and see if you can reproduce the errors: $ virsh capabilities | virsh cpu-baseline /dev/stdin -- /kashyap From sharad.aggarwal85 at gmail.com Wed Jul 2 10:08:11 2014 From: sharad.aggarwal85 at gmail.com (sharad aggarwal) Date: Wed, 2 Jul 2014 15:38:11 +0530 Subject: [Rdo-list] Problem regarding mysql.pp Message-ID: Dear Admin, I am trying to install rdo latest release i.e. openstack icehouse on CentOS 6.5 (64 bit) nut I am getting following error, *Applying 192.168.11.6_prescript.pp* *192.168.11.6_prescript.pp: [ DONE ]* *Applying 192.168.11.6_mysql.pp* *Applying 192.168.11.6_amqp.pp* *192.168.11.6_mysql.pp: [ ERROR ]* *Applying Puppet manifests [ ERROR ]* *ERROR : Error appeared during Puppet run: 192.168.11.6_mysql.pp* *Package mariadb-galera-server has not been found in enabled Yum repos.* *You will find full trace in log /var/tmp/packstack/20140702-152003-VVOe1r/manifests/192.168.11.6_mysql.pp.log* *Please check log file /var/tmp/packstack/20140702-152003-VVOe1r/openstack-setup.log for more information* I would like to inform you that I have installed MariaDB-Galera-server and removed mysql-server. Earlier I was getting error with prescript.pp but that I resolved by making a timeout changes in netns.pp file. I have also executed "yum install iproute iputils". Attached file holds the output */var/tmp/packstack/20140702-152003-VVOe1r/manifests/192.168.11.6_mysql.pp.log* Please help ASAP. Thanks -- Regards, Sharad Aggarwal +91 9999 197 992 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults Warning: Variable access via 'root_password' is deprecated. Use '@root_password' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.pass.erb]:4 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.pass.erb:4:in `result') Warning: Variable access via 'root_password' is deprecated. Use '@root_password' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.pass.erb]:5 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.pass.erb:5:in `result') Warning: Variable access via 'port' is deprecated. Use '@port' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:2 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:2:in `result') Warning: Variable access via 'socket' is deprecated. Use '@socket' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:3 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:3:in `result') Warning: Variable access via 'socket' is deprecated. Use '@socket' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:5 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:5:in `result') Warning: Variable access via 'pidfile' is deprecated. Use '@pidfile' instead. 
template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:9 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:9:in `result') Warning: Variable access via 'socket' is deprecated. Use '@socket' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:10 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:10:in `result') Warning: Variable access via 'port' is deprecated. Use '@port' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:11 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:11:in `result') Warning: Variable access via 'basedir' is deprecated. Use '@basedir' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:12 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:12:in `result') Warning: Variable access via 'datadir' is deprecated. Use '@datadir' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:13 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:13:in `result') Warning: Variable access via 'bind_address' is deprecated. Use '@bind_address' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:17 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:17:in `result') Warning: Variable access via 'bind_address' is deprecated. Use '@bind_address' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:18 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:18:in `result') Warning: Variable access via 'log_error' is deprecated. Use '@log_error' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:28 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:28:in `result') Warning: Variable access via 'default_engine' is deprecated. Use '@default_engine' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:31 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:31:in `result') Warning: Variable access via 'default_engine' is deprecated. Use '@default_engine' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:32 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:32:in `result') Warning: Variable access via 'ssl' is deprecated. Use '@ssl' instead. template[/var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb]:34 (at /var/tmp/packstack/cef1081afad44c2dbfb27a5034024e02/modules/mysql/templates/my.cnf.erb:34:in `result') Notice: Compiled catalog for cloud.8.8.8.8 in environment production in 1.08 seconds Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false. 
(at /usr/lib/ruby/site_ruby/1.8/puppet/type.rb:816:in `set_default') Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-galera-server' returned 1: Error: Nothing to do Error: /Stage[main]/Mysql::Server/Package[mysql-server]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-galera-server' returned 1: Error: Nothing to do Notice: /Stage[main]/Packstack::Innodb/File[/etc/my.cnf.d/innodb.cnf]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Packstack::Innodb/File[/etc/my.cnf.d/innodb.cnf]: Skipping because of failed dependencies Notice: /Stage[main]/Packstack::Innodb/Exec[clean_innodb_logs]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Packstack::Innodb/Exec[clean_innodb_logs]: Skipping because of failed dependencies Notice: /Stage[main]/Mysql::Config/File[/etc/mysql]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Mysql::Config/File[/etc/mysql]: Skipping because of failed dependencies Notice: /Stage[main]/Mysql::Config/File[/etc/my.cnf]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Mysql::Config/File[/etc/my.cnf]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Service[mysqld]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Service[mysqld]: Skipping because of failed dependencies Notice: /Stage[main]/Mysql::Config/File[/etc/mysql/conf.d]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Mysql::Config/File[/etc/mysql/conf.d]: Skipping because of failed dependencies Notice: /Stage[main]/Mysql::Config/Exec[set_mysql_rootpw]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Mysql::Config/Exec[set_mysql_rootpw]: Skipping because of failed dependencies Notice: /Stage[main]/Mysql::Config/File[/root/.my.cnf]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Mysql::Config/File[/root/.my.cnf]: Skipping because of failed dependencies Notice: /Stage[main]/Mysql::Config/Exec[mysqld-restart]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Mysql::Config/Exec[mysqld-restart]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[@%]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[@%]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[root@::1]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[root@::1]: Skipping because of failed dependencies Notice: /Stage[main]/Glance::Db::Mysql/Mysql::Db[glance]/Database[glance]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Glance::Db::Mysql/Mysql::Db[glance]/Database[glance]: Skipping because of failed dependencies Notice: /Stage[main]/Glance::Db::Mysql/Glance::Db::Mysql::Host_access[%]/Database_user[glance@%]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Glance::Db::Mysql/Glance::Db::Mysql::Host_access[%]/Database_user[glance@%]: Skipping because of failed dependencies Error: Could not prefetch database_grant provider 'mysql': Execution of '/usr/bin/mysql --defaults-file=/root/.my.cnf mysql -Be describe user' returned 1: Could not open required defaults file: /root/.my.cnf Fatal error in defaults handling. 
Program aborted Notice: /Stage[main]/Glance::Db::Mysql/Glance::Db::Mysql::Host_access[%]/Database_grant[glance@%/glance]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Glance::Db::Mysql/Glance::Db::Mysql::Host_access[%]/Database_grant[glance@%/glance]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[root at cloud.8.8.8.8]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[root at cloud.8.8.8.8]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[@localhost]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[@localhost]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Db::Mysql/Mysql::Db[nova]/Database[nova]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Nova::Db::Mysql/Mysql::Db[nova]/Database[nova]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Db::Mysql/Nova::Db::Mysql::Host_access[%]/Database_user[nova@%]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Nova::Db::Mysql/Nova::Db::Mysql::Host_access[%]/Database_user[nova@%]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Db::Mysql/Mysql::Db[nova]/Database_user[nova at 127.0.0.1]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Nova::Db::Mysql/Mysql::Db[nova]/Database_user[nova at 127.0.0.1]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Db::Mysql/Nova::Db::Mysql::Host_access[%]/Database_grant[nova@%/nova]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Nova::Db::Mysql/Nova::Db::Mysql::Host_access[%]/Database_grant[nova@%/nova]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Db::Mysql/Mysql::Db[cinder]/Database[cinder]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Cinder::Db::Mysql/Mysql::Db[cinder]/Database[cinder]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Db::Mysql/Mysql::Db[cinder]/Database_user[cinder at 127.0.0.1]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Cinder::Db::Mysql/Mysql::Db[cinder]/Database_user[cinder at 127.0.0.1]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Db::Mysql/Mysql::Db[cinder]/Database_grant[cinder at 127.0.0.1/cinder]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Cinder::Db::Mysql/Mysql::Db[cinder]/Database_grant[cinder at 127.0.0.1/cinder]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[@cloud]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[@cloud]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[@cloud.8.8.8.8]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[@cloud.8.8.8.8]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Db::Mysql/Cinder::Db::Mysql::Host_access[%]/Database_user[cinder@%]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Cinder::Db::Mysql/Cinder::Db::Mysql::Host_access[%]/Database_user[cinder@%]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Db::Mysql/Cinder::Db::Mysql::Host_access[%]/Database_grant[cinder@%/cinder]: Dependency Package[mysql-server] has failures: true Warning: 
/Stage[main]/Cinder::Db::Mysql/Cinder::Db::Mysql::Host_access[%]/Database_grant[cinder@%/cinder]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[root at 127.0.0.1]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[root at 127.0.0.1]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Db::Mysql/Mysql::Db[nova]/Database_grant[nova at 127.0.0.1/nova]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Nova::Db::Mysql/Mysql::Db[nova]/Database_grant[nova at 127.0.0.1/nova]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database[neutron]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database[neutron]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Db::Mysql/Neutron::Db::Mysql::Host_access[%]/Database_user[neutron@%]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Neutron::Db::Mysql/Neutron::Db::Mysql::Host_access[%]/Database_user[neutron@%]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_user[neutron at 127.0.0.1]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_user[neutron at 127.0.0.1]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Db::Mysql/Neutron::Db::Mysql::Host_access[%]/Database_grant[neutron@%/neutron]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Neutron::Db::Mysql/Neutron::Db::Mysql::Host_access[%]/Database_grant[neutron@%/neutron]: Skipping because of failed dependencies Notice: /Stage[main]/Main/Database_user[root at cloud]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Main/Database_user[root at cloud]: Skipping because of failed dependencies Notice: /Stage[main]/Keystone::Db::Mysql/Mysql::Db[keystone]/Database[keystone]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Keystone::Db::Mysql/Mysql::Db[keystone]/Database[keystone]: Skipping because of failed dependencies Notice: /Stage[main]/Keystone::Db::Mysql/Mysql::Db[keystone]/Database_user[keystone_admin at 127.0.0.1]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Keystone::Db::Mysql/Mysql::Db[keystone]/Database_user[keystone_admin at 127.0.0.1]: Skipping because of failed dependencies Notice: /Stage[main]/Keystone::Db::Mysql/Mysql::Db[keystone]/Database_grant[keystone_admin at 127.0.0.1/keystone]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Keystone::Db::Mysql/Mysql::Db[keystone]/Database_grant[keystone_admin at 127.0.0.1/keystone]: Skipping because of failed dependencies Notice: /Stage[main]/Keystone::Db::Mysql/Keystone::Db::Mysql::Host_access[%]/Database_user[keystone_admin@%]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Keystone::Db::Mysql/Keystone::Db::Mysql::Host_access[%]/Database_user[keystone_admin@%]: Skipping because of failed dependencies Notice: /Stage[main]/Keystone::Db::Mysql/Keystone::Db::Mysql::Host_access[%]/Database_grant[keystone_admin@%/keystone]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Keystone::Db::Mysql/Keystone::Db::Mysql::Host_access[%]/Database_grant[keystone_admin@%/keystone]: Skipping because of failed dependencies Notice: 
/Stage[main]/Glance::Db::Mysql/Mysql::Db[glance]/Database_user[glance at 127.0.0.1]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Glance::Db::Mysql/Mysql::Db[glance]/Database_user[glance at 127.0.0.1]: Skipping because of failed dependencies Notice: /Stage[main]/Glance::Db::Mysql/Mysql::Db[glance]/Database_grant[glance at 127.0.0.1/glance]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Glance::Db::Mysql/Mysql::Db[glance]/Database_grant[glance at 127.0.0.1/glance]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: Dependency Package[mysql-server] has failures: true Warning: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: Skipping because of failed dependencies Notice: Finished catalog run in 8.32 seconds From Brad.Lodgen at centurylink.com Wed Jul 2 17:05:29 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Wed, 2 Jul 2014 17:05:29 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List Message-ID: Hi folks, I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. -Foreman host (purely for Foreman) -Controller host (applied Controller(Nova) host group) -Compute Host (applied Compute(Nova) host group) -2 other hosts (not host group applied, but one will be compute and one will be storage) Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxenham at redhat.com Wed Jul 2 17:13:46 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 2 Jul 2014 18:13:46 +0100 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: Hi Brad, Have you checked the nova-compute logs in /var/log/nova/compute.log (on your new compute node?) This should point towards why it?s unable to connect/start etc. I suspect that it?s unable to join the message queue, and hence show up as an available hypervisor. Many thanks Rhys On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: > Hi folks, > > I have an issue where I?ve just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don?t see the compute node in the hypervisor list on the dashboard. I?m using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. > > -Foreman host (purely for Foreman) > -Controller host (applied Controller(Nova) host group) > -Compute Host (applied Compute(Nova) host group) > -2 other hosts (not host group applied, but one will be compute and one will be storage) > > Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? 
> _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From Brad.Lodgen at centurylink.com Wed Jul 2 17:23:20 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Wed, 2 Jul 2014 17:23:20 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? On the compute node, I'm seeing this over and over in the compute log: Unable to connect to AMQP server: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address). Sleeping 5 seconds On the controller conductor log: Unable to connect to AMQP server: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address). Sleeping 5 seconds In the controller messages file: python: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Wednesday, July 02, 2014 12:14 PM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List Hi Brad, Have you checked the nova-compute logs in /var/log/nova/compute.log (on your new compute node?) This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. Many thanks Rhys On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: > Hi folks, > > I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. > > -Foreman host (purely for Foreman) > -Controller host (applied Controller(Nova) host group) -Compute Host > (applied Compute(Nova) host group) > -2 other hosts (not host group applied, but one will be compute and > one will be storage) > > Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From roxenham at redhat.com Wed Jul 2 17:27:24 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 2 Jul 2014 18:27:24 +0100 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: No worries! Can you paste out your /etc/qpidd.conf file from the controller? (Make sure you sanitise the output) Cheers Rhys On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: > Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? 
> > > > On the compute node, I'm seeing this over and over in the compute log: > > Unable to connect to AMQP server: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address). Sleeping 5 seconds > > On the controller conductor log: > > Unable to connect to AMQP server: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address). Sleeping 5 seconds > > In the controller messages file: > > python: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) > > > > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:14 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List > > Hi Brad, > > Have you checked the nova-compute logs in /var/log/nova/compute.log (on your new compute node?) > > This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. > > Many thanks > Rhys > > On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: > >> Hi folks, >> >> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >> >> -Foreman host (purely for Foreman) >> -Controller host (applied Controller(Nova) host group) -Compute Host >> (applied Compute(Nova) host group) >> -2 other hosts (not host group applied, but one will be compute and >> one will be storage) >> >> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > From Brad.Lodgen at centurylink.com Wed Jul 2 17:30:19 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Wed, 2 Jul 2014 17:30:19 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: # GENERATED BY PUPPET # # Configuration file for qpidd. Entries are of the form: # name=value # # (Note: no spaces on either side of '='). Using default settings: # "qpidd --help" or "man qpidd" for more details. port=5672 max-connections=65535 worker-threads=17 connection-backlog=10 auth=yes realm=QPID -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Wednesday, July 02, 2014 12:27 PM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List No worries! Can you paste out your /etc/qpidd.conf file from the controller? (Make sure you sanitise the output) Cheers Rhys On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: > Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? 
Change it to "true"? > > > > On the compute node, I'm seeing this over and over in the compute log: > > Unable to connect to AMQP server: Error in sasl_client_start (-1) > SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. > Minor code may provide more information (Cannot determine realm for > numeric host address). Sleeping 5 seconds > > On the controller conductor log: > > Unable to connect to AMQP server: Error in sasl_client_start (-1) > SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. > Minor code may provide more information (Cannot determine realm for > numeric host address). Sleeping 5 seconds > > In the controller messages file: > > python: GSSAPI Error: Unspecified GSS failure. Minor code may provide > more information (Cannot determine realm for numeric host address) > > > > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:14 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > Hi Brad, > > Have you checked the nova-compute logs in /var/log/nova/compute.log > (on your new compute node?) > > This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. > > Many thanks > Rhys > > On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: > >> Hi folks, >> >> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >> >> -Foreman host (purely for Foreman) >> -Controller host (applied Controller(Nova) host group) -Compute Host >> (applied Compute(Nova) host group) >> -2 other hosts (not host group applied, but one will be compute and >> one will be storage) >> >> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > From roxenham at redhat.com Wed Jul 2 17:34:46 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 2 Jul 2014 18:34:46 +0100 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: > # GENERATED BY PUPPET > # > # Configuration file for qpidd. Entries are of the form: > # name=value > # > # (Note: no spaces on either side of '='). Using default settings: > # "qpidd --help" or "man qpidd" for more details. > port=5672 > max-connections=65535 > worker-threads=17 > connection-backlog=10 > auth=yes > realm=QPID > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:27 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List > > No worries! > > Can you paste out your /etc/qpidd.conf file from the controller? 
(Make sure you sanitise the output) > > Cheers > Rhys > > > On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: > >> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >> >> >> >> On the compute node, I'm seeing this over and over in the compute log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> On the controller conductor log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> In the controller messages file: >> >> python: GSSAPI Error: Unspecified GSS failure. Minor code may provide >> more information (Cannot determine realm for numeric host address) >> >> >> >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:14 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> Hi Brad, >> >> Have you checked the nova-compute logs in /var/log/nova/compute.log >> (on your new compute node?) >> >> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. >> >> Many thanks >> Rhys >> >> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >> >>> Hi folks, >>> >>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>> >>> -Foreman host (purely for Foreman) >>> -Controller host (applied Controller(Nova) host group) -Compute Host >>> (applied Compute(Nova) host group) >>> -2 other hosts (not host group applied, but one will be compute and >>> one will be storage) >>> >>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> > From Lance.Fang at emc.com Wed Jul 2 17:42:26 2014 From: Lance.Fang at emc.com (Fang, Lance) Date: Wed, 2 Jul 2014 13:42:26 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) Message-ID: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> All, I am hoping you can help to resolve this. While installing RDO into a single VM, I continue to hit this problem. Appreciate any inputs .. 
== 10.110.80.62_mysql.pp: [ DONE ] 10.110.80.62_amqp.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 10.110.80.62_amqp.pp err: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: change from stopped to running failed: Could not start Service[rabbitmq-server]: Execution of '/sbin/service rabbitmq-server start' returned 1: at /var/tmp/packstack/754293704d5e4f66b3dd8532e8bd0300/modules/rabbitmq/manifests/service.pp:37 You will find full trace in log /var/tmp/packstack/20140702-123959-kJYnai/manifests/10.110.80.62_amqp.pp.log Please check log file /var/tmp/packstack/20140702-123959-kJYnai/openstack-setup.log for more information ----------------------------------------- Lance K. Fang Consultant Solutions Engineer Mobile: (510) 393-6208 ------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brad.Lodgen at centurylink.com Wed Jul 2 17:53:14 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Wed, 2 Jul 2014 17:53:14 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Wednesday, July 02, 2014 12:35 PM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: > # GENERATED BY PUPPET > # > # Configuration file for qpidd. Entries are of the form: > # name=value > # > # (Note: no spaces on either side of '='). Using default settings: > # "qpidd --help" or "man qpidd" for more details. > port=5672 > max-connections=65535 > worker-threads=17 > connection-backlog=10 > auth=yes > realm=QPID > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:27 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > No worries! > > Can you paste out your /etc/qpidd.conf file from the controller? (Make > sure you sanitise the output) > > Cheers > Rhys > > > On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: > >> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >> >> >> >> On the compute node, I'm seeing this over and over in the compute log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. 
>> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> On the controller conductor log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> In the controller messages file: >> >> python: GSSAPI Error: Unspecified GSS failure. Minor code may >> provide more information (Cannot determine realm for numeric host >> address) >> >> >> >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:14 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> Hi Brad, >> >> Have you checked the nova-compute logs in /var/log/nova/compute.log >> (on your new compute node?) >> >> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. >> >> Many thanks >> Rhys >> >> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >> >>> Hi folks, >>> >>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>> >>> -Foreman host (purely for Foreman) >>> -Controller host (applied Controller(Nova) host group) -Compute Host >>> (applied Compute(Nova) host group) >>> -2 other hosts (not host group applied, but one will be compute and >>> one will be storage) >>> >>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> > From jeckersb at redhat.com Wed Jul 2 18:00:37 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Wed, 02 Jul 2014 14:00:37 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> Message-ID: <87zjgrprne.fsf@redhat.com> "Fang, Lance" writes: > All, > > I am hoping you can help to resolve this. While installing RDO into a single VM, I continue to hit this problem. Appreciate any inputs .. > > == > > > 10.110.80.62_mysql.pp: [ DONE ] > 10.110.80.62_amqp.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 10.110.80.62_amqp.pp > err: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: change from stopped to running failed: Could not start Service[rabbitmq-server]: Execution of '/sbin/service rabbitmq-server start' returned 1: at /var/tmp/packstack/754293704d5e4f66b3dd8532e8bd0300/modules/rabbitmq/manifests/service.pp:37 > You will find full trace in log /var/tmp/packstack/20140702-123959-kJYnai/manifests/10.110.80.62_amqp.pp.log > Please check log file /var/tmp/packstack/20140702-123959-kJYnai/openstack-setup.log for more information > There's quite a few bugs in the rabbitmq-server package that might be causing this. 
Some of them have been fixed recently and some of them I am actively fixing in rawhide and backporting to F20 (assuming you are using Fedora). Can you provide the output of: rpm -q rabbitmq-server journalctl -u rabbitmq-server That should help pin down why it's failing. From Lance.Fang at emc.com Wed Jul 2 18:03:54 2014 From: Lance.Fang at emc.com (Fang, Lance) Date: Wed, 2 Jul 2014 14:03:54 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <87zjgrprne.fsf@redhat.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> Message-ID: <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> John, Thanks for the prompt response. Here is the output: [root at sse-durl-ora1 ~]# rpm -q rabbitmq-server rabbitmq-server-3.1.5-1.el6.noarch [root at sse-durl-ora1 ~]# journalctl -u rabbitmq-server (looks like the command was not found) -bash: journalctl: command not found -----Original Message----- From: John Eckersberg [mailto:jeckersb at redhat.com] Sent: Wednesday, July 02, 2014 11:01 AM To: Fang, Lance; rdo-list at redhat.com Subject: Re: [Rdo-list] ERROR while installing RDO (rabbitmq-server) "Fang, Lance" writes: > All, > > I am hoping you can help to resolve this. While installing RDO into a single VM, I continue to hit this problem. Appreciate any inputs .. > > == > > > 10.110.80.62_mysql.pp: [ DONE ] > 10.110.80.62_amqp.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 10.110.80.62_amqp.pp > err: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: > change from stopped to running failed: Could not start > Service[rabbitmq-server]: Execution of '/sbin/service rabbitmq-server > start' returned 1: at > /var/tmp/packstack/754293704d5e4f66b3dd8532e8bd0300/modules/rabbitmq/m > anifests/service.pp:37 You will find full trace in log > /var/tmp/packstack/20140702-123959-kJYnai/manifests/10.110.80.62_amqp. > pp.log Please check log file > /var/tmp/packstack/20140702-123959-kJYnai/openstack-setup.log for more > information > There's quite a few bugs in the rabbitmq-server package that might be causing this. Some of them have been fixed recently and some of them I am actively fixing in rawhide and backporting to F20 (assuming you are using Fedora). Can you provide the output of: rpm -q rabbitmq-server journalctl -u rabbitmq-server That should help pin down why it's failing. From jeckersb at redhat.com Wed Jul 2 18:50:00 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Wed, 02 Jul 2014 14:50:00 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> Message-ID: <87wqbvppd3.fsf@redhat.com> "Fang, Lance" writes: > John, > > Thanks for the prompt response. > > Here is the output: > > [root at sse-durl-ora1 ~]# rpm -q rabbitmq-server > rabbitmq-server-3.1.5-1.el6.noarch > > [root at sse-durl-ora1 ~]# journalctl -u rabbitmq-server (looks like the command was not found) > -bash: journalctl: command not found Ah ok, this is an EL6 system, not Fedora. In that case, how about the output from... 
cat /var/log/rabbitmq/startup_log cat /var/log/rabbitmq/startup_err From Lance.Fang at emc.com Wed Jul 2 18:53:26 2014 From: Lance.Fang at emc.com (Fang, Lance) Date: Wed, 2 Jul 2014 14:53:26 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <87wqbvppd3.fsf@redhat.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> <87wqbvppd3.fsf@redhat.com> Message-ID: <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> John, Yes .. this is RH. Here you go... == [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_log RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc. ## ## Licensed under the MPL. See http://www.rabbitmq.com/ ## ## ########## Logs: /var/log/rabbitmq/rabbit at sse-durl-ora1.log ###### ## /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log ########## Starting broker... BOOT FAILED =========== Error description: {could_not_start_tcp_listener,{"::",5672}} Log files (may contain more information): /var/log/rabbitmq/rabbit at sse-durl-ora1.log /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log Stack trace: [{rabbit_networking,start_listener0,4}, {rabbit_networking,'-start_listener/4-lc$^0/1-0-',4}, {rabbit_networking,start_listener,4}, {rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1}, {rabbit_networking,boot_tcp,0}, {rabbit_networking,boot,0}, {rabbit,'-run_boot_step/1-lc$^1/1-1-',1}, {rabbit,run_boot_step,1}] BOOT FAILED =========== Error description: {could_not_start,rabbit, {bad_return, {{rabbit,start,[normal,[]]}, {'EXIT', {rabbit,failure_during_boot, {could_not_start_tcp_listener,{"::",5672}}}}}}} Log files (may contain more information): /var/log/rabbitmq/rabbit at sse-durl-ora1.log /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log {"init terminating in do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{could_not_start_tcp_listener,{"::",5672}}}}}}}}} [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_err Crash dump was written to: erl_crash.dump init terminating in do_boot () == -----Original Message----- From: John Eckersberg [mailto:jeckersb at redhat.com] Sent: Wednesday, July 02, 2014 11:50 AM To: Fang, Lance; rdo-list at redhat.com Subject: RE: [Rdo-list] ERROR while installing RDO (rabbitmq-server) "Fang, Lance" writes: > John, > > Thanks for the prompt response. > > Here is the output: > > [root at sse-durl-ora1 ~]# rpm -q rabbitmq-server > rabbitmq-server-3.1.5-1.el6.noarch > > [root at sse-durl-ora1 ~]# journalctl -u rabbitmq-server (looks like the command was not found) > -bash: journalctl: command not found Ah ok, this is an EL6 system, not Fedora. In that case, how about the output from... cat /var/log/rabbitmq/startup_log cat /var/log/rabbitmq/startup_err From Brad.Lodgen at centurylink.com Wed Jul 2 19:01:10 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Wed, 2 Jul 2014 19:01:10 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard. Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere? 
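
For reference, a minimal sketch of the auth workaround described above (only a sketch; it assumes the stock Foreman-generated /etc/qpidd.conf shown earlier in the thread and an EL6 controller):

    # On the controller: turn off SASL auth on the qpid broker (workaround, not a fix)
    sed -i 's/^auth=yes/auth=no/' /etc/qpidd.conf
    service qpidd restart
    # Check the broker is listening, then confirm the compute node registers
    netstat -ntlp | grep 5672
    nova hypervisor-list    # with admin credentials sourced

The longer-term alternative would presumably be to keep auth=yes and configure qpid_username/qpid_password consistently in each service's config, with a matching SASL user on the broker; the exact Foreman host-group parameters for that aren't shown in this thread.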
-----Original Message----- From: Lodgen, Brad Sent: Wednesday, July 02, 2014 12:53 PM To: 'Rhys Oxenham' Cc: 'rdo-list at redhat.com' Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Wednesday, July 02, 2014 12:35 PM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: > # GENERATED BY PUPPET > # > # Configuration file for qpidd. Entries are of the form: > # name=value > # > # (Note: no spaces on either side of '='). Using default settings: > # "qpidd --help" or "man qpidd" for more details. > port=5672 > max-connections=65535 > worker-threads=17 > connection-backlog=10 > auth=yes > realm=QPID > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:27 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > No worries! > > Can you paste out your /etc/qpidd.conf file from the controller? (Make > sure you sanitise the output) > > Cheers > Rhys > > > On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: > >> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >> >> >> >> On the compute node, I'm seeing this over and over in the compute log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> On the controller conductor log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> In the controller messages file: >> >> python: GSSAPI Error: Unspecified GSS failure. 
Minor code may >> provide more information (Cannot determine realm for numeric host >> address) >> >> >> >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:14 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> Hi Brad, >> >> Have you checked the nova-compute logs in /var/log/nova/compute.log >> (on your new compute node?) >> >> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. >> >> Many thanks >> Rhys >> >> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >> >>> Hi folks, >>> >>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>> >>> -Foreman host (purely for Foreman) >>> -Controller host (applied Controller(Nova) host group) -Compute Host >>> (applied Compute(Nova) host group) >>> -2 other hosts (not host group applied, but one will be compute and >>> one will be storage) >>> >>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> > From jeckersb at redhat.com Wed Jul 2 19:03:58 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Wed, 02 Jul 2014 15:03:58 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> <87wqbvppd3.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> Message-ID: <87tx6zpopt.fsf@redhat.com> "Fang, Lance" writes: > John, > > Yes .. this is RH. Here you go... > > == > [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_log > > RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc. > ## ## Licensed under the MPL. See http://www.rabbitmq.com/ > ## ## > ########## Logs: /var/log/rabbitmq/rabbit at sse-durl-ora1.log > ###### ## /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log > ########## > Starting broker... 
> > BOOT FAILED > =========== > > Error description: > {could_not_start_tcp_listener,{"::",5672}} > > Log files (may contain more information): > /var/log/rabbitmq/rabbit at sse-durl-ora1.log > /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log > > Stack trace: > [{rabbit_networking,start_listener0,4}, > {rabbit_networking,'-start_listener/4-lc$^0/1-0-',4}, > {rabbit_networking,start_listener,4}, > {rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1}, > {rabbit_networking,boot_tcp,0}, > {rabbit_networking,boot,0}, > {rabbit,'-run_boot_step/1-lc$^1/1-1-',1}, > {rabbit,run_boot_step,1}] > > > > BOOT FAILED > =========== > > Error description: > {could_not_start,rabbit, > {bad_return, > {{rabbit,start,[normal,[]]}, > {'EXIT', > {rabbit,failure_during_boot, > {could_not_start_tcp_listener,{"::",5672}}}}}}} > > Log files (may contain more information): > /var/log/rabbitmq/rabbit at sse-durl-ora1.log > /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log > > {"init terminating in do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{could_not_start_tcp_listener,{"::",5672}}}}}}}}} > > > [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_err > > Crash dump was written to: erl_crash.dump > init terminating in do_boot () > OK, that looks like something else is already listening on the port. Anything in the output for: ss -lp sport = :amqp ? From Lance.Fang at emc.com Wed Jul 2 21:35:51 2014 From: Lance.Fang at emc.com (Fang, Lance) Date: Wed, 2 Jul 2014 17:35:51 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <87tx6zpopt.fsf@redhat.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> <87wqbvppd3.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> <87tx6zpopt.fsf@redhat.com> Message-ID: <95730731D64285418F19B9129C3BDC3D010E6CA323BA@MX40A.corp.emc.com> Sorry for the delay .. .Here is the output. [root at sse-durl-ora1 howto]# ss -lp sport = :amqp State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 10 :::amqp :::* users:(("qpidd",1925,12)) LISTEN 0 10 *:amqp *:* users:(("qpidd",1925,11)) -----Original Message----- From: John Eckersberg [mailto:jeckersb at redhat.com] Sent: Wednesday, July 02, 2014 12:04 PM To: Fang, Lance; rdo-list at redhat.com Subject: RE: [Rdo-list] ERROR while installing RDO (rabbitmq-server) "Fang, Lance" writes: > John, > > Yes .. this is RH. Here you go... > > == > [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_log > > RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc. > ## ## Licensed under the MPL. See http://www.rabbitmq.com/ > ## ## > ########## Logs: /var/log/rabbitmq/rabbit at sse-durl-ora1.log > ###### ## /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log > ########## > Starting broker... 
> > BOOT FAILED > =========== > > Error description: > {could_not_start_tcp_listener,{"::",5672}} > > Log files (may contain more information): > /var/log/rabbitmq/rabbit at sse-durl-ora1.log > /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log > > Stack trace: > [{rabbit_networking,start_listener0,4}, > {rabbit_networking,'-start_listener/4-lc$^0/1-0-',4}, > {rabbit_networking,start_listener,4}, > {rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1}, > {rabbit_networking,boot_tcp,0}, > {rabbit_networking,boot,0}, > {rabbit,'-run_boot_step/1-lc$^1/1-1-',1}, > {rabbit,run_boot_step,1}] > > > > BOOT FAILED > =========== > > Error description: > {could_not_start,rabbit, > {bad_return, > {{rabbit,start,[normal,[]]}, > {'EXIT', > {rabbit,failure_during_boot, > {could_not_start_tcp_listener,{"::",5672}}}}}}} > > Log files (may contain more information): > /var/log/rabbitmq/rabbit at sse-durl-ora1.log > /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log > > {"init terminating in > do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_retu > rn,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{co > uld_not_start_tcp_listener,{"::",5672}}}}}}}}} > > > [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_err > > Crash dump was written to: erl_crash.dump init terminating in do_boot > () > OK, that looks like something else is already listening on the port. Anything in the output for: ss -lp sport = :amqp ? From roxenham at redhat.com Wed Jul 2 22:14:47 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 2 Jul 2014 23:14:47 +0100 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <95730731D64285418F19B9129C3BDC3D010E6CA323BA@MX40A.corp.emc.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> <87wqbvppd3.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> <87tx6zpopt.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA323BA@MX40A.corp.emc.com> Message-ID: <28331C9F-307F-4943-8735-E5DCAB4B53CA@redhat.com> On 2 Jul 2014, at 22:35, Fang, Lance wrote: > Sorry for the delay .. .Here is the output. > > [root at sse-durl-ora1 howto]# ss -lp sport = :amqp > State Recv-Q Send-Q Local Address:Port Peer Address:Port > LISTEN 0 10 :::amqp :::* users:(("qpidd",1925,12)) > LISTEN 0 10 *:amqp *:* users:(("qpidd",1925,11)) > > Looks like you?ve somehow got qpid installed and running too. Firstly, I?d stop and disable this service: service qpidd stop && chkconfig qpidd off Then attempt to restart RabbitMQ (or re-run packstack) Did you attempt this installation on a clean system or one that already had a previous OpenStack installation? > > -----Original Message----- > From: John Eckersberg [mailto:jeckersb at redhat.com] > Sent: Wednesday, July 02, 2014 12:04 PM > To: Fang, Lance; rdo-list at redhat.com > Subject: RE: [Rdo-list] ERROR while installing RDO (rabbitmq-server) > > "Fang, Lance" writes: >> John, >> >> Yes .. this is RH. Here you go... >> >> == >> [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_log >> >> RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc. >> ## ## Licensed under the MPL. See http://www.rabbitmq.com/ >> ## ## >> ########## Logs: /var/log/rabbitmq/rabbit at sse-durl-ora1.log >> ###### ## /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log >> ########## >> Starting broker... 
>> >> BOOT FAILED >> =========== >> >> Error description: >> {could_not_start_tcp_listener,{"::",5672}} >> >> Log files (may contain more information): >> /var/log/rabbitmq/rabbit at sse-durl-ora1.log >> /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log >> >> Stack trace: >> [{rabbit_networking,start_listener0,4}, >> {rabbit_networking,'-start_listener/4-lc$^0/1-0-',4}, >> {rabbit_networking,start_listener,4}, >> {rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1}, >> {rabbit_networking,boot_tcp,0}, >> {rabbit_networking,boot,0}, >> {rabbit,'-run_boot_step/1-lc$^1/1-1-',1}, >> {rabbit,run_boot_step,1}] >> >> >> >> BOOT FAILED >> =========== >> >> Error description: >> {could_not_start,rabbit, >> {bad_return, >> {{rabbit,start,[normal,[]]}, >> {'EXIT', >> {rabbit,failure_during_boot, >> {could_not_start_tcp_listener,{"::",5672}}}}}}} >> >> Log files (may contain more information): >> /var/log/rabbitmq/rabbit at sse-durl-ora1.log >> /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log >> >> {"init terminating in >> do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_retu >> rn,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{co >> uld_not_start_tcp_listener,{"::",5672}}}}}}}}} >> >> >> [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_err >> >> Crash dump was written to: erl_crash.dump init terminating in do_boot >> () >> > > OK, that looks like something else is already listening on the port. > Anything in the output for: > > ss -lp sport = :amqp > > ? > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From Lance.Fang at emc.com Wed Jul 2 22:53:39 2014 From: Lance.Fang at emc.com (Fang, Lance) Date: Wed, 2 Jul 2014 18:53:39 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <28331C9F-307F-4943-8735-E5DCAB4B53CA@redhat.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> <87wqbvppd3.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> <87tx6zpopt.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA323BA@MX40A.corp.emc.com> <28331C9F-307F-4943-8735-E5DCAB4B53CA@redhat.com> Message-ID: <95730731D64285418F19B9129C3BDC3D010E6CA323C6@MX40A.corp.emc.com> Guys, Thanks ... Executed the following and seems like I got passed the rabbitmq issue but not out of the wood as yet. service qpidd stop chkconfig qpidd off HammerRemoveOpenStack.sh packstack --allinone Now I am hitting keystone error below. From log /var/tmp/packstack/20140702-174029-Wvlkfj/manifests/10.110.80.62_keystone.pp.log. Appreciate your continue help ... == warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_service[cinderv2]: Skipping because of failed dependencies notice: /Stage[main]/Swift::Keystone::Auth/Keystone_role[SwiftOperator]: Dependency Service[keystone] has failures: true warning: /Stage[main]/Swift::Keystone::Auth/Keystone_role[SwiftOperator]: Skipping because of failed dependencies err: Could not prefetch keystone_endpoint provider 'keystone': Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ e ndpoint-list' returned 1: /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should r ebuild using libgmp >= 5 to avoid timing attack vulnerability. _warn("Not using mpz_powm_sec. 
You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning) Unable to establish connection to http://127.0.0.1:35357/v2.0/endpoints When manually execute command: [root at sse-durl-ora1 ~]# /sbin/service openstack-keystone start Starting keystone: [FAILED] -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Wednesday, July 02, 2014 3:15 PM To: Fang, Lance Cc: John Eckersberg; rdo-list at redhat.com Subject: Re: [Rdo-list] ERROR while installing RDO (rabbitmq-server) On 2 Jul 2014, at 22:35, Fang, Lance wrote: > Sorry for the delay .. .Here is the output. > > [root at sse-durl-ora1 howto]# ss -lp sport = :amqp > State Recv-Q Send-Q Local Address:Port Peer Address:Port > LISTEN 0 10 :::amqp :::* users:(("qpidd",1925,12)) > LISTEN 0 10 *:amqp *:* users:(("qpidd",1925,11)) > > Looks like you've somehow got qpid installed and running too. Firstly, I'd stop and disable this service: service qpidd stop && chkconfig qpidd off Then attempt to restart RabbitMQ (or re-run packstack) Did you attempt this installation on a clean system or one that already had a previous OpenStack installation? > > -----Original Message----- > From: John Eckersberg [mailto:jeckersb at redhat.com] > Sent: Wednesday, July 02, 2014 12:04 PM > To: Fang, Lance; rdo-list at redhat.com > Subject: RE: [Rdo-list] ERROR while installing RDO (rabbitmq-server) > > "Fang, Lance" writes: >> John, >> >> Yes .. this is RH. Here you go... >> >> == >> [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_log >> >> RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc. >> ## ## Licensed under the MPL. See http://www.rabbitmq.com/ >> ## ## >> ########## Logs: /var/log/rabbitmq/rabbit at sse-durl-ora1.log >> ###### ## /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log >> ########## >> Starting broker... >> >> BOOT FAILED >> =========== >> >> Error description: >> {could_not_start_tcp_listener,{"::",5672}} >> >> Log files (may contain more information): >> /var/log/rabbitmq/rabbit at sse-durl-ora1.log >> /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log >> >> Stack trace: >> [{rabbit_networking,start_listener0,4}, >> {rabbit_networking,'-start_listener/4-lc$^0/1-0-',4}, >> {rabbit_networking,start_listener,4}, >> {rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1}, >> {rabbit_networking,boot_tcp,0}, >> {rabbit_networking,boot,0}, >> {rabbit,'-run_boot_step/1-lc$^1/1-1-',1}, >> {rabbit,run_boot_step,1}] >> >> >> >> BOOT FAILED >> =========== >> >> Error description: >> {could_not_start,rabbit, >> {bad_return, >> {{rabbit,start,[normal,[]]}, >> {'EXIT', >> {rabbit,failure_during_boot, >> {could_not_start_tcp_listener,{"::",5672}}}}}}} >> >> Log files (may contain more information): >> /var/log/rabbitmq/rabbit at sse-durl-ora1.log >> /var/log/rabbitmq/rabbit at sse-durl-ora1-sasl.log >> >> {"init terminating in >> do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_ret >> u >> rn,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{c >> o uld_not_start_tcp_listener,{"::",5672}}}}}}}}} >> >> >> [root at sse-durl-ora1 howto]# cat /var/log/rabbitmq/startup_err >> >> Crash dump was written to: erl_crash.dump init terminating in do_boot >> () >> > > OK, that looks like something else is already listening on the port. > Anything in the output for: > > ss -lp sport = :amqp > > ? 
> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From ihrachys at redhat.com Thu Jul 3 08:45:20 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 03 Jul 2014 10:45:20 +0200 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <95730731D64285418F19B9129C3BDC3D010E6CA323C6@MX40A.corp.emc.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> <87wqbvppd3.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> <87tx6zpopt.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA323BA@MX40A.corp.emc.com> <28331C9F-307F-4943-8735-E5DCAB4B53CA@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA323C6@MX40A.corp.emc.com> Message-ID: <53B51820.6020504@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 03/07/14 00:53, Fang, Lance wrote: > PowmInsecureWarning: Not using mpz_powm_sec. You should r ebuild > using libgmp >= 5 to avoid timing attack vulnerability Do you have all the needed repos enabled for yum? See: http://openstack.redhat.com/Repositories /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTtRgfAAoJEC5aWaUY1u57VOwIAI+bKVyZ7IkAIyLCZBeTYwgE J4ecYKv/LerCel/lFlJGhw1KApdqS9VvFJibGFpQlHtPA/DEgoENPcpxEkAaXB/z BXd/6Cm/H+d6TL1bSPK89bKn2FIZnnw0koTXUTkV4nTX+Kt3O5ojo/jWpL1HP/x2 LGUqIQZUkQyr2NbRR8LL7UnAQZM8PXFWLST0XAIOXWXwxwDMl5pcENJucT5iC5cR DLbNs8mtm7OgQG5+eTic2OvIVv8LY8ufbeOqr79MoB2FNIWUnw6aUMwiFJe4umsM mHNFGn4RQk/wr8cfRuC2sOA5uZUSh4XF1JiwLis6McML/7C8OblJ5bODSzwXolA= =NPPV -----END PGP SIGNATURE----- From Brad.Lodgen at centurylink.com Thu Jul 3 16:11:24 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Thu, 3 Jul 2014 16:11:24 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes? Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute. For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that. -----Original Message----- From: Lodgen, Brad Sent: Wednesday, July 02, 2014 2:01 PM To: 'Rhys Oxenham' Cc: 'rdo-list at redhat.com' Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard. Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere? -----Original Message----- From: Lodgen, Brad Sent: Wednesday, July 02, 2014 12:53 PM To: 'Rhys Oxenham' Cc: 'rdo-list at redhat.com' Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. 
I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Wednesday, July 02, 2014 12:35 PM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: > # GENERATED BY PUPPET > # > # Configuration file for qpidd. Entries are of the form: > # name=value > # > # (Note: no spaces on either side of '='). Using default settings: > # "qpidd --help" or "man qpidd" for more details. > port=5672 > max-connections=65535 > worker-threads=17 > connection-backlog=10 > auth=yes > realm=QPID > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:27 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > No worries! > > Can you paste out your /etc/qpidd.conf file from the controller? (Make > sure you sanitise the output) > > Cheers > Rhys > > > On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: > >> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >> >> >> >> On the compute node, I'm seeing this over and over in the compute log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> On the controller conductor log: >> >> Unable to connect to AMQP server: Error in sasl_client_start (-1) >> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >> Minor code may provide more information (Cannot determine realm for >> numeric host address). Sleeping 5 seconds >> >> In the controller messages file: >> >> python: GSSAPI Error: Unspecified GSS failure. Minor code may >> provide more information (Cannot determine realm for numeric host >> address) >> >> >> >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:14 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> Hi Brad, >> >> Have you checked the nova-compute logs in /var/log/nova/compute.log >> (on your new compute node?) >> >> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. 
>> >> Many thanks >> Rhys >> >> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >> >>> Hi folks, >>> >>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>> >>> -Foreman host (purely for Foreman) >>> -Controller host (applied Controller(Nova) host group) -Compute Host >>> (applied Compute(Nova) host group) >>> -2 other hosts (not host group applied, but one will be compute and >>> one will be storage) >>> >>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >> > From roxenham at redhat.com Thu Jul 3 16:12:31 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Thu, 3 Jul 2014 17:12:31 +0100 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: Message-ID: <939809EC-71B5-41BF-BC10-B14721D6E6B3@redhat.com> Sorry I didn?t respond to this? I have auth set to no in my environment, but that?s just for testing. Do things work when auth is set to no and the service is restarted? On 3 Jul 2014, at 17:11, Lodgen, Brad wrote: > Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes? > > Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute. > > For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that. > > > > > -----Original Message----- > From: Lodgen, Brad > Sent: Wednesday, July 02, 2014 2:01 PM > To: 'Rhys Oxenham' > Cc: 'rdo-list at redhat.com' > Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List > > So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard. > > Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere? > > > > -----Original Message----- > From: Lodgen, Brad > Sent: Wednesday, July 02, 2014 12:53 PM > To: 'Rhys Oxenham' > Cc: 'rdo-list at redhat.com' > Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List > > The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. > > I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. > > I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? 
> > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:35 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List > > Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. > > On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: > >> # GENERATED BY PUPPET >> # >> # Configuration file for qpidd. Entries are of the form: >> # name=value >> # >> # (Note: no spaces on either side of '='). Using default settings: >> # "qpidd --help" or "man qpidd" for more details. >> port=5672 >> max-connections=65535 >> worker-threads=17 >> connection-backlog=10 >> auth=yes >> realm=QPID >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:27 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> No worries! >> >> Can you paste out your /etc/qpidd.conf file from the controller? (Make >> sure you sanitise the output) >> >> Cheers >> Rhys >> >> >> On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: >> >>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >>> >>> >>> >>> On the compute node, I'm seeing this over and over in the compute log: >>> >>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>> Minor code may provide more information (Cannot determine realm for >>> numeric host address). Sleeping 5 seconds >>> >>> On the controller conductor log: >>> >>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>> Minor code may provide more information (Cannot determine realm for >>> numeric host address). Sleeping 5 seconds >>> >>> In the controller messages file: >>> >>> python: GSSAPI Error: Unspecified GSS failure. Minor code may >>> provide more information (Cannot determine realm for numeric host >>> address) >>> >>> >>> >>> >>> >>> -----Original Message----- >>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>> Sent: Wednesday, July 02, 2014 12:14 PM >>> To: Lodgen, Brad >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>> First Compute Node Doesn't Show Up in Hypervisor List >>> >>> Hi Brad, >>> >>> Have you checked the nova-compute logs in /var/log/nova/compute.log >>> (on your new compute node?) >>> >>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. >>> >>> Many thanks >>> Rhys >>> >>> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >>> >>>> Hi folks, >>>> >>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. 
>>>> >>>> -Foreman host (purely for Foreman) >>>> -Controller host (applied Controller(Nova) host group) -Compute Host >>>> (applied Compute(Nova) host group) >>>> -2 other hosts (not host group applied, but one will be compute and >>>> one will be storage) >>>> >>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >> > From Brad.Lodgen at centurylink.com Thu Jul 3 16:16:19 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Thu, 3 Jul 2014 16:16:19 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: <939809EC-71B5-41BF-BC10-B14721D6E6B3@redhat.com> References: <939809EC-71B5-41BF-BC10-B14721D6E6B3@redhat.com> Message-ID: No worries. I understand you're busy and thank you for the assistance. To answer your question: yes, following the change and qpidd restart, the logs showed successful communication and the initial compute node showed up as a hypervisor in the dashboard. I also successfully added a second compute node. Success here relies upon disabling the puppet agent so it doesn't change auth back to yes; otherwise, communication fails. -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Thursday, July 03, 2014 11:13 AM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List Sorry I didn't respond to this... I have auth set to no in my environment, but that's just for testing. Do things work when auth is set to no and the service is restarted? On 3 Jul 2014, at 17:11, Lodgen, Brad wrote: > Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes? > > Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute. > > For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that. > > > > > -----Original Message----- > From: Lodgen, Brad > Sent: Wednesday, July 02, 2014 2:01 PM > To: 'Rhys Oxenham' > Cc: 'rdo-list at redhat.com' > Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard. > > Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere? > > > > -----Original Message----- > From: Lodgen, Brad > Sent: Wednesday, July 02, 2014 12:53 PM > To: 'Rhys Oxenham' > Cc: 'rdo-list at redhat.com' > Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. > > I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. 
> > I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 02, 2014 12:35 PM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. > > On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: > >> # GENERATED BY PUPPET >> # >> # Configuration file for qpidd. Entries are of the form: >> # name=value >> # >> # (Note: no spaces on either side of '='). Using default settings: >> # "qpidd --help" or "man qpidd" for more details. >> port=5672 >> max-connections=65535 >> worker-threads=17 >> connection-backlog=10 >> auth=yes >> realm=QPID >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:27 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> No worries! >> >> Can you paste out your /etc/qpidd.conf file from the controller? >> (Make sure you sanitise the output) >> >> Cheers >> Rhys >> >> >> On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: >> >>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >>> >>> >>> >>> On the compute node, I'm seeing this over and over in the compute log: >>> >>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>> Minor code may provide more information (Cannot determine realm for >>> numeric host address). Sleeping 5 seconds >>> >>> On the controller conductor log: >>> >>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>> Minor code may provide more information (Cannot determine realm for >>> numeric host address). Sleeping 5 seconds >>> >>> In the controller messages file: >>> >>> python: GSSAPI Error: Unspecified GSS failure. Minor code may >>> provide more information (Cannot determine realm for numeric host >>> address) >>> >>> >>> >>> >>> >>> -----Original Message----- >>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>> Sent: Wednesday, July 02, 2014 12:14 PM >>> To: Lodgen, Brad >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>> First Compute Node Doesn't Show Up in Hypervisor List >>> >>> Hi Brad, >>> >>> Have you checked the nova-compute logs in /var/log/nova/compute.log >>> (on your new compute node?) >>> >>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. 
>>> >>> Many thanks >>> Rhys >>> >>> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >>> >>>> Hi folks, >>>> >>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>>> >>>> -Foreman host (purely for Foreman) >>>> -Controller host (applied Controller(Nova) host group) -Compute >>>> Host (applied Compute(Nova) host group) >>>> -2 other hosts (not host group applied, but one will be compute and >>>> one will be storage) >>>> >>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >> > From roxenham at redhat.com Thu Jul 3 16:22:17 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Thu, 3 Jul 2014 17:22:17 +0100 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: <939809EC-71B5-41BF-BC10-B14721D6E6B3@redhat.com> Message-ID: OK looks like a bug somewhere? if qpid auth is enabled it requires the authentication mechanism to be completed properly. See: http://qpid.apache.org/releases/qpid-0.14/books/AMQP-Messaging-Broker-CPP-Book/html/ch01s05.html >From looking at puppet-qpid it should have done this for you. Have you been able to reproduce this issue on a clean system? Cheers Rhys On 3 Jul 2014, at 17:16, Lodgen, Brad wrote: > No worries. I understand you're busy and thank you for the assistance. To answer your question: yes, following the change and qpidd restart, the logs showed successful communication and the initial compute node showed up as a hypervisor in the dashboard. I also successfully added a second compute node. Success here relies upon disabling the puppet agent so it doesn't change auth back to yes; otherwise, communication fails. > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Thursday, July 03, 2014 11:13 AM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List > > Sorry I didn't respond to this... I have auth set to no in my environment, but that's just for testing. Do things work when auth is set to no and the service is restarted? > > On 3 Jul 2014, at 17:11, Lodgen, Brad wrote: > >> Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes? >> >> Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute. >> >> For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that. >> >> >> >> >> -----Original Message----- >> From: Lodgen, Brad >> Sent: Wednesday, July 02, 2014 2:01 PM >> To: 'Rhys Oxenham' >> Cc: 'rdo-list at redhat.com' >> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. 
I now see my compute host in the dashboard. >> >> Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere? >> >> >> >> -----Original Message----- >> From: Lodgen, Brad >> Sent: Wednesday, July 02, 2014 12:53 PM >> To: 'Rhys Oxenham' >> Cc: 'rdo-list at redhat.com' >> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. >> >> I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. >> >> I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:35 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. >> >> On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: >> >>> # GENERATED BY PUPPET >>> # >>> # Configuration file for qpidd. Entries are of the form: >>> # name=value >>> # >>> # (Note: no spaces on either side of '='). Using default settings: >>> # "qpidd --help" or "man qpidd" for more details. >>> port=5672 >>> max-connections=65535 >>> worker-threads=17 >>> connection-backlog=10 >>> auth=yes >>> realm=QPID >>> >>> >>> -----Original Message----- >>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>> Sent: Wednesday, July 02, 2014 12:27 PM >>> To: Lodgen, Brad >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>> First Compute Node Doesn't Show Up in Hypervisor List >>> >>> No worries! >>> >>> Can you paste out your /etc/qpidd.conf file from the controller? >>> (Make sure you sanitise the output) >>> >>> Cheers >>> Rhys >>> >>> >>> On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: >>> >>>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >>>> >>>> >>>> >>>> On the compute node, I'm seeing this over and over in the compute log: >>>> >>>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>>> Minor code may provide more information (Cannot determine realm for >>>> numeric host address). Sleeping 5 seconds >>>> >>>> On the controller conductor log: >>>> >>>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>>> Minor code may provide more information (Cannot determine realm for >>>> numeric host address). Sleeping 5 seconds >>>> >>>> In the controller messages file: >>>> >>>> python: GSSAPI Error: Unspecified GSS failure. 
Minor code may >>>> provide more information (Cannot determine realm for numeric host >>>> address) >>>> >>>> >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>>> Sent: Wednesday, July 02, 2014 12:14 PM >>>> To: Lodgen, Brad >>>> Cc: rdo-list at redhat.com >>>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>>> First Compute Node Doesn't Show Up in Hypervisor List >>>> >>>> Hi Brad, >>>> >>>> Have you checked the nova-compute logs in /var/log/nova/compute.log >>>> (on your new compute node?) >>>> >>>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. >>>> >>>> Many thanks >>>> Rhys >>>> >>>> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >>>> >>>>> Hi folks, >>>>> >>>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>>>> >>>>> -Foreman host (purely for Foreman) >>>>> -Controller host (applied Controller(Nova) host group) -Compute >>>>> Host (applied Compute(Nova) host group) >>>>> -2 other hosts (not host group applied, but one will be compute and >>>>> one will be storage) >>>>> >>>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>> >> > From Brad.Lodgen at centurylink.com Thu Jul 3 16:51:40 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Thu, 3 Jul 2014 16:51:40 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: <939809EC-71B5-41BF-BC10-B14721D6E6B3@redhat.com> Message-ID: Well, it had the same result with the second compute node I brought up which was a fresh system with RHEL6.5/RHOS package updates. I checked the nova.conf on controller and both compute nodes. All the same configuration, username, passwords, everything. rpc_backend=nova.openstack.common.rpc.impl_qpid qpid_hostname={controller node private IP} qpid_port=5672 #qpid_hosts=$qpid_hostname:$qpid_port qpid_username={same username} qpid_password={same password} #qpid_sasl_mechanisms= qpid_heartbeat=60 qpid_protocol=tcp qpid_tcp_nodelay=True #qpid_topology_version=1 Should the qpid client be installed on the compute nodes? Because this page notes that doing it manually, it should be (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Installation_and_Configuration_Guide/Configuring_Compute_Block_Storage.html) but, as you can see via Foreman host group deployment it IS installed on the controller and IS NOT on the compute nodes. [root at ctlr ~]# yum list installed | grep qpid This system is receiving updates from Red Hat Subscription Management. python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms qpid-cpp-client.x86_64 0.14-22.el6_3 @rhel-6-server-rpms qpid-cpp-server.x86_64 0.14-22.el6_3 @rhel-6-server-rpms [root at comp1 ~]# yum list installed | grep qpid This system is receiving updates from Red Hat Subscription Management. 
python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms [root at comp2 ~]# yum list installed | grep qpid This system is receiving updates from Red Hat Subscription Management. python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Thursday, July 03, 2014 11:22 AM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List OK looks like a bug somewhere... if qpid auth is enabled it requires the authentication mechanism to be completed properly. See: http://qpid.apache.org/releases/qpid-0.14/books/AMQP-Messaging-Broker-CPP-Book/html/ch01s05.html >From looking at puppet-qpid it should have done this for you. Have you been able to reproduce this issue on a clean system? Cheers Rhys On 3 Jul 2014, at 17:16, Lodgen, Brad wrote: > No worries. I understand you're busy and thank you for the assistance. To answer your question: yes, following the change and qpidd restart, the logs showed successful communication and the initial compute node showed up as a hypervisor in the dashboard. I also successfully added a second compute node. Success here relies upon disabling the puppet agent so it doesn't change auth back to yes; otherwise, communication fails. > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Thursday, July 03, 2014 11:13 AM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > Sorry I didn't respond to this... I have auth set to no in my environment, but that's just for testing. Do things work when auth is set to no and the service is restarted? > > On 3 Jul 2014, at 17:11, Lodgen, Brad wrote: > >> Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes? >> >> Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute. >> >> For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that. >> >> >> >> >> -----Original Message----- >> From: Lodgen, Brad >> Sent: Wednesday, July 02, 2014 2:01 PM >> To: 'Rhys Oxenham' >> Cc: 'rdo-list at redhat.com' >> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard. >> >> Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere? >> >> >> >> -----Original Message----- >> From: Lodgen, Brad >> Sent: Wednesday, July 02, 2014 12:53 PM >> To: 'Rhys Oxenham' >> Cc: 'rdo-list at redhat.com' >> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. >> >> I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. 
>> >> I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:35 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. >> >> On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: >> >>> # GENERATED BY PUPPET >>> # >>> # Configuration file for qpidd. Entries are of the form: >>> # name=value >>> # >>> # (Note: no spaces on either side of '='). Using default settings: >>> # "qpidd --help" or "man qpidd" for more details. >>> port=5672 >>> max-connections=65535 >>> worker-threads=17 >>> connection-backlog=10 >>> auth=yes >>> realm=QPID >>> >>> >>> -----Original Message----- >>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>> Sent: Wednesday, July 02, 2014 12:27 PM >>> To: Lodgen, Brad >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>> First Compute Node Doesn't Show Up in Hypervisor List >>> >>> No worries! >>> >>> Can you paste out your /etc/qpidd.conf file from the controller? >>> (Make sure you sanitise the output) >>> >>> Cheers >>> Rhys >>> >>> >>> On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: >>> >>>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >>>> >>>> >>>> >>>> On the compute node, I'm seeing this over and over in the compute log: >>>> >>>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>>> Minor code may provide more information (Cannot determine realm for >>>> numeric host address). Sleeping 5 seconds >>>> >>>> On the controller conductor log: >>>> >>>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>>> Minor code may provide more information (Cannot determine realm for >>>> numeric host address). Sleeping 5 seconds >>>> >>>> In the controller messages file: >>>> >>>> python: GSSAPI Error: Unspecified GSS failure. Minor code may >>>> provide more information (Cannot determine realm for numeric host >>>> address) >>>> >>>> >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>>> Sent: Wednesday, July 02, 2014 12:14 PM >>>> To: Lodgen, Brad >>>> Cc: rdo-list at redhat.com >>>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>>> First Compute Node Doesn't Show Up in Hypervisor List >>>> >>>> Hi Brad, >>>> >>>> Have you checked the nova-compute logs in /var/log/nova/compute.log >>>> (on your new compute node?) >>>> >>>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. 
>>>> >>>> Many thanks >>>> Rhys >>>> >>>> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >>>> >>>>> Hi folks, >>>>> >>>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>>>> >>>>> -Foreman host (purely for Foreman) -Controller host (applied >>>>> Controller(Nova) host group) -Compute Host (applied Compute(Nova) >>>>> host group) >>>>> -2 other hosts (not host group applied, but one will be compute >>>>> and one will be storage) >>>>> >>>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>> >> > From Brad.Lodgen at centurylink.com Thu Jul 3 17:11:37 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Thu, 3 Jul 2014 17:11:37 +0000 Subject: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List In-Reply-To: References: <939809EC-71B5-41BF-BC10-B14721D6E6B3@redhat.com> Message-ID: I think I figured it out. I started looking into saslauthd configuration and the configuration looked right, so I checked the service status, it was off. I checked the chkconfig status of saslauthd, and it was off for all init levels. I ran "/etc/init.d/saslauthd start", it turned on, so I changed the /etc/qpidd.conf "auth=no" to "auth=yes", and restarted qpidd service while tailing the /var/log/nova/compute.log of my compute node. It had 2 failure notices immediately, but then right after said communication was successful, and all my hypervisors are showing in the dashboard, communication in logs still looks good. I guess for some reason the Foreman controller host group doesn't turn on saslauthd service and doesn't turn on chkconfig for saslauthd? -----Original Message----- From: Lodgen, Brad Sent: Thursday, July 03, 2014 11:52 AM To: 'Rhys Oxenham' Cc: 'rdo-list at redhat.com' Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List Well, it had the same result with the second compute node I brought up which was a fresh system with RHEL6.5/RHOS package updates. I checked the nova.conf on controller and both compute nodes. All the same configuration, username, passwords, everything. rpc_backend=nova.openstack.common.rpc.impl_qpid qpid_hostname={controller node private IP} qpid_port=5672 #qpid_hosts=$qpid_hostname:$qpid_port qpid_username={same username} qpid_password={same password} #qpid_sasl_mechanisms= qpid_heartbeat=60 qpid_protocol=tcp qpid_tcp_nodelay=True #qpid_topology_version=1 Should the qpid client be installed on the compute nodes? Because this page notes that doing it manually, it should be (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Installation_and_Configuration_Guide/Configuring_Compute_Block_Storage.html) but, as you can see via Foreman host group deployment it IS installed on the controller and IS NOT on the compute nodes. [root at ctlr ~]# yum list installed | grep qpid This system is receiving updates from Red Hat Subscription Management. 
python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms qpid-cpp-client.x86_64 0.14-22.el6_3 @rhel-6-server-rpms qpid-cpp-server.x86_64 0.14-22.el6_3 @rhel-6-server-rpms [root at comp1 ~]# yum list installed | grep qpid This system is receiving updates from Red Hat Subscription Management. python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms [root at comp2 ~]# yum list installed | grep qpid This system is receiving updates from Red Hat Subscription Management. python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: Thursday, July 03, 2014 11:22 AM To: Lodgen, Brad Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List OK looks like a bug somewhere... if qpid auth is enabled it requires the authentication mechanism to be completed properly. See: http://qpid.apache.org/releases/qpid-0.14/books/AMQP-Messaging-Broker-CPP-Book/html/ch01s05.html >From looking at puppet-qpid it should have done this for you. Have you been able to reproduce this issue on a clean system? Cheers Rhys On 3 Jul 2014, at 17:16, Lodgen, Brad wrote: > No worries. I understand you're busy and thank you for the assistance. To answer your question: yes, following the change and qpidd restart, the logs showed successful communication and the initial compute node showed up as a hypervisor in the dashboard. I also successfully added a second compute node. Success here relies upon disabling the puppet agent so it doesn't change auth back to yes; otherwise, communication fails. > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Thursday, July 03, 2014 11:13 AM > To: Lodgen, Brad > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: > First Compute Node Doesn't Show Up in Hypervisor List > > Sorry I didn't respond to this... I have auth set to no in my environment, but that's just for testing. Do things work when auth is set to no and the service is restarted? > > On 3 Jul 2014, at 17:11, Lodgen, Brad wrote: > >> Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes? >> >> Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute. >> >> For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that. >> >> >> >> >> -----Original Message----- >> From: Lodgen, Brad >> Sent: Wednesday, July 02, 2014 2:01 PM >> To: 'Rhys Oxenham' >> Cc: 'rdo-list at redhat.com' >> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard. >> >> Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere? 
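For reference, the moving parts being described here (qpidd running with auth=yes, saslauthd, and the qpid_username/qpid_password that nova.conf sends) have to line up roughly as below on RHEL 6. This is only a sketch: /var/lib/qpidd/qpidd.sasldb is the usual qpid-cpp-server database path and 'qpidauth' is an illustrative username, not necessarily what the Foreman host groups provision:

chkconfig saslauthd on
service saslauthd start
# create the account nova.conf authenticates with, in the realm from qpidd.conf (realm=QPID)
saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID qpidauth
sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
service qpidd restart

If those pieces match and the compute nodes still log the GSSAPI/SASL failure, explicitly setting qpid_sasl_mechanisms=PLAIN in nova.conf (it is commented out in the nova.conf pasted above) is also worth trying, since a GSSAPI error suggests Kerberos is being negotiated instead of a plain login.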
>> >> >> >> -----Original Message----- >> From: Lodgen, Brad >> Sent: Wednesday, July 02, 2014 12:53 PM >> To: 'Rhys Oxenham' >> Cc: 'rdo-list at redhat.com' >> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller. >> >> I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not. >> >> I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect? >> >> >> -----Original Message----- >> From: Rhys Oxenham [mailto:roxenham at redhat.com] >> Sent: Wednesday, July 02, 2014 12:35 PM >> To: Lodgen, Brad >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >> First Compute Node Doesn't Show Up in Hypervisor List >> >> Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are. >> >> On 2 Jul 2014, at 18:30, Lodgen, Brad wrote: >> >>> # GENERATED BY PUPPET >>> # >>> # Configuration file for qpidd. Entries are of the form: >>> # name=value >>> # >>> # (Note: no spaces on either side of '='). Using default settings: >>> # "qpidd --help" or "man qpidd" for more details. >>> port=5672 >>> max-connections=65535 >>> worker-threads=17 >>> connection-backlog=10 >>> auth=yes >>> realm=QPID >>> >>> >>> -----Original Message----- >>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>> Sent: Wednesday, July 02, 2014 12:27 PM >>> To: Lodgen, Brad >>> Cc: rdo-list at redhat.com >>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>> First Compute Node Doesn't Show Up in Hypervisor List >>> >>> No worries! >>> >>> Can you paste out your /etc/qpidd.conf file from the controller? >>> (Make sure you sanitise the output) >>> >>> Cheers >>> Rhys >>> >>> >>> On 2 Jul 2014, at 18:23, Lodgen, Brad wrote: >>> >>>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"? >>>> >>>> >>>> >>>> On the compute node, I'm seeing this over and over in the compute log: >>>> >>>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>>> Minor code may provide more information (Cannot determine realm for >>>> numeric host address). Sleeping 5 seconds >>>> >>>> On the controller conductor log: >>>> >>>> Unable to connect to AMQP server: Error in sasl_client_start (-1) >>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. >>>> Minor code may provide more information (Cannot determine realm for >>>> numeric host address). Sleeping 5 seconds >>>> >>>> In the controller messages file: >>>> >>>> python: GSSAPI Error: Unspecified GSS failure. 
Minor code may >>>> provide more information (Cannot determine realm for numeric host >>>> address) >>>> >>>> >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: Rhys Oxenham [mailto:roxenham at redhat.com] >>>> Sent: Wednesday, July 02, 2014 12:14 PM >>>> To: Lodgen, Brad >>>> Cc: rdo-list at redhat.com >>>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: >>>> First Compute Node Doesn't Show Up in Hypervisor List >>>> >>>> Hi Brad, >>>> >>>> Have you checked the nova-compute logs in /var/log/nova/compute.log >>>> (on your new compute node?) >>>> >>>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor. >>>> >>>> Many thanks >>>> Rhys >>>> >>>> On 2 Jul 2014, at 18:05, Lodgen, Brad wrote: >>>> >>>>> Hi folks, >>>>> >>>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman. >>>>> >>>>> -Foreman host (purely for Foreman) -Controller host (applied >>>>> Controller(Nova) host group) -Compute Host (applied Compute(Nova) >>>>> host group) >>>>> -2 other hosts (not host group applied, but one will be compute >>>>> and one will be storage) >>>>> >>>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard? >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>> >> > From rbowen at redhat.com Thu Jul 3 19:06:19 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 03 Jul 2014 15:06:19 -0400 Subject: [Rdo-list] [Rdo-newsletter] July 2014 RDO Community Newsletter Message-ID: <53B5A9AB.7080500@redhat.com> With the first milestone behind us this month, and the second one coming up fast - https://wiki.openstack.org/wiki/Juno_Release_Schedule - the Juno cycle seems to be speeding past. Here's some of what's happened in June, and what's coming in July. Hangouts: On June 6, Hugh Brock and the TripleO team talked about what's planned for OpenStack TripleO (the OpenStack deployment tool) in a Google Hangout. You can watch that at https://www.youtube.com/watch?v=ol5LuedIWBw On July 9, 15:00 UTC (11 am Eastern US time) Eoghan Glynn will be leading a Google Hangout in which he'll discuss what's new in Ceilometer in OpenStack Icehouse, and what's coming in Juno. Sign up to attend that event at https://plus.google.com/events/c6e8vjjn8klrf78ruhkr95j4tas Conferences: Also, in July, RDO will have a presence at OSCON, July 20-24, in Portland, Oregon, both in the Red Hat booth, and also in the Cloud track - http://www.oscon.com/oscon2014/public/schedule/topic/1113 If you're going to be at OSCON, drop by to say hi. In early August, the Flock conference will be held in Prague, Czech Republic - http://flocktofedora.com/ (August 6-9). In addition to all of the great Fedora content, Kashyap Chamarthy will be speaking about deploying OpenStack on Fedora. - http://sched.co/1kI1BWf Although the OpenStack Summit is still a few months away, be sure it's on your calendar. The summit will be held in Paris, November 3-7. More information and registration will be available in the next month or two. 
Blog posts: This month's blog posts from the RDO range from the technical to the philosophical. If you want to see the latest posts from the RDO community, you can follow at http://planet.rdoproject.org/ * Mark McLoughlin - An ideal openstack developer - http://blogs.gnome.org/markmc/2014/06/06/an-ideal-openstack-developer/ * Liz Blanchard - Moving forward as a User Experience Team in the OpenStack Juno release cycle - http://uxd-stackabledesign.rhcloud.com/moving-forward-user-experience-team-openstack-juno-release-cycle/ * Rich Bowen - Red Hat at the OpenStack Summit (recordings) - http://drbacchus.com/red-hat-at-the-openstack-summit * Adam Young - Why POpen for OpenSSL calls - http://adam.younglogic.com/2014/06/why-popen-for-openssl-calls/ * Flavio Percoco - Marconi to AMQP: See you later - http://blog.flaper87.com/post/53a09586d987d23f49c777bf/ * Kashyap Chamarthy - On bug reporting. . . http://kashyapc.com/2014/06/22/on-bug-reporting/ eNovance Acquisition: The biggest news in the RDO world this month was Red Hat's acquisition of eNovance: http://www.redhat.com/about/news/press-archive/2014/6/red-hat-to-acquire-enovance eNovance's engineers are prolific contributors to the OpenStack upstream and respected names in the OpenStack community. And eNovance is 9th, by number of contributions, on the list of organizations contributing to the OpenStack code: http://activity.openstack.org/dash/browser/scm-companies.html Stay in Touch: The best ways to keep up with what's going on in the RDO community are: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter * RDO Q&A - http://ask.openstack.org/ Thanks again for being part of the RDO community! -- Rich Bowen, OpenStack Community Liaison rbowen at redhat.com http://openstack.redhat.com _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From rdo-info at redhat.com Thu Jul 3 19:59:39 2014 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 3 Jul 2014 19:59:39 +0000 Subject: [Rdo-list] [RDO] What's coming in OpenStack Ceilometer: Google Hangout Message-ID: <00000146fdcf9ab3-b4a2cbcc-3f0f-499a-b5b4-050a1036dcbc-000000@email.amazonses.com> rbowen started a discussion. What's coming in OpenStack Ceilometer: Google Hangout --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/977/whats-coming-in-openstack-ceilometer-google-hangout Have a great day! From madko77 at gmail.com Fri Jul 4 08:22:23 2014 From: madko77 at gmail.com (Madko) Date: Fri, 4 Jul 2014 10:22:23 +0200 Subject: [Rdo-list] ssh access to a fedora cloud image instance Message-ID: Hi, I have an almost working openstack platform deployed via foreman. When I launch an instance from the Fedora 19 cloud image, everything seems fine, the VM is running on one of my hypervisor, but I can't access it (ping is ok)... I'm following this documentation http://openstack.redhat.com/Running_an_instance I only get a permission denied when I do the last part: ssh -l root -i my_key_pair.pem floating_ip_address I also try by importing an ssh key. Same error. In the VM console, I see that CloudInit service is starting inside the VM, no error are shown here. So my question is: Where are the logs for that parts (cloud init server) in openstack ? Is the above documentation fine ? 
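On the "where are the logs" part: cloud-init logs inside the guest rather than anywhere on the OpenStack hosts, so the usual places to look are the instance's console output and /var/log/cloud-init.log inside the instance. The path is the stock cloud-init default and the command below is the standard nova client one, shown only as a sketch:

nova console-log <instance-name-or-id>   # cloud-init prints its progress to the console
cat /var/log/cloud-init.log              # from inside the guest, once you can get in some other way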
best regards, -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxenham at redhat.com Fri Jul 4 09:09:11 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Fri, 4 Jul 2014 10:09:11 +0100 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: Message-ID: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> Hi, Did you try with using the 'cloud-user' login username? Thanks Rhys On 4 Jul 2014, at 09:22, Madko wrote: > Hi, > > I have an almost working openstack platform deployed via foreman. When I launch an instance from the Fedora 19 cloud image, everything seems fine, the VM is running on one of my hypervisor, but I can't access it (ping is ok)... > > I'm following this documentation > http://openstack.redhat.com/Running_an_instance > > I only get a permission denied when I do the last part: > ssh -l root -i my_key_pair.pem floating_ip_address > I also try by importing an ssh key. Same error. > > In the VM console, I see that CloudInit service is starting inside the VM, no error are shown here. So my question is: Where are the logs for that parts (cloud init server) in openstack ? Is the above documentation fine ?
> > > > best regards, > > > > -- > > Edouard Bourguignon > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxenham at redhat.com Fri Jul 4 09:45:25 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Fri, 4 Jul 2014 10:45:25 +0100 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> Message-ID: <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> Can you try another image to make sure that key pair injection is working inside of your environment? i.e. an image you already know the password for so you can check via VNC or passworded ssh login? Cheers Rhys On 4 Jul 2014, at 10:40, Madko wrote: > Nope didn't try this one, but no luck, same problem :( (I tried root, fedora, and now cloud-user) > > [root at openstack-neutron ~]# ip netns exec qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. > 64 bytes from 192.168.2.4: icmp_seq=1 ttl=64 time=2.02 ms > 64 bytes from 192.168.2.4: icmp_seq=2 ttl=64 time=1.90 ms > ^C > --- 192.168.2.4 ping statistics --- > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms > [root at openstack-neutron ~]# ip netns exec qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l cloud-user 192.168.2.4 > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > > > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham : > Hi, > > Did you try with using the ?cloud-user? login username? > > Thanks > Rhys > > On 4 Jul 2014, at 09:22, Madko wrote: > > > Hi, > > > > I have an almost working openstack platform deployed via foreman. When I launch an instance from the Fedora 19 cloud image, everything seems fine, the VM is running on one of my hypervisor, but I can't access it (ping is ok)... > > > > I'm following this documentation > > http://openstack.redhat.com/Running_an_instance > > > > I only get a permission denied when I do the last part: > > ssh -l root -i my_key_pair.pem floating_ip_address > > I also try by importing an ssh key. Same error. > > > > In the VM console, I see that CloudInit service is starting inside the VM, no error are shown here. So my question is: Where are the logs for that parts (cloud init server) in openstack ? Is the above documentation fine ? > > > > best regards, > > > > -- > > Edouard Bourguignon > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > -- > Edouard Bourguignon From madko77 at gmail.com Fri Jul 4 09:50:48 2014 From: madko77 at gmail.com (Madko) Date: Fri, 4 Jul 2014 11:50:48 +0200 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> Message-ID: I've just deployed OpenStack so I don't have any other image. I can try to make one. Is cloudInit easy to install on Fedora ? I have some CentOS images too, but no cloudInit. 2014-07-04 11:45 GMT+02:00 Rhys Oxenham : > Can you try another image to make sure that key pair injection is working > inside of your environment? i.e. 
an image you already know the password for > so you can check via VNC or passworded ssh login? > > Cheers > Rhys > > On 4 Jul 2014, at 10:40, Madko wrote: > > > Nope didn't try this one, but no luck, same problem :( (I tried root, > fedora, and now cloud-user) > > > > [root at openstack-neutron ~]# ip netns exec > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 > > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. > > 64 bytes from 192.168.2.4: icmp_seq=1 ttl=64 time=2.02 ms > > 64 bytes from 192.168.2.4: icmp_seq=2 ttl=64 time=1.90 ms > > ^C > > --- 192.168.2.4 ping statistics --- > > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms > > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms > > [root at openstack-neutron ~]# ip netns exec > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l > cloud-user 192.168.2.4 > > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > > > > > > > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham : > > Hi, > > > > Did you try with using the ?cloud-user? login username? > > > > Thanks > > Rhys > > > > On 4 Jul 2014, at 09:22, Madko wrote: > > > > > Hi, > > > > > > I have an almost working openstack platform deployed via foreman. When > I launch an instance from the Fedora 19 cloud image, everything seems fine, > the VM is running on one of my hypervisor, but I can't access it (ping is > ok)... > > > > > > I'm following this documentation > > > http://openstack.redhat.com/Running_an_instance > > > > > > I only get a permission denied when I do the last part: > > > ssh -l root -i my_key_pair.pem floating_ip_address > > > I also try by importing an ssh key. Same error. > > > > > > In the VM console, I see that CloudInit service is starting inside the > VM, no error are shown here. So my question is: Where are the logs for that > parts (cloud init server) in openstack ? Is the above documentation fine ? > > > > > > best regards, > > > > > > -- > > > Edouard Bourguignon > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > -- > > Edouard Bourguignon > > -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... URL: From vimal7370 at gmail.com Fri Jul 4 09:55:07 2014 From: vimal7370 at gmail.com (Vimal Kumar) Date: Fri, 4 Jul 2014 15:25:07 +0530 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> Message-ID: Use cirros (13M) image to test if ssh key-pair injection is working or not: http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img ssh as: cirros@ In case if your ssh key isn't working, the password is cubswin:) On Fri, Jul 4, 2014 at 3:20 PM, Madko wrote: > I've just deployed OpenStack so I don't have any other image. I can try to > make one. Is cloudInit easy to install on Fedora ? I have some CentOS > images too, but no cloudInit. > > > 2014-07-04 11:45 GMT+02:00 Rhys Oxenham : > > Can you try another image to make sure that key pair injection is working >> inside of your environment? i.e. an image you already know the password for >> so you can check via VNC or passworded ssh login? 
>> >> Cheers >> Rhys >> >> On 4 Jul 2014, at 10:40, Madko wrote: >> >> > Nope didn't try this one, but no luck, same problem :( (I tried root, >> fedora, and now cloud-user) >> > >> > [root at openstack-neutron ~]# ip netns exec >> qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 >> > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. >> > 64 bytes from 192.168.2.4: icmp_seq=1 ttl=64 time=2.02 ms >> > 64 bytes from 192.168.2.4: icmp_seq=2 ttl=64 time=1.90 ms >> > ^C >> > --- 192.168.2.4 ping statistics --- >> > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms >> > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms >> > [root at openstack-neutron ~]# ip netns exec >> qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l >> cloud-user 192.168.2.4 >> > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). >> > >> > >> > >> > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham : >> > Hi, >> > >> > Did you try with using the ?cloud-user? login username? >> > >> > Thanks >> > Rhys >> > >> > On 4 Jul 2014, at 09:22, Madko wrote: >> > >> > > Hi, >> > > >> > > I have an almost working openstack platform deployed via foreman. >> When I launch an instance from the Fedora 19 cloud image, everything seems >> fine, the VM is running on one of my hypervisor, but I can't access it >> (ping is ok)... >> > > >> > > I'm following this documentation >> > > http://openstack.redhat.com/Running_an_instance >> > > >> > > I only get a permission denied when I do the last part: >> > > ssh -l root -i my_key_pair.pem floating_ip_address >> > > I also try by importing an ssh key. Same error. >> > > >> > > In the VM console, I see that CloudInit service is starting inside >> the VM, no error are shown here. So my question is: Where are the logs for >> that parts (cloud init server) in openstack ? Is the above documentation >> fine ? >> > > >> > > best regards, >> > > >> > > -- >> > > Edouard Bourguignon >> > > _______________________________________________ >> > > Rdo-list mailing list >> > > Rdo-list at redhat.com >> > > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > >> > >> > >> > -- >> > Edouard Bourguignon >> >> > > > -- > Edouard Bourguignon > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxenham at redhat.com Fri Jul 4 09:55:29 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Fri, 4 Jul 2014 10:55:29 +0100 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> Message-ID: Suggest you download and install the cirros image, it doesn?t have cloud-init, but supports key pair injection? wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img glance add name="cirros-0.3.1-x86_64" is_public=true disk_format=qcow2 container_format=bare < cirros-0.3.1-x86_64-disk.img nova boot --image cirros-0.3.1-x86_64 --flavor m1.small cirrostest You can login to that with or without ssh key injection. Username = ?cirros", password = ?cubswin:)? That way, you can check the injection- cat .ssh/authorized_keys -and check the metadata api- curl http://169.254.169.254/latest/meta-data/ Cheers Rhys On 4 Jul 2014, at 10:50, Madko wrote: > I've just deployed OpenStack so I don't have any other image. I can try to make one. 
Is cloudInit easy to install on Fedora ? I have some CentOS images too, but no cloudInit. > > > 2014-07-04 11:45 GMT+02:00 Rhys Oxenham : > Can you try another image to make sure that key pair injection is working inside of your environment? i.e. an image you already know the password for so you can check via VNC or passworded ssh login? > > Cheers > Rhys > > On 4 Jul 2014, at 10:40, Madko wrote: > > > Nope didn't try this one, but no luck, same problem :( (I tried root, fedora, and now cloud-user) > > > > [root at openstack-neutron ~]# ip netns exec qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 > > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. > > 64 bytes from 192.168.2.4: icmp_seq=1 ttl=64 time=2.02 ms > > 64 bytes from 192.168.2.4: icmp_seq=2 ttl=64 time=1.90 ms > > ^C > > --- 192.168.2.4 ping statistics --- > > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms > > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms > > [root at openstack-neutron ~]# ip netns exec qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l cloud-user 192.168.2.4 > > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > > > > > > > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham : > > Hi, > > > > Did you try with using the ?cloud-user? login username? > > > > Thanks > > Rhys > > > > On 4 Jul 2014, at 09:22, Madko wrote: > > > > > Hi, > > > > > > I have an almost working openstack platform deployed via foreman. When I launch an instance from the Fedora 19 cloud image, everything seems fine, the VM is running on one of my hypervisor, but I can't access it (ping is ok)... > > > > > > I'm following this documentation > > > http://openstack.redhat.com/Running_an_instance > > > > > > I only get a permission denied when I do the last part: > > > ssh -l root -i my_key_pair.pem floating_ip_address > > > I also try by importing an ssh key. Same error. > > > > > > In the VM console, I see that CloudInit service is starting inside the VM, no error are shown here. So my question is: Where are the logs for that parts (cloud init server) in openstack ? Is the above documentation fine ? > > > > > > best regards, > > > > > > -- > > > Edouard Bourguignon > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > -- > > Edouard Bourguignon > > > > > -- > Edouard Bourguignon From madko77 at gmail.com Fri Jul 4 11:13:48 2014 From: madko77 at gmail.com (Madko) Date: Fri, 4 Jul 2014 13:13:48 +0200 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> Message-ID: Thank you Rhys and Vimal, I'll try cyrros image right now. 2014-07-04 11:55 GMT+02:00 Vimal Kumar : > Use cirros (13M) image to test if ssh key-pair injection is working or not: > > http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img > > ssh as: cirros@ > > In case if your ssh key isn't working, the password is cubswin:) > > > > On Fri, Jul 4, 2014 at 3:20 PM, Madko wrote: > >> I've just deployed OpenStack so I don't have any other image. I can try >> to make one. Is cloudInit easy to install on Fedora ? I have some CentOS >> images too, but no cloudInit. >> >> >> 2014-07-04 11:45 GMT+02:00 Rhys Oxenham : >> >> Can you try another image to make sure that key pair injection is working >>> inside of your environment? i.e. 
an image you already know the password for >>> so you can check via VNC or passworded ssh login? >>> >>> Cheers >>> Rhys >>> >>> On 4 Jul 2014, at 10:40, Madko wrote: >>> >>> > Nope didn't try this one, but no luck, same problem :( (I tried root, >>> fedora, and now cloud-user) >>> > >>> > [root at openstack-neutron ~]# ip netns exec >>> qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 >>> > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. >>> > 64 bytes from 192.168.2.4: icmp_seq=1 ttl=64 time=2.02 ms >>> > 64 bytes from 192.168.2.4: icmp_seq=2 ttl=64 time=1.90 ms >>> > ^C >>> > --- 192.168.2.4 ping statistics --- >>> > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms >>> > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms >>> > [root at openstack-neutron ~]# ip netns exec >>> qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l >>> cloud-user 192.168.2.4 >>> > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). >>> > >>> > >>> > >>> > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham : >>> > Hi, >>> > >>> > Did you try with using the ?cloud-user? login username? >>> > >>> > Thanks >>> > Rhys >>> > >>> > On 4 Jul 2014, at 09:22, Madko wrote: >>> > >>> > > Hi, >>> > > >>> > > I have an almost working openstack platform deployed via foreman. >>> When I launch an instance from the Fedora 19 cloud image, everything seems >>> fine, the VM is running on one of my hypervisor, but I can't access it >>> (ping is ok)... >>> > > >>> > > I'm following this documentation >>> > > http://openstack.redhat.com/Running_an_instance >>> > > >>> > > I only get a permission denied when I do the last part: >>> > > ssh -l root -i my_key_pair.pem floating_ip_address >>> > > I also try by importing an ssh key. Same error. >>> > > >>> > > In the VM console, I see that CloudInit service is starting inside >>> the VM, no error are shown here. So my question is: Where are the logs for >>> that parts (cloud init server) in openstack ? Is the above documentation >>> fine ? >>> > > >>> > > best regards, >>> > > >>> > > -- >>> > > Edouard Bourguignon >>> > > _______________________________________________ >>> > > Rdo-list mailing list >>> > > Rdo-list at redhat.com >>> > > https://www.redhat.com/mailman/listinfo/rdo-list >>> > >>> > >>> > >>> > >>> > -- >>> > Edouard Bourguignon >>> >>> >> >> >> -- >> Edouard Bourguignon >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> > -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... URL: From madko77 at gmail.com Fri Jul 4 11:48:05 2014 From: madko77 at gmail.com (Madko) Date: Fri, 4 Jul 2014 13:48:05 +0200 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> Message-ID: Great!!! I can connect to the cirros instance. But no .ssh/authorized_keys. Seems the metadata api is not available. Where is it supposed to be hosted? what service? is it the neutron-metadata-agent? 2014-07-04 13:13 GMT+02:00 Madko : > Thank you Rhys and Vimal, I'll try cyrros image right now. 
> > > 2014-07-04 11:55 GMT+02:00 Vimal Kumar : > > Use cirros (13M) image to test if ssh key-pair injection is working or not: >> >> http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img >> >> ssh as: cirros@ >> >> In case if your ssh key isn't working, the password is cubswin:) >> >> >> >> On Fri, Jul 4, 2014 at 3:20 PM, Madko wrote: >> >>> I've just deployed OpenStack so I don't have any other image. I can try >>> to make one. Is cloudInit easy to install on Fedora ? I have some CentOS >>> images too, but no cloudInit. >>> >>> >>> 2014-07-04 11:45 GMT+02:00 Rhys Oxenham : >>> >>> Can you try another image to make sure that key pair injection is >>>> working inside of your environment? i.e. an image you already know the >>>> password for so you can check via VNC or passworded ssh login? >>>> >>>> Cheers >>>> Rhys >>>> >>>> On 4 Jul 2014, at 10:40, Madko wrote: >>>> >>>> > Nope didn't try this one, but no luck, same problem :( (I tried root, >>>> fedora, and now cloud-user) >>>> > >>>> > [root at openstack-neutron ~]# ip netns exec >>>> qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 >>>> > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. >>>> > 64 bytes from 192.168.2.4: icmp_seq=1 ttl=64 time=2.02 ms >>>> > 64 bytes from 192.168.2.4: icmp_seq=2 ttl=64 time=1.90 ms >>>> > ^C >>>> > --- 192.168.2.4 ping statistics --- >>>> > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms >>>> > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms >>>> > [root at openstack-neutron ~]# ip netns exec >>>> qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l >>>> cloud-user 192.168.2.4 >>>> > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). >>>> > >>>> > >>>> > >>>> > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham : >>>> > Hi, >>>> > >>>> > Did you try with using the ?cloud-user? login username? >>>> > >>>> > Thanks >>>> > Rhys >>>> > >>>> > On 4 Jul 2014, at 09:22, Madko wrote: >>>> > >>>> > > Hi, >>>> > > >>>> > > I have an almost working openstack platform deployed via foreman. >>>> When I launch an instance from the Fedora 19 cloud image, everything seems >>>> fine, the VM is running on one of my hypervisor, but I can't access it >>>> (ping is ok)... >>>> > > >>>> > > I'm following this documentation >>>> > > http://openstack.redhat.com/Running_an_instance >>>> > > >>>> > > I only get a permission denied when I do the last part: >>>> > > ssh -l root -i my_key_pair.pem floating_ip_address >>>> > > I also try by importing an ssh key. Same error. >>>> > > >>>> > > In the VM console, I see that CloudInit service is starting inside >>>> the VM, no error are shown here. So my question is: Where are the logs for >>>> that parts (cloud init server) in openstack ? Is the above documentation >>>> fine ? >>>> > > >>>> > > best regards, >>>> > > >>>> > > -- >>>> > > Edouard Bourguignon >>>> > > _______________________________________________ >>>> > > Rdo-list mailing list >>>> > > Rdo-list at redhat.com >>>> > > https://www.redhat.com/mailman/listinfo/rdo-list >>>> > >>>> > >>>> > >>>> > >>>> > -- >>>> > Edouard Bourguignon >>>> >>>> >>> >>> >>> -- >>> Edouard Bourguignon >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> >> > > > -- > Edouard Bourguignon > -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amuller at redhat.com Sun Jul 6 09:08:54 2014 From: amuller at redhat.com (Assaf Muller) Date: Sun, 6 Jul 2014 05:08:54 -0400 (EDT) Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> Message-ID: <1378466327.5928093.1404637734577.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Great!!! I can connect to the cirros instance. But no .ssh/authorized_keys. > Seems the metadata api is not available. Where is it supposed to be hosted? > what service? is it the neutron-metadata-agent? > Metadata is hosted by the nova-api server. When using Neutron, the neutron-metadata- agent on the network node proxies metadata requests to nova-api. It does a couple of queries to Neutron,adds the instance-id to the request and forwards the message to nova-api. This is because when using nova-network you cannot have overlapping IPs so the nova metadata server can figure out the instance ID from its IP. Neutron does support overlapping IPs so that's why the neutron-metadata-agent exists. If curl 169.254.169.254 doesn't work, check for errors in the neutron metadata agent logs and in nova-api as well. > > 2014-07-04 13:13 GMT+02:00 Madko < madko77 at gmail.com > : > > > > Thank you Rhys and Vimal, I'll try cyrros image right now. > > > 2014-07-04 11:55 GMT+02:00 Vimal Kumar < vimal7370 at gmail.com > : > > > > > Use cirros (13M) image to test if ssh key-pair injection is working or not: > > http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img > > ssh as: cirros@ > > In case if your ssh key isn't working, the password is cubswin:) > > > > On Fri, Jul 4, 2014 at 3:20 PM, Madko < madko77 at gmail.com > wrote: > > > > I've just deployed OpenStack so I don't have any other image. I can try to > make one. Is cloudInit easy to install on Fedora ? I have some CentOS images > too, but no cloudInit. > > > 2014-07-04 11:45 GMT+02:00 Rhys Oxenham < roxenham at redhat.com > : > > > > Can you try another image to make sure that key pair injection is working > inside of your environment? i.e. an image you already know the password for > so you can check via VNC or passworded ssh login? > > Cheers > Rhys > > On 4 Jul 2014, at 10:40, Madko < madko77 at gmail.com > wrote: > > > Nope didn't try this one, but no luck, same problem :( (I tried root, > > fedora, and now cloud-user) > > > > [root at openstack-neutron ~]# ip netns exec > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 > > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. > > 64 bytes from 192.168.2.4 : icmp_seq=1 ttl=64 time=2.02 ms > > 64 bytes from 192.168.2.4 : icmp_seq=2 ttl=64 time=1.90 ms > > ^C > > --- 192.168.2.4 ping statistics --- > > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms > > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms > > [root at openstack-neutron ~]# ip netns exec > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l > > cloud-user 192.168.2.4 > > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > > > > > > > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham < roxenham at redhat.com >: > > Hi, > > > > Did you try with using the ?cloud-user? login username? > > > > Thanks > > Rhys > > > > On 4 Jul 2014, at 09:22, Madko < madko77 at gmail.com > wrote: > > > > > Hi, > > > > > > I have an almost working openstack platform deployed via foreman. 
When I > > > launch an instance from the Fedora 19 cloud image, everything seems > > > fine, the VM is running on one of my hypervisor, but I can't access it > > > (ping is ok)... > > > > > > I'm following this documentation > > > http://openstack.redhat.com/Running_an_instance > > > > > > I only get a permission denied when I do the last part: > > > ssh -l root -i my_key_pair.pem floating_ip_address > > > I also try by importing an ssh key. Same error. > > > > > > In the VM console, I see that CloudInit service is starting inside the > > > VM, no error are shown here. So my question is: Where are the logs for > > > that parts (cloud init server) in openstack ? Is the above documentation > > > fine ? > > > > > > best regards, > > > > > > -- > > > Edouard Bourguignon > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > -- > > Edouard Bourguignon > > > > > -- > Edouard Bourguignon > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > -- > Edouard Bourguignon > > > > -- > Edouard Bourguignon > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From Brad.Lodgen at centurylink.com Sun Jul 6 17:46:13 2014 From: Brad.Lodgen at centurylink.com (Lodgen, Brad) Date: Sun, 6 Jul 2014 17:46:13 +0000 Subject: [Rdo-list] Red Hat OpenStack Evaluation installation In-Reply-To: References: Message-ID: I have a quick question, as I may be misunderstanding the intention of the RHOS product and the installation/configuration guide. I'm using an evaluation, so I can't open tickets or I'll get forwarded to this mailing list. I've had a considerable number of issues installing and getting RHOS running in an initial "let's get started doing actual OpenStack tasks" kind of state. Is the RHOS product meant to be installable and running without going through the section of the installation/configuration guide that covers manual installation? Or are you expected to still go through the entire manual installation section? Because there are integral parts that are not discussed outside of the manual section; storage implementation, for example, is not mentioned outside of the manual installation section. And even the sections that are in the manual section basically say you can't rely solely on the Foreman host groups to set up storage, as there are some manual steps. Can someone shed some light on the product's intentions and how far it goes with setting up OpenStack for you? -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.shaw at gmail.com Sun Jul 6 18:55:20 2014 From: marco.shaw at gmail.com (Marco Shaw) Date: Sun, 6 Jul 2014 15:55:20 -0300 Subject: [Rdo-list] Red Hat OpenStack Evaluation installation In-Reply-To: References: Message-ID: Hi I think I understand... OpenStack is very complicated. I would only expect packstack to be able to provide me with a basic POC. To really get into things, I think it is best to assume you have a lot of reading/testing/configuring to do! That's why RedHat or even Mirantis appear to be making some money with consulting services. 
Marco > On Jul 6, 2014, at 2:46 PM, "Lodgen, Brad" wrote: > > I have a quick question, as I may be misunderstanding the intention of the RHOS product and the installation/configuration guide. I'm using an evaluation, so I can't open tickets or I'll get forwarded to this mailing list. I've had a considerable number of issues installing and getting RHOS running in an initial "let's get started doing actual OpenStack tasks? kind of state. > > Is the RHOS product meant to be installable and running without going through the section of the installation/configuration guide that covers manual installation? Or are you expected to still go through the entire manual installation section? Because there are integral parts that are not discussed outside of the manual section; storage implementation, for example, is not mentioned outside of the manual installation section. And even the sections that are in the manual section basically say you can?t rely solely on the Foreman host groups to set up storage, as there are some manual steps. Can someone shed some light on the product?s intentions and how far it goes with setting up OpenStack for you? > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From gareth at openstacker.org Mon Jul 7 02:25:12 2014 From: gareth at openstacker.org (Kun Huang) Date: Mon, 7 Jul 2014 10:25:12 +0800 Subject: [Rdo-list] plan to add a parameter to config glance backend? In-Reply-To: <20140702075432.GD11253@redhat.com> References: <20140701230012.GC11253@redhat.com> <20140702075432.GD11253@redhat.com> Message-ID: I know that... I'm using RDO to deploy somethings including Glance. And I want to config backend store in RDO's answer file, not Glance configures manually. On Wed, Jul 2, 2014 at 3:54 PM, Flavio Percoco wrote: > On 02/07/14 09:45 +0800, Kun Huang wrote: > >> I need this: CONFIG_GLANCE_BACKEND=file|rbd|.... >> > > I'm not sure if you're talking about the client or the server. In > glance-api there are 2 config options. The first one `known_stores` is > used to enable/disable stores. The second one `default_store` allows > you to specify which store should be used as the default one when none > is passed to the API. > > Is there something missing in the above-mentioned options? > > Cheers, > Flavio > > > >> On Wed, Jul 2, 2014 at 7:00 AM, Flavio Percoco wrote: >> >>> On 26/06/14 16:50 +0800, Kun Huang wrote: >>> >>>> >>>> Hi all >>>> >>>> Is there such a plan now? Actually it's okay to adjust glance.conf >>>> only. Deploying ceph is not necessary. >>>> >>> >>> >>> Hi Kun, >>> >>> I'm not sure I understand your question. What config parameter do you >>> need? What do you think is missing in Glance? >>> >>> Cheers, >>> Flavio >>> >>> -- >>> @flaper87 >>> Flavio Percoco >>> >> > -- > @flaper87 > Flavio Percoco > -------------- next part -------------- An HTML attachment was scrubbed... URL: From acvelez at vidalinux.com Mon Jul 7 05:24:19 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Mon, 7 Jul 2014 01:24:19 -0400 (AST) Subject: [Rdo-list] Outgoing traffic from instances! 
RDO icehouse with GRE fedora20 In-Reply-To: <1923008744.47721.1404710354936.JavaMail.zimbra@vidalinux.net> Message-ID: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> Hi everyone, I'm using icehouse with GRE on fedora20 in two nodes 1 controller 1 compute, everything is working great, I can loging to ssh to the instances and ping to the internet ect. but when I try to install something with yum it says conection timeout, conecting to public servers using ramdon ports 443, 80, 22 doesn't work. Someone can give me any clue of what is wrong? ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From kchamart at redhat.com Mon Jul 7 06:15:22 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 7 Jul 2014 11:45:22 +0530 Subject: [Rdo-list] Outgoing traffic from instances! RDO icehouse with GRE fedora20 In-Reply-To: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> References: <1923008744.47721.1404710354936.JavaMail.zimbra@vidalinux.net> <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> Message-ID: <20140707061522.GB15283@tesla> On Mon, Jul 07, 2014 at 01:24:19AM -0400, Antonio C. Velez wrote: > Hi everyone, > > I'm using icehouse with GRE on fedora20 in two nodes 1 controller 1 > compute, everything is working great, I can loging to ssh to the > instances and ping to the internet ect. but when I try to install > something with yum it says conection timeout, conecting to public > servers using ramdon ports 443, 80, 22 doesn't work. > Someone can give me any clue of what is wrong? I don't have answer to your specific question. But, for any Neutron experts to take a look at this, you might want to provide additional troubleshooting, like iptables rules, namespaces, routing info, etc. Some resources: - http://docs.openstack.org/trunk/openstack-ops/content/network_troubleshooting.html - https://github.com/larsks/neutron-diag - How to use it: http://kashyapc.fedorapeople.org/virt/openstack/debugging-neutron.txt -- /kashyap From flavio at redhat.com Mon Jul 7 07:57:39 2014 From: flavio at redhat.com (Flavio Percoco) Date: Mon, 07 Jul 2014 09:57:39 +0200 Subject: [Rdo-list] plan to add a parameter to config glance backend? In-Reply-To: References: <20140701230012.GC11253@redhat.com> <20140702075432.GD11253@redhat.com> Message-ID: <53BA52F3.1010508@redhat.com> On 07/07/2014 04:25 AM, Kun Huang wrote: > I know that... I'm using RDO to deploy somethings including Glance. And > I want to config backend store in RDO's answer file, not Glance > configures manually. Ahh.. You should have mentioned you were talking about packstack before. I don't think there's a plan to add support for this in the near future. Cheers, Flavio -- @flaper87 Flavio Percoco From ihrachys at redhat.com Mon Jul 7 08:54:47 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 07 Jul 2014 10:54:47 +0200 Subject: [Rdo-list] Outgoing traffic from instances! RDO icehouse with GRE fedora20 In-Reply-To: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> References: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> Message-ID: <53BA6057.80907@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 07/07/14 07:24, Antonio C. 
Velez wrote: > Hi everyone, > > I'm using icehouse with GRE on fedora20 in two nodes 1 controller 1 > compute, everything is working great, I can loging to ssh to the > instances and ping to the internet ect. but when I try to install > something with yum it says conection timeout, conecting to public > servers using ramdon ports 443, 80, 22 doesn't work. > > Someone can give me any clue of what is wrong? > I suspect your security groups do not allow outgoing traffic. Please specify whether you use Neutron or Nova for networking, and show the effective security group rules applied to affected instances. /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTumBXAAoJEC5aWaUY1u57M04IAOf/3LB++tpw0/6QUyy5rw5K rteR7NeWQ/iS7FAQZE/1yx2YYlSlQIPAXkNjqT3+qBMzZ1dPGiLumMC/PtPTeb1U Kl9e5OKtfMOpKVc+Jh4EeEeCIywiNDI2+KzWwR2g8AH+HRnj5cqXP5mnGyNUgibw qiqrm8nBLogqZgo7uN1zoeQzgezNP4jNBT9zKZJZLI84ELnkDaVA/EAMHwcoPknQ e7urlxD6A/rnLtYGGp5UWLcs4RuGsKSxPAtvXqZ/veXSH+hNvFCDlI82vosYb0Vo 7SRJs5lSNqzIoqTYnzruxSZF4ImRwjDMQTST0vPb/2UPe48ZAsClcyWqGEPVO9w= =8inL -----END PGP SIGNATURE----- From roxenham at redhat.com Mon Jul 7 09:30:03 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Mon, 7 Jul 2014 10:30:03 +0100 Subject: [Rdo-list] Outgoing traffic from instances! RDO icehouse with GRE fedora20 In-Reply-To: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> References: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> Message-ID: On 7 Jul 2014, at 06:24, Antonio C. Velez wrote: > I'm using icehouse with GRE on fedora20 in two nodes 1 controller 1 compute, everything is working great, I can loging to ssh to the instances and ping to the internet ect. but when I try to install something with yum it says conection timeout, conecting to public servers using ramdon ports 443, 80, 22 doesn't work. Check the MTU inside of the guest. If it's at 1500, then it's likely that you'll experience significant packet fragmentation and severely degraded performance. I usually force MTU to be 1400 inside of my guest, either by preconfiguring the image or by setting the MTU via the DHCP agent. A manual configuration whilst the instance is running will confirm this. Try: 'ip link set eth0 mtu 1400' Cheers Rhys From acvelez at vidalinux.com Mon Jul 7 14:46:51 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Mon, 7 Jul 2014 10:46:51 -0400 (AST) Subject: [Rdo-list] Outgoing traffic from instances! RDO icehouse with GRE fedora20 In-Reply-To: References: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> Message-ID: <1444196359.48026.1404744411355.JavaMail.zimbra@vidalinux.net> Rhys you're a genius this fix my issue, thanks a lot. ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com ----- Original Message ----- From: "Rhys Oxenham" To: "Antonio C. Velez" Cc: rdo-list at redhat.com Sent: Monday, July 7, 2014 5:30:03 AM Subject: Re: [Rdo-list] Outgoing traffic from instances! RDO icehouse with GRE fedora20 On 7 Jul 2014, at 06:24, Antonio C. Velez wrote: > I'm using icehouse with GRE on fedora20 in two nodes 1 controller 1 compute, everything is working great, I can loging to ssh to the instances and ping to the internet ect.
but when I try to install something with yum it says conection timeout, conecting to public servers using ramdon ports 443, 80, 22 doesn't work. Check the MTU inside of the guest. If it?s at 1500, then it?s likely that you?ll experience significant packet fragmentation and severely degraded performance. I usually force MTU to be 1400 inside of my guest, either by preconfiguring the image or by setting the MTU via the DHCP agent. A manual configuration whilst the instance is running will confirm this. Try: ?ip link set eth0 mtu 1400? Cheers Rhys -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From rbowen at redhat.com Mon Jul 7 16:13:47 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 07 Jul 2014 12:13:47 -0400 Subject: [Rdo-list] Fwd: 50% discount on passes to Cloud Connect China in Shanghai, September 16-18 In-Reply-To: <1404742936.902315631@mail.openstack.org> References: <1404742936.902315631@mail.openstack.org> Message-ID: <53BAC73B.3020601@redhat.com> For anyone in the Shanghai area ... -------- Original Message -------- Subject: [openstack-community] 50% discount on passes to Cloud Connect China in Shanghai, September 16-18 Date: Mon, 7 Jul 2014 09:22:16 -0500 (CDT) From: Kathy Cacciatore To: community at lists.openstack.org, marketing at lists.openstack.org Cloud Connect China is offering OpenStack community members and their clients and prospects a 50% discount on conference passes. It is limited to the first 50 people and must be used by July 31. Feel free to pass this on to other OpenStack people who may be interested in attending. OpenStack is sponsoring a half-day workshop on Monday, September 16, given by leading community members in China. Tom Fifield, OpenStack Community Manager, is a conference advisor and will also attending. Visit www.cloudconnectevent.cn/registration/registration_en.php , and register for the package desired using registration code *CLOU14XP8ND. * Here are the packages with pre-discount prices. Note that a VIP Pass will be under $400! Thank you. cid:image007.png at 01CF63B8.42D2FE90 -- Regards, Kathy Cacciatore OpenStack Industry Event Planner 1-512-970-2807 (mobile) Part time: Monday - Thursday, 9am - 2pm US CT kathyc at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 193688 bytes Desc: not available URL: -------------- next part -------------- _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community From kchamart at redhat.com Fri Jul 4 13:25:16 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 4 Jul 2014 18:55:16 +0530 Subject: [Rdo-list] [Rdo-newsletter] July 2014 RDO Community Newsletter In-Reply-To: <53B5A9AB.7080500@redhat.com> References: <53B5A9AB.7080500@redhat.com> Message-ID: <20140704132516.GA3637@tesla.redhat.com> Heya, On Thu, Jul 03, 2014 at 03:06:19PM -0400, Rich Bowen wrote: [. . .] > In early August, the Flock conference will be held in Prague, Czech > Republic - http://flocktofedora.com/ (August 6-9). In addition to all of > the great Fedora content, Kashyap Chamarthy will be speaking about > deploying OpenStack on Fedora. 
- http://sched.co/1kI1BWf Unfortunately, I'll not be able to make it to this. Luckily -- thanks to Jakub Ruzicka (OpenStack developer and RDO packager), he kindly agreed to step in here, yay! -- /kashyap _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From Lance.Fang at emc.com Mon Jul 7 16:47:18 2014 From: Lance.Fang at emc.com (Fang, Lance) Date: Mon, 7 Jul 2014 12:47:18 -0400 Subject: [Rdo-list] ERROR while installing RDO (rabbitmq-server) In-Reply-To: <53B51820.6020504@redhat.com> References: <95730731D64285418F19B9129C3BDC3D010E6CA3233D@MX40A.corp.emc.com> <87zjgrprne.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3234E@MX40A.corp.emc.com> <87wqbvppd3.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA3236F@MX40A.corp.emc.com> <87tx6zpopt.fsf@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA323BA@MX40A.corp.emc.com> <28331C9F-307F-4943-8735-E5DCAB4B53CA@redhat.com> <95730731D64285418F19B9129C3BDC3D010E6CA323C6@MX40A.corp.emc.com> <53B51820.6020504@redhat.com> Message-ID: <95730731D64285418F19B9129C3BDC3D010E6CA325EA@MX40A.corp.emc.com> Ihar, Thank you for responding. Looking at the link and it is not clear what I should add/enable to the repos. Here is what I have: [root at sse-durl-ora1 yum.repos.d]# ls -lt total 64 -rw-r--r-- 1 root root 1056 Jul 7 11:19 epel-testing.repo -rw-r--r-- 1 root root 248 Jul 2 17:40 rdo-release.repo -rw-r--r-- 1 root root 957 Jul 2 17:40 epel.repo -rw-r--r-- 1 root root 1220 Jul 2 17:39 puppetlabs.repo -rw-r--r-- 1 root root 707 Jul 2 17:39 foreman.repo -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Ihar Hrachyshka Sent: Thursday, July 03, 2014 1:45 AM To: rdo-list at redhat.com Subject: Re: [Rdo-list] ERROR while installing RDO (rabbitmq-server) -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 03/07/14 00:53, Fang, Lance wrote: > PowmInsecureWarning: Not using mpz_powm_sec. You should r ebuild > using libgmp >= 5 to avoid timing attack vulnerability Do you have all the needed repos enabled for yum? 
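A quick illustrative check (the repo ids and package names below are the usual ones for an RDO Icehouse install on EL6 and are an assumption, not taken from Lance's machine):

# yum repolist enabled | grep -iE 'openstack|epel|foreman|puppet'
# rpm -q rdo-release epel-release

If rdo-release or epel-release is reported as not installed, the RDO release repo is normally set up with something like 'yum install -y https://rdo.fedorapeople.org/rdo-release.rpm' and the repolist re-checked afterwards.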
See: http://openstack.redhat.com/Repositories /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTtRgfAAoJEC5aWaUY1u57VOwIAI+bKVyZ7IkAIyLCZBeTYwgE J4ecYKv/LerCel/lFlJGhw1KApdqS9VvFJibGFpQlHtPA/DEgoENPcpxEkAaXB/z BXd/6Cm/H+d6TL1bSPK89bKn2FIZnnw0koTXUTkV4nTX+Kt3O5ojo/jWpL1HP/x2 LGUqIQZUkQyr2NbRR8LL7UnAQZM8PXFWLST0XAIOXWXwxwDMl5pcENJucT5iC5cR DLbNs8mtm7OgQG5+eTic2OvIVv8LY8ufbeOqr79MoB2FNIWUnw6aUMwiFJe4umsM mHNFGn4RQk/wr8cfRuC2sOA5uZUSh4XF1JiwLis6McML/7C8OblJ5bODSzwXolA= =NPPV -----END PGP SIGNATURE----- _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From rbowen at redhat.com Tue Jul 8 13:39:20 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 08 Jul 2014 09:39:20 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 Message-ID: <53BBF488.5020501@redhat.com> I'm running `packstack --allinone` on a fresh install of the new CentOS7, and I'm getting a failure at: 192.168.0.176_mysql.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp Error: Could not enable mysqld: You will find full trace in log /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log Please check log file /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for more information The log message is: Notice: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: Dependency Service[mysqld] has failures: true Warning: /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: Skipping because of failed dependencies mysqld was successfully installed, and is running. Before I start digging deeper, I wondered if this is something that's already been encountered. Thanks. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From ihrachys at redhat.com Tue Jul 8 13:53:44 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 08 Jul 2014 15:53:44 +0200 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <53BBF488.5020501@redhat.com> References: <53BBF488.5020501@redhat.com> Message-ID: <53BBF7E8.9060503@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 08/07/14 15:39, Rich Bowen wrote: > I'm running `packstack --allinone` on a fresh install of the new > CentOS7, and I'm getting a failure at: > > 192.168.0.176_mysql.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > Error: Could not enable mysqld: You will find full trace in log > /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > > Please check log file > /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for > more information > > > The log message is: > > Notice: > /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > > Dependency Service[mysqld] has failures: true > Warning: > /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > > Skipping because of failed dependencies > I suspect there were more log messages before those you've posted that could reveal the cause of the failure. > > mysqld was successfully installed, and is running. 
> > Before I start digging deeper, I wondered if this is something > that's already been encountered. > > Thanks. > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTu/foAAoJEC5aWaUY1u57sxEIAI9NMwIAX3AnRYHwD16mhOzA iELjuho/mWqnYoTjJx74QtJVB8SiArr7+KsXHBiXQIbRng4TaXf8W6Rzd+D3Fy5+ GTGNd5Q2tRTgZVJlT4EYIWwCSGMofEhkcky7iKftM39WiKPco1Q4CBRQ//5S0M3g RkTfWcIMujKcWYPH+8jydMQL17fgDKnqZBwUL9YBdsIPAg5dVyIUflC2VxAtEcQ2 ne3ITam2Nl7dwEfdMdrsPHapbOrosr8AIyFkpAkXrRZpF4P1MGVeN3MKC3E4N/O7 t3hUD+5u4x5ZHgCprVzGkZp7Hcqr2A8WAROqI73kveo4i62Y9BLzIy5b7yTl7jI= =dzu2 -----END PGP SIGNATURE----- From rbowen at redhat.com Tue Jul 8 14:41:24 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 08 Jul 2014 10:41:24 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <53BBF7E8.9060503@redhat.com> References: <53BBF488.5020501@redhat.com> <53BBF7E8.9060503@redhat.com> Message-ID: <53BC0314.8050304@redhat.com> On 07/08/2014 09:53 AM, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 08/07/14 15:39, Rich Bowen wrote: >> I'm running `packstack --allinone` on a fresh install of the new >> CentOS7, and I'm getting a failure at: >> >> 192.168.0.176_mysql.pp: [ ERROR ] >> Applying Puppet manifests [ ERROR ] >> >> ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp >> Error: Could not enable mysqld: You will find full trace in log >> /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log >> >> Please check log file >> /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for >> more information >> >> >> The log message is: >> >> Notice: >> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: >> >> > Dependency Service[mysqld] has failures: true >> Warning: >> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: >> >> > Skipping because of failed dependencies > I suspect there were more log messages before those you've posted that > could reveal the cause of the failure. The full log file is attached, and I'm working through it now. If someone has more immediate insight, that would be awesome. Thanks. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: mysql.pp.log Type: text/x-log Size: 16376 bytes Desc: not available URL: From jruzicka at redhat.com Tue Jul 8 18:13:05 2014 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Tue, 08 Jul 2014 20:13:05 +0200 Subject: [Rdo-list] plan to add a parameter to config glance backend? In-Reply-To: References: <20140701230012.GC11253@redhat.com> <20140702075432.GD11253@redhat.com> Message-ID: <53BC34B1.2060108@redhat.com> I assume you're using packstack to deploy RDO. Packstack is a proof-of-concept/demo installer and this functionality is out of its scope. If you require sophisticated installer, use something else such as Foreman. Cheers Jakub Ruzicka On 7.7.2014 04:25, Kun Huang wrote: > I know that... I'm using RDO to deploy somethings including Glance. And I > want to config backend store in RDO's answer file, not Glance configures > manually. > > > On Wed, Jul 2, 2014 at 3:54 PM, Flavio Percoco wrote: > >> On 02/07/14 09:45 +0800, Kun Huang wrote: >> >>> I need this: CONFIG_GLANCE_BACKEND=file|rbd|.... >>> >> >> I'm not sure if you're talking about the client or the server. 
In >> glance-api there are 2 config options. The first one `known_stores` is >> used to enable/disable stores. The second one `default_store` allows >> you to specify which store should be used as the default one when none >> is passed to the API. >> >> Is there something missing in the above-mentioned options? >> >> Cheers, >> Flavio >> >> >> >>> On Wed, Jul 2, 2014 at 7:00 AM, Flavio Percoco wrote: >>> >>>> On 26/06/14 16:50 +0800, Kun Huang wrote: >>>> >>>>> >>>>> Hi all >>>>> >>>>> Is there such a plan now? Actually it's okay to adjust glance.conf >>>>> only. Deploying ceph is not necessary. >>>>> >>>> >>>> >>>> Hi Kun, >>>> >>>> I'm not sure I understand your question. What config parameter do you >>>> need? What do you think is missing in Glance? >>>> >>>> Cheers, >>>> Flavio >>>> >>>> -- >>>> @flaper87 >>>> Flavio Percoco >>>> >>> >> -- >> @flaper87 >> Flavio Percoco >> > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From rbowen at redhat.com Tue Jul 8 19:45:50 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 08 Jul 2014 15:45:50 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <53BC0314.8050304@redhat.com> References: <53BBF488.5020501@redhat.com> <53BBF7E8.9060503@redhat.com> <53BC0314.8050304@redhat.com> Message-ID: <53BC4A6E.30600@redhat.com> On 07/08/2014 10:41 AM, Rich Bowen wrote: > > On 07/08/2014 09:53 AM, Ihar Hrachyshka wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA512 >> >> On 08/07/14 15:39, Rich Bowen wrote: >>> I'm running `packstack --allinone` on a fresh install of the new >>> CentOS7, and I'm getting a failure at: >>> >>> 192.168.0.176_mysql.pp: [ ERROR ] >>> Applying Puppet manifests [ ERROR ] >>> >>> ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp >>> Error: Could not enable mysqld: You will find full trace in log >>> /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log >>> >>> >>> Please check log file >>> /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for >>> more information >>> >>> >>> The log message is: >>> >>> Notice: >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: >>> >>> >>> >> Dependency Service[mysqld] has failures: true >>> Warning: >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: >>> >>> >>> >> Skipping because of failed dependencies >> I suspect there were more log messages before those you've posted that >> could reveal the cause of the failure. > > The full log file is attached, and I'm working through it now. If > someone has more immediate insight, that would be awesome. Thanks. No joy so far, except that it does *not* seem to be related to https://bugzilla.redhat.com/show_bug.cgi?id=1117035 -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From mattdm at mattdm.org Tue Jul 8 22:57:59 2014 From: mattdm at mattdm.org (Matthew Miller) Date: Tue, 8 Jul 2014 18:57:59 -0400 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> Message-ID: <20140708225759.GA17208@mattdm.org> On Fri, Jul 04, 2014 at 11:40:17AM +0200, Madko wrote: > Nope didn't try this one, but no luck, same problem :( (I tried root, > fedora, and now cloud-user) On Fedora 19 and Fedora 20, "fedora" is the right user by default. 
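For example, assuming the key pair file and a floating IP like the ones used earlier in this thread (both are placeholders here), the login for a Fedora 19/20 cloud image would look like:

ssh -i my_key_pair.pem fedora@FLOATING_IP

while Cirros images expect the cirros user, and RHEL/CentOS cloud images typically expect cloud-user, which matches the usernames tried above.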
-- Matthew Miller mattdm at mattdm.org Fedora Project Leader mattdm at fedoraproject.org From bderzhavets at hotmail.com Wed Jul 9 05:35:24 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 9 Jul 2014 01:35:24 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <53BC4A6E.30600@redhat.com> References: <53BBF488.5020501@redhat.com> <53BBF7E8.9060503@redhat.com>, <53BC0314.8050304@redhat.com>, <53BC4A6E.30600@redhat.com> Message-ID: Please view https://bugzilla.redhat.com/show_bug.cgi?id=981116 ############ Comment 36 ############ So workaround is: rm /usr/lib/systemd/system/mysqld.service cp /usr/lib/systemd/system/mariadb.service /usr/lib/systemd/system/mysqld.service Works for me on CentOS 7 . Before packstack rerun:- # systemctl stop mariadb > Date: Tue, 8 Jul 2014 15:45:50 -0400 > From: rbowen at redhat.com > To: rdo-list at redhat.com > Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 > > > On 07/08/2014 10:41 AM, Rich Bowen wrote: > > > > On 07/08/2014 09:53 AM, Ihar Hrachyshka wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- > >> Hash: SHA512 > >> > >> On 08/07/14 15:39, Rich Bowen wrote: > >>> I'm running `packstack --allinone` on a fresh install of the new > >>> CentOS7, and I'm getting a failure at: > >>> > >>> 192.168.0.176_mysql.pp: [ ERROR ] > >>> Applying Puppet manifests [ ERROR ] > >>> > >>> ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > >>> Error: Could not enable mysqld: You will find full trace in log > >>> /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > >>> > >>> > >>> Please check log file > >>> /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for > >>> more information > >>> > >>> > >>> The log message is: > >>> > >>> Notice: > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > >>> > >>> > >>> > >> Dependency Service[mysqld] has failures: true > >>> Warning: > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > >>> > >>> > >>> > >> Skipping because of failed dependencies > >> I suspect there were more log messages before those you've posted that > >> could reveal the cause of the failure. > > > > The full log file is attached, and I'm working through it now. If > > someone has more immediate insight, that would be awesome. Thanks. > > No joy so far, except that it does *not* seem to be related to > https://bugzilla.redhat.com/show_bug.cgi?id=1117035 > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://openstack.redhat.com/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bderzhavets at hotmail.com Wed Jul 9 05:51:40 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 9 Jul 2014 01:51:40 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: References: <53BBF488.5020501@redhat.com>,<53BBF7E8.9060503@redhat.com>, <53BC0314.8050304@redhat.com>, , <53BC4A6E.30600@redhat.com>, Message-ID: Packstack fails running nova.pp :- Installing Dependencies [ DONE ] Copying Puppet modules and manifests [ DONE ] Applying 192.169.142.57_prescript.pp 192.169.142.57_prescript.pp: [ DONE ] Applying 192.169.142.57_mysql.pp Applying 192.169.142.57_amqp.pp 192.169.142.57_mysql.pp: [ DONE ] 192.169.142.57_amqp.pp: [ DONE ] Applying 192.169.142.57_keystone.pp Applying 192.169.142.57_glance.pp Applying 192.169.142.57_cinder.pp 192.169.142.57_keystone.pp: [ DONE ] 192.169.142.57_glance.pp: [ DONE ] 192.169.142.57_cinder.pp: [ DONE ] Applying 192.169.142.57_api_nova.pp 192.169.142.57_api_nova.pp: [ DONE ] Applying 192.169.142.57_nova.pp 192.169.142.57_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.169.142.57_nova.pp Error: /Service[messagebus]: Could not evaluate: Could not find init script for 'messagebus' You will find full trace in log /var/tmp/packstack/20140709-094621-U8Ewey/manifests/192.169.142.57_nova.pp.log Please check log file /var/tmp/packstack/20140709-094621-U8Ewey/openstack-setup.log for more information Boris. From: bderzhavets at hotmail.com To: rbowen at redhat.com; rdo-list at redhat.com Date: Wed, 9 Jul 2014 01:35:24 -0400 Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 Please view https://bugzilla.redhat.com/show_bug.cgi?id=981116 ############ Comment 36 ############ So workaround is: rm /usr/lib/systemd/system/mysqld.service cp /usr/lib/systemd/system/mariadb.service /usr/lib/systemd/system/mysqld.service Works for me on CentOS 7 . Before packstack rerun:- # systemctl stop mariadb > Date: Tue, 8 Jul 2014 15:45:50 -0400 > From: rbowen at redhat.com > To: rdo-list at redhat.com > Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 > > > On 07/08/2014 10:41 AM, Rich Bowen wrote: > > > > On 07/08/2014 09:53 AM, Ihar Hrachyshka wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- > >> Hash: SHA512 > >> > >> On 08/07/14 15:39, Rich Bowen wrote: > >>> I'm running `packstack --allinone` on a fresh install of the new > >>> CentOS7, and I'm getting a failure at: > >>> > >>> 192.168.0.176_mysql.pp: [ ERROR ] > >>> Applying Puppet manifests [ ERROR ] > >>> > >>> ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > >>> Error: Could not enable mysqld: You will find full trace in log > >>> /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > >>> > >>> > >>> Please check log file > >>> /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for > >>> more information > >>> > >>> > >>> The log message is: > >>> > >>> Notice: > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > >>> > >>> > >>> > >> Dependency Service[mysqld] has failures: true > >>> Warning: > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > >>> > >>> > >>> > >> Skipping because of failed dependencies > >> I suspect there were more log messages before those you've posted that > >> could reveal the cause of the failure. > > > > The full log file is attached, and I'm working through it now. 
If > > someone has more immediate insight, that would be awesome. Thanks. > > No joy so far, except that it does *not* seem to be related to > https://bugzilla.redhat.com/show_bug.cgi?id=1117035 > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://openstack.redhat.com/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Wed Jul 9 09:39:18 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 09 Jul 2014 11:39:18 +0200 Subject: [Rdo-list] Fwd: Fedora 21 Mass Branching In-Reply-To: <20140709012230.0903b854@adria.ausil.us> References: <20140709012230.0903b854@adria.ausil.us> Message-ID: <53BD0DC6.9090501@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 To whom it may concern: Fedora 21 was branched, so from now on any fix to Icehouse should go to el6-icehouse (EL6) and f21 (next Fedora release + EL7). As for master, Juno should eventually arrive there. Till that time, we still probably want to track Icehouse backports there not to leave the branch without proper fixes that reached other Icehouse branches. /Ihar - -------- Original Message -------- Subject: Fedora 21 Mass Branching Date: Wed, 9 Jul 2014 01:22:30 -0500 From: Dennis Gilmore Reply-To: devel at lists.fedoraproject.org To: devel-announce at lists.fedoraproject.org Hi All, Fedora 21 has been branched, please be sure to do a git pull --rebase to pick up the new branch, as an additional reminder rawhide/f22 has had inheritance cut off from previous releases, so this means that anything you do for f21 you also have to do in the master branch and do a build there. This is the same as we did for fedora 19 and 20. Dennis _______________________________________________ devel-announce mailing list devel-announce at lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/devel-announce -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTvQ3GAAoJEC5aWaUY1u57J+cH/3tGhazyADj+RTRUrFqx0HOA hflUMxDkQS2yvRHnaaVSgzOaRV0GT6lXiewxMlTb3HcxsrF/CJp3EU+sUVwFewD/ 8VdFGOq8GMoElAzrZddPPzVsgd8biojWCMqCF8BYetzDlUCxLnz18SszdC/HPiGk yP3NjIex0AOP8YGZtUZg78QwHfTlKmzr2ozONt3qoe37sAoiOT16uLYu0FQmEA5n iRvO5wRmdyH1H/gozAVdkZVCzdvvZwQHUj9NF+dN7Pwbkg1qYzq8H8Seu0ijbv8n 98m153VH2IOVS82FD9wLXARcZ9fJjotK2nJZl2fsMdGcyEhoJ5A0tYmM4pYhCYM= =BPl6 -----END PGP SIGNATURE----- From madko77 at gmail.com Wed Jul 9 12:14:38 2014 From: madko77 at gmail.com (Madko) Date: Wed, 9 Jul 2014 14:14:38 +0200 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: <1378466327.5928093.1404637734577.JavaMail.zimbra@redhat.com> References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> <1378466327.5928093.1404637734577.JavaMail.zimbra@redhat.com> Message-ID: thanks Assaf for the point. But When I try to connect to 169.254.169.254 on the Cirros VM I only get a no route to host. On the Fedora VM, the CloudInit seems to know the IP address of the metadata agent and try to contact this IP (instead of 169.254.169.254). 
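(As a rough first check here, the metadata path can be probed from both ends; the log paths below are the RDO defaults and are shown only as an illustration:

curl http://169.254.169.254/latest/meta-data/instance-id   (from inside the instance)
grep -i error /var/log/neutron/metadata-agent.log          (on the network node)
grep -i error /var/log/nova/api.log                        (on the controller)

If the curl times out, the request is never reaching the neutron-metadata-agent proxy at all, which is exactly what the NAT rule discussed below is responsible for.)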
I can't recall where exactly but I remember I read something about neutron doing some NAT with iptables to forward the http request to the right address. How can I check this NAT rules? I don't see anything like this with iptables. 2014-07-06 11:08 GMT+02:00 Assaf Muller : > > > ----- Original Message ----- > > Great!!! I can connect to the cirros instance. But no > .ssh/authorized_keys. > > Seems the metadata api is not available. Where is it supposed to be > hosted? > > what service? is it the neutron-metadata-agent? > > > > Metadata is hosted by the nova-api server. When using Neutron, the > neutron-metadata- > agent on the network node proxies metadata requests to nova-api. It does a > couple > of queries to Neutron,adds the instance-id to the request and forwards the > message > to nova-api. This is because when using nova-network you cannot have > overlapping IPs > so the nova metadata server can figure out the instance ID from its IP. > Neutron > does support overlapping IPs so that's why the neutron-metadata-agent > exists. > > If curl 169.254.169.254 doesn't work, check for errors in the neutron > metadata > agent logs and in nova-api as well. > > > > > 2014-07-04 13:13 GMT+02:00 Madko < madko77 at gmail.com > : > > > > > > > > Thank you Rhys and Vimal, I'll try cyrros image right now. > > > > > > 2014-07-04 11:55 GMT+02:00 Vimal Kumar < vimal7370 at gmail.com > : > > > > > > > > > > Use cirros (13M) image to test if ssh key-pair injection is working or > not: > > > > http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img > > > > ssh as: cirros@ > > > > In case if your ssh key isn't working, the password is cubswin:) > > > > > > > > On Fri, Jul 4, 2014 at 3:20 PM, Madko < madko77 at gmail.com > wrote: > > > > > > > > I've just deployed OpenStack so I don't have any other image. I can try > to > > make one. Is cloudInit easy to install on Fedora ? I have some CentOS > images > > too, but no cloudInit. > > > > > > 2014-07-04 11:45 GMT+02:00 Rhys Oxenham < roxenham at redhat.com > : > > > > > > > > Can you try another image to make sure that key pair injection is working > > inside of your environment? i.e. an image you already know the password > for > > so you can check via VNC or passworded ssh login? > > > > Cheers > > Rhys > > > > On 4 Jul 2014, at 10:40, Madko < madko77 at gmail.com > wrote: > > > > > Nope didn't try this one, but no luck, same problem :( (I tried root, > > > fedora, and now cloud-user) > > > > > > [root at openstack-neutron ~]# ip netns exec > > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 > > > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. > > > 64 bytes from 192.168.2.4 : icmp_seq=1 ttl=64 time=2.02 ms > > > 64 bytes from 192.168.2.4 : icmp_seq=2 ttl=64 time=1.90 ms > > > ^C > > > --- 192.168.2.4 ping statistics --- > > > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms > > > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms > > > [root at openstack-neutron ~]# ip netns exec > > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l > > > cloud-user 192.168.2.4 > > > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > > > > > > > > > > > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham < roxenham at redhat.com >: > > > Hi, > > > > > > Did you try with using the ?cloud-user? login username? 
> > > > > > Thanks > > > Rhys > > > > > > On 4 Jul 2014, at 09:22, Madko < madko77 at gmail.com > wrote: > > > > > > > Hi, > > > > > > > > I have an almost working openstack platform deployed via foreman. > When I > > > > launch an instance from the Fedora 19 cloud image, everything seems > > > > fine, the VM is running on one of my hypervisor, but I can't access > it > > > > (ping is ok)... > > > > > > > > I'm following this documentation > > > > http://openstack.redhat.com/Running_an_instance > > > > > > > > I only get a permission denied when I do the last part: > > > > ssh -l root -i my_key_pair.pem floating_ip_address > > > > I also try by importing an ssh key. Same error. > > > > > > > > In the VM console, I see that CloudInit service is starting inside > the > > > > VM, no error are shown here. So my question is: Where are the logs > for > > > > that parts (cloud init server) in openstack ? Is the above > documentation > > > > fine ? > > > > > > > > best regards, > > > > > > > > -- > > > > Edouard Bourguignon > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > -- > > > Edouard Bourguignon > > > > > > > > > > -- > > Edouard Bourguignon > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > -- > > Edouard Bourguignon > > > > > > > > -- > > Edouard Bourguignon > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxenham at redhat.com Wed Jul 9 12:21:56 2014 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 9 Jul 2014 13:21:56 +0100 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> <1378466327.5928093.1404637734577.JavaMail.zimbra@redhat.com> Message-ID: <2EF74CD1-7183-4D89-86C5-0469530C7A42@redhat.com> On 9 Jul 2014, at 13:14, Madko wrote: > thanks Assaf for the point. But When I try to connect to 169.254.169.254 on the Cirros VM I only get a no route to host. On the Fedora VM, the CloudInit seems to know the IP address of the metadata agent and try to contact this IP (instead of 169.254.169.254). I can't recall where exactly but I remember I read something about neutron doing some NAT with iptables to forward the http request to the right address. How can I check this NAT rules? I don't see anything like this with iptables. > You can check the NAT rules on the L3 agent (if using the default configuration)... 'ip netns list? to find the router namespace appropriate for the tenant network that your instance is operating in. You may have to use ?neutron router-list? to find the correct UUID. Then, execute the following to check the NAT rules: ?ip netns exec qrouter-XXXX iptables -L -t nat? 
(replace XXXX with the router ID you found in the previous command) You should expect something in the output like this: Chain neutron-l3-agent-PREROUTING (1 references) target prot opt source destination REDIRECT tcp -- anywhere 169.254.169.254 tcp dpt:http redir ports 9697 > > 2014-07-06 11:08 GMT+02:00 Assaf Muller : > > > ----- Original Message ----- > > Great!!! I can connect to the cirros instance. But no .ssh/authorized_keys. > > Seems the metadata api is not available. Where is it supposed to be hosted? > > what service? is it the neutron-metadata-agent? > > > > Metadata is hosted by the nova-api server. When using Neutron, the neutron-metadata- > agent on the network node proxies metadata requests to nova-api. It does a couple > of queries to Neutron,adds the instance-id to the request and forwards the message > to nova-api. This is because when using nova-network you cannot have overlapping IPs > so the nova metadata server can figure out the instance ID from its IP. Neutron > does support overlapping IPs so that's why the neutron-metadata-agent exists. > > If curl 169.254.169.254 doesn't work, check for errors in the neutron metadata > agent logs and in nova-api as well. > > > > > 2014-07-04 13:13 GMT+02:00 Madko < madko77 at gmail.com > : > > > > > > > > Thank you Rhys and Vimal, I'll try cyrros image right now. > > > > > > 2014-07-04 11:55 GMT+02:00 Vimal Kumar < vimal7370 at gmail.com > : > > > > > > > > > > Use cirros (13M) image to test if ssh key-pair injection is working or not: > > > > http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img > > > > ssh as: cirros@ > > > > In case if your ssh key isn't working, the password is cubswin:) > > > > > > > > On Fri, Jul 4, 2014 at 3:20 PM, Madko < madko77 at gmail.com > wrote: > > > > > > > > I've just deployed OpenStack so I don't have any other image. I can try to > > make one. Is cloudInit easy to install on Fedora ? I have some CentOS images > > too, but no cloudInit. > > > > > > 2014-07-04 11:45 GMT+02:00 Rhys Oxenham < roxenham at redhat.com > : > > > > > > > > Can you try another image to make sure that key pair injection is working > > inside of your environment? i.e. an image you already know the password for > > so you can check via VNC or passworded ssh login? > > > > Cheers > > Rhys > > > > On 4 Jul 2014, at 10:40, Madko < madko77 at gmail.com > wrote: > > > > > Nope didn't try this one, but no luck, same problem :( (I tried root, > > > fedora, and now cloud-user) > > > > > > [root at openstack-neutron ~]# ip netns exec > > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 > > > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. > > > 64 bytes from 192.168.2.4 : icmp_seq=1 ttl=64 time=2.02 ms > > > 64 bytes from 192.168.2.4 : icmp_seq=2 ttl=64 time=1.90 ms > > > ^C > > > --- 192.168.2.4 ping statistics --- > > > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms > > > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms > > > [root at openstack-neutron ~]# ip netns exec > > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l > > > cloud-user 192.168.2.4 > > > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). > > > > > > > > > > > > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham < roxenham at redhat.com >: > > > Hi, > > > > > > Did you try with using the ?cloud-user? login username? 
> > > > > > Thanks > > > Rhys > > > > > > On 4 Jul 2014, at 09:22, Madko < madko77 at gmail.com > wrote: > > > > > > > Hi, > > > > > > > > I have an almost working openstack platform deployed via foreman. When I > > > > launch an instance from the Fedora 19 cloud image, everything seems > > > > fine, the VM is running on one of my hypervisor, but I can't access it > > > > (ping is ok)... > > > > > > > > I'm following this documentation > > > > http://openstack.redhat.com/Running_an_instance > > > > > > > > I only get a permission denied when I do the last part: > > > > ssh -l root -i my_key_pair.pem floating_ip_address > > > > I also try by importing an ssh key. Same error. > > > > > > > > In the VM console, I see that CloudInit service is starting inside the > > > > VM, no error are shown here. So my question is: Where are the logs for > > > > that parts (cloud init server) in openstack ? Is the above documentation > > > > fine ? > > > > > > > > best regards, > > > > > > > > -- > > > > Edouard Bourguignon > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > -- > > > Edouard Bourguignon > > > > > > > > > > -- > > Edouard Bourguignon > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > -- > > Edouard Bourguignon > > > > > > > > -- > > Edouard Bourguignon > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > -- > Edouard Bourguignon > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From madko77 at gmail.com Wed Jul 9 12:53:51 2014 From: madko77 at gmail.com (Madko) Date: Wed, 9 Jul 2014 14:53:51 +0200 Subject: [Rdo-list] ssh access to a fedora cloud image instance In-Reply-To: <2EF74CD1-7183-4D89-86C5-0469530C7A42@redhat.com> References: <082F3E2A-781E-4418-81F7-4DE1BE47F27B@redhat.com> <9C43E151-070A-44EB-998A-3C8F6F027C18@redhat.com> <1378466327.5928093.1404637734577.JavaMail.zimbra@redhat.com> <2EF74CD1-7183-4D89-86C5-0469530C7A42@redhat.com> Message-ID: Ok I have no NAT rules, so the problem seems to be on the l3-agent. I will check that part. Thanks again for the help. 2014-07-09 14:21 GMT+02:00 Rhys Oxenham : > On 9 Jul 2014, at 13:14, Madko wrote: > > > thanks Assaf for the point. But When I try to connect to 169.254.169.254 > on the Cirros VM I only get a no route to host. On the Fedora VM, the > CloudInit seems to know the IP address of the metadata agent and try to > contact this IP (instead of 169.254.169.254). I can't recall where exactly > but I remember I read something about neutron doing some NAT with iptables > to forward the http request to the right address. How can I check this NAT > rules? I don't see anything like this with iptables. > > > > You can check the NAT rules on the L3 agent (if using the default > configuration)... > > 'ip netns list? to find the router namespace appropriate for the tenant > network that your instance is operating in. > > You may have to use ?neutron router-list? to find the correct UUID. > > Then, execute the following to check the NAT rules: > > ?ip netns exec qrouter-XXXX iptables -L -t nat? 
(replace XXXX with the > router ID you found in the previous command) > > You should expect something in the output like this: > > Chain neutron-l3-agent-PREROUTING (1 references) > target prot opt source destination > REDIRECT tcp -- anywhere 169.254.169.254 tcp dpt:http > redir ports 9697 > > > > > > 2014-07-06 11:08 GMT+02:00 Assaf Muller : > > > > > > ----- Original Message ----- > > > Great!!! I can connect to the cirros instance. But no > .ssh/authorized_keys. > > > Seems the metadata api is not available. Where is it supposed to be > hosted? > > > what service? is it the neutron-metadata-agent? > > > > > > > Metadata is hosted by the nova-api server. When using Neutron, the > neutron-metadata- > > agent on the network node proxies metadata requests to nova-api. It does > a couple > > of queries to Neutron,adds the instance-id to the request and forwards > the message > > to nova-api. This is because when using nova-network you cannot have > overlapping IPs > > so the nova metadata server can figure out the instance ID from its IP. > Neutron > > does support overlapping IPs so that's why the neutron-metadata-agent > exists. > > > > If curl 169.254.169.254 doesn't work, check for errors in the neutron > metadata > > agent logs and in nova-api as well. > > > > > > > > 2014-07-04 13:13 GMT+02:00 Madko < madko77 at gmail.com > : > > > > > > > > > > > > Thank you Rhys and Vimal, I'll try cyrros image right now. > > > > > > > > > 2014-07-04 11:55 GMT+02:00 Vimal Kumar < vimal7370 at gmail.com > : > > > > > > > > > > > > > > > Use cirros (13M) image to test if ssh key-pair injection is working or > not: > > > > > > http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img > > > > > > ssh as: cirros@ > > > > > > In case if your ssh key isn't working, the password is cubswin:) > > > > > > > > > > > > On Fri, Jul 4, 2014 at 3:20 PM, Madko < madko77 at gmail.com > wrote: > > > > > > > > > > > > I've just deployed OpenStack so I don't have any other image. I can > try to > > > make one. Is cloudInit easy to install on Fedora ? I have some CentOS > images > > > too, but no cloudInit. > > > > > > > > > 2014-07-04 11:45 GMT+02:00 Rhys Oxenham < roxenham at redhat.com > : > > > > > > > > > > > > Can you try another image to make sure that key pair injection is > working > > > inside of your environment? i.e. an image you already know the > password for > > > so you can check via VNC or passworded ssh login? > > > > > > Cheers > > > Rhys > > > > > > On 4 Jul 2014, at 10:40, Madko < madko77 at gmail.com > wrote: > > > > > > > Nope didn't try this one, but no luck, same problem :( (I tried root, > > > > fedora, and now cloud-user) > > > > > > > > [root at openstack-neutron ~]# ip netns exec > > > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ping 192.168.2.4 > > > > PING 192.168.2.4 (192.168.2.4) 56(84) bytes of data. > > > > 64 bytes from 192.168.2.4 : icmp_seq=1 ttl=64 time=2.02 ms > > > > 64 bytes from 192.168.2.4 : icmp_seq=2 ttl=64 time=1.90 ms > > > > ^C > > > > --- 192.168.2.4 ping statistics --- > > > > 2 packets transmitted, 2 received, 0% packet loss, time 1161ms > > > > rtt min/avg/max/mdev = 1.900/1.964/2.029/0.078 ms > > > > [root at openstack-neutron ~]# ip netns exec > > > > qdhcp-1d742b5e-c3f3-430f-b8a9-275bcbf967a3 ssh -i neutron_test.pem -l > > > > cloud-user 192.168.2.4 > > > > Permission denied (publickey,gssapi-keyex,gssapi-with-mic). 
> > > > > > > > > > > > > > > > 2014-07-04 11:09 GMT+02:00 Rhys Oxenham < roxenham at redhat.com >: > > > > Hi, > > > > > > > > Did you try with using the ?cloud-user? login username? > > > > > > > > Thanks > > > > Rhys > > > > > > > > On 4 Jul 2014, at 09:22, Madko < madko77 at gmail.com > wrote: > > > > > > > > > Hi, > > > > > > > > > > I have an almost working openstack platform deployed via foreman. > When I > > > > > launch an instance from the Fedora 19 cloud image, everything seems > > > > > fine, the VM is running on one of my hypervisor, but I can't > access it > > > > > (ping is ok)... > > > > > > > > > > I'm following this documentation > > > > > http://openstack.redhat.com/Running_an_instance > > > > > > > > > > I only get a permission denied when I do the last part: > > > > > ssh -l root -i my_key_pair.pem floating_ip_address > > > > > I also try by importing an ssh key. Same error. > > > > > > > > > > In the VM console, I see that CloudInit service is starting inside > the > > > > > VM, no error are shown here. So my question is: Where are the logs > for > > > > > that parts (cloud init server) in openstack ? Is the above > documentation > > > > > fine ? > > > > > > > > > > best regards, > > > > > > > > > > -- > > > > > Edouard Bourguignon > > > > > _______________________________________________ > > > > > Rdo-list mailing list > > > > > Rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > > > > > > -- > > > > Edouard Bourguignon > > > > > > > > > > > > > > > -- > > > Edouard Bourguignon > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > > > > -- > > > Edouard Bourguignon > > > > > > > > > > > > -- > > > Edouard Bourguignon > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > -- > > Edouard Bourguignon > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > -- Edouard Bourguignon -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed Jul 9 14:09:54 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 9 Jul 2014 10:09:54 -0400 Subject: [Rdo-list] Attempt of RDO AIO install IceHouse on CentOS 7 Message-ID: https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Wed Jul 9 14:46:01 2014 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 9 Jul 2014 10:46:01 -0400 (EDT) Subject: [Rdo-list] Attempt of RDO AIO install IceHouse on CentOS 7 In-Reply-To: References: Message-ID: <574817848.570302.1404917161589.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Boris Derzhavets" > To: rdo-list at redhat.com > Sent: Wednesday, July 9, 2014 7:09:54 AM > Subject: [Rdo-list] Attempt of RDO AIO install IceHouse on CentOS 7 > > https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/ This seems likely to be a bug in PackStack and/or the puppet manifests and I have filed it here: https://bugzilla.redhat.com/show_bug.cgi?id=1117871 Thanks, -- Steve Gordon, RHCE Sr. 
Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From rohara at redhat.com Wed Jul 9 17:15:34 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Wed, 9 Jul 2014 12:15:34 -0500 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: References: <53BBF488.5020501@redhat.com> <53BBF7E8.9060503@redhat.com> <53BC0314.8050304@redhat.com> <53BC4A6E.30600@redhat.com> Message-ID: <20140709171533.GA5087@redhat.com> On Wed, Jul 09, 2014 at 01:35:24AM -0400, Boris Derzhavets wrote: > Please view https://bugzilla.redhat.com/show_bug.cgi?id=981116 > > ############ > Comment 36 > ############ > > So workaround is: > > > rm /usr/lib/systemd/system/mysqld.service > > cp /usr/lib/systemd/system/mariadb.service /usr/lib/systemd/system/mysqld.service > > Works for me on CentOS 7 . Before packstack rerun:- > > # systemctl stop mariadb You should not have to do this. If you're installing RDO Icehouse, you should be getting mariadb-galera-server, which should be creating both the mariadb and mysqld service files. They should be identical. Which database package is being installed? Ryan > > Date: Tue, 8 Jul 2014 15:45:50 -0400 > > From: rbowen at redhat.com > > To: rdo-list at redhat.com > > Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 > > > > > > On 07/08/2014 10:41 AM, Rich Bowen wrote: > > > > > > On 07/08/2014 09:53 AM, Ihar Hrachyshka wrote: > > >> -----BEGIN PGP SIGNED MESSAGE----- > > >> Hash: SHA512 > > >> > > >> On 08/07/14 15:39, Rich Bowen wrote: > > >>> I'm running `packstack --allinone` on a fresh install of the new > > >>> CentOS7, and I'm getting a failure at: > > >>> > > >>> 192.168.0.176_mysql.pp: [ ERROR ] > > >>> Applying Puppet manifests [ ERROR ] > > >>> > > >>> ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > > >>> Error: Could not enable mysqld: You will find full trace in log > > >>> /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > > >>> > > >>> > > >>> Please check log file > > >>> /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for > > >>> more information > > >>> > > >>> > > >>> The log message is: > > >>> > > >>> Notice: > > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > > >>> > > >>> > > >>> > > >> Dependency Service[mysqld] has failures: true > > >>> Warning: > > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > > >>> > > >>> > > >>> > > >> Skipping because of failed dependencies > > >> I suspect there were more log messages before those you've posted that > > >> could reveal the cause of the failure. > > > > > > The full log file is attached, and I'm working through it now. If > > > someone has more immediate insight, that would be awesome. Thanks. 
> > > > No joy so far, except that it does *not* seem to be related to > > https://bugzilla.redhat.com/show_bug.cgi?id=1117035 > > > > -- > > Rich Bowen - rbowen at redhat.com > > OpenStack Community Liaison > > http://openstack.redhat.com/ > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From bderzhavets at hotmail.com Wed Jul 9 18:16:06 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 9 Jul 2014 14:16:06 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: <20140709171533.GA5087@redhat.com> References: <53BBF488.5020501@redhat.com>, <53BBF7E8.9060503@redhat.com>, <53BC0314.8050304@redhat.com>, <53BC4A6E.30600@redhat.com>, , <20140709171533.GA5087@redhat.com> Message-ID: Clean install IceHouse on Fedora 20 gives ------------------------------------------------------------------------------------------------------------------ [root at icehouse1 ~(keystone_admin)]# service mariadb status Redirecting to /bin/systemctl status mariadb.service mariadb.service - MariaDB database server Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled) Active: active (running) since Wed 2014-07-09 09:08:21 MSK; 12h ago Process: 1610 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS) Process: 760 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS) Main PID: 1609 (mysqld_safe) CGroup: /system.slice/mariadb.service ??1609 /bin/sh /usr/bin/mysqld_safe --basedir=/usr ??3328 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/us... Jul 09 09:08:13 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:08 mysqld_safe Logging .... Jul 09 09:08:13 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:08 mysqld_safe Starting...l Jul 09 09:08:13 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:08 mysqld_safe WSREP: R...' Jul 09 09:08:18 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:18 mysqld_safe WSREP: R...1 Jul 09 09:08:21 icehouse1.localdomain systemd[1]: Started MariaDB database server. Hint: Some lines were ellipsized, use -l to show in full. ------------------------------------------------------------------------------------------------------------------ [root at icehouse1 ~(keystone_admin)]# service mysqld status Redirecting to /bin/systemctl status mysqld.service mariadb.service - MariaDB database server Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled) Active: active (running) since Wed 2014-07-09 09:08:21 MSK; 12h ago Process: 1610 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS) Process: 760 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS) Main PID: 1609 (mysqld_safe) CGroup: /system.slice/mariadb.service ??1609 /bin/sh /usr/bin/mysqld_safe --basedir=/usr ??3328 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/us... Jul 09 09:08:13 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:08 mysqld_safe Logging .... Jul 09 09:08:13 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:08 mysqld_safe Starting...l Jul 09 09:08:13 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:08 mysqld_safe WSREP: R...' 
Jul 09 09:08:18 icehouse1.localdomain mysqld_safe[1609]: 140709 09:08:18 mysqld_safe WSREP: R...1 Jul 09 09:08:21 icehouse1.localdomain systemd[1]: Started MariaDB database server. Hint: Some lines were ellipsized, use -l to show in full. ----------------------------------------------------------------------------------------------------------------- [root at icehouse1 ~(keystone_admin)]# rpm -qa | grep mariadb mariadb-5.5.37-1.fc20.x86_64 mariadb-galera-common-5.5.36-9.fc20.x86_64 mariadb-libs-5.5.37-1.fc20.x86_64 mariadb-galera-server-5.5.36-9.fc20.x86_64 ---------------------------------------------------------------- Same picture I have on CentOS 7 after hack. ---------------------------------------------------------------- [root at ip-192-169-142-37 ~]# rpm -qa | grep mariadb mariadb-galera-common-5.5.36-9.el7.x86_64 mariadb-5.5.37-1.el7_0.x86_64 mariadb-galera-server-5.5.36-9.el7.x86_64 mariadb-libs-5.5.37-1.el7_0.x86_64 Thanks. Boris. > Date: Wed, 9 Jul 2014 12:15:34 -0500 > From: rohara at redhat.com > To: bderzhavets at hotmail.com > CC: rbowen at redhat.com; rdo-list at redhat.com > Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 > > On Wed, Jul 09, 2014 at 01:35:24AM -0400, Boris Derzhavets wrote: > > Please view https://bugzilla.redhat.com/show_bug.cgi?id=981116 > > > > ############ > > Comment 36 > > ############ > > > > So workaround is: > > > > > > rm /usr/lib/systemd/system/mysqld.service > > > > cp /usr/lib/systemd/system/mariadb.service /usr/lib/systemd/system/mysqld.service > > > > Works for me on CentOS 7 . Before packstack rerun:- > > > > # systemctl stop mariadb > > You should not have to do this. If you're installing RDO Icehouse, > you should be getting mariadb-galera-server, which should be creating > both the mariadb and mysqld service files. They should be identical. > > Which database package is being installed? > > Ryan > > > > Date: Tue, 8 Jul 2014 15:45:50 -0400 > > > From: rbowen at redhat.com > > > To: rdo-list at redhat.com > > > Subject: Re: [Rdo-list] mysqld failure on --allinone, centos7 > > > > > > > > > On 07/08/2014 10:41 AM, Rich Bowen wrote: > > > > > > > > On 07/08/2014 09:53 AM, Ihar Hrachyshka wrote: > > > >> -----BEGIN PGP SIGNED MESSAGE----- > > > >> Hash: SHA512 > > > >> > > > >> On 08/07/14 15:39, Rich Bowen wrote: > > > >>> I'm running `packstack --allinone` on a fresh install of the new > > > >>> CentOS7, and I'm getting a failure at: > > > >>> > > > >>> 192.168.0.176_mysql.pp: [ ERROR ] > > > >>> Applying Puppet manifests [ ERROR ] > > > >>> > > > >>> ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp > > > >>> Error: Could not enable mysqld: You will find full trace in log > > > >>> /var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log > > > >>> > > > >>> > > > >>> Please check log file > > > >>> /var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for > > > >>> more information > > > >>> > > > >>> > > > >>> The log message is: > > > >>> > > > >>> Notice: > > > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > > > >>> > > > >>> > > > >>> > > > >> Dependency Service[mysqld] has failures: true > > > >>> Warning: > > > >>> /Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron at 127.0.0.1/neutron]: > > > >>> > > > >>> > > > >>> > > > >> Skipping because of failed dependencies > > > >> I suspect there were more log messages before those you've posted that > > > >> could reveal the cause of the failure. 
> > > > > > > > The full log file is attached, and I'm working through it now. If > > > > someone has more immediate insight, that would be awesome. Thanks. > > > > > > No joy so far, except that it does *not* seem to be related to > > > https://bugzilla.redhat.com/show_bug.cgi?id=1117035 > > > > > > -- > > > Rich Bowen - rbowen at redhat.com > > > OpenStack Community Liaison > > > http://openstack.redhat.com/ > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > S -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Jul 9 18:25:46 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 09 Jul 2014 14:25:46 -0400 Subject: [Rdo-list] mysqld failure on --allinone, centos7 In-Reply-To: References: <53BBF488.5020501@redhat.com> <53BBF7E8.9060503@redhat.com>, <53BC0314.8050304@redhat.com>, <53BC4A6E.30600@redhat.com> Message-ID: <53BD892A.40404@redhat.com> On 07/09/2014 01:35 AM, Boris Derzhavets wrote: > Please view https://bugzilla.redhat.com/show_bug.cgi?id=981116 > > ############ > Comment 36 > ############ > > So workaround is: > rm /usr/lib/systemd/system/mysqld.service > cp /usr/lib/systemd/system/mariadb.service > /usr/lib/systemd/system/mysqld.service > > Works for me on CentOS 7 . Before packstack rerun:- > > # systemctl stop mariadb Thanks. Between that, and https://bugzilla.redhat.com/show_bug.cgi?id=1117035 the installation completed successfully, although with a "Redirecting to /bin/systemctl start mariadb.service" warning. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Thu Jul 10 15:37:47 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 10 Jul 2014 11:37:47 -0400 Subject: [Rdo-list] RDO Folsom and Grizzly EOL Message-ID: <53BEB34B.4010700@redhat.com> RDO's mission is to provide latest stable OpenStack packages. As such, anything back 2 from latest, we move into EOL[1] to indicate that we're not actively working on those packages any more. (Much like the Fedora release and EOL process [2] ) We've gotten a little behind on this over the last two cycles. In the next few days we'll be moving Folsom and Grizzly into the EOL directory to indicate their status. Also, a few weeks after the Juno 3 release[3] in September, we'll be moving Havana to EOL status as well. Thanks. --Rich [1] http://repos.fedorapeople.org/repos/openstack/EOL/ [2] https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule [3] https://wiki.openstack.org/wiki/Juno_Release_Schedule -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From ganguly at cisco.com Thu Jul 10 16:49:29 2014 From: ganguly at cisco.com (Chandra Ganguly (ganguly)) Date: Thu, 10 Jul 2014 16:49:29 +0000 Subject: [Rdo-list] Need Help: openstack Repos Are Missing Message-ID: Hi RedHat/Openstack Team I am trying to install foreman and I am seeing the following RPM missing, which is causing my the download of my foreman-installer to fail. 
Can somebody let me know what is the new openstack repo to get; I am running it on RHEL6.5 [root at foreman-server ~]# subscription-manager repos --enable rhel-6-server-openstack-4.0-rpms Error: rhel-6-server-openstack-4.0-rpms is not a valid repo ID. Use --list option to see valid repos. root at foreman-server ~]# subscription-manager repos --list | grep openstack [root at foreman-server ~]# yum install openstack-foreman-installer foreman-selinux Loaded plugins: priorities, product-id, security, subscription-manager This system is receiving updates from Red Hat Subscription Management. rhel-6-server-optional-rpms | 3.5 kB 00:00 rhel-6-server-realtime-rpms | 3.8 kB 00:00 rhel-6-server-rpms | 3.7 kB 00:00 rhel-ha-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-hpn-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-lb-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-rs-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-sap-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-sap-hana-for-rhel-6-server-rpms | 2.8 kB 00:00 rhel-scalefs-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-server-6-rhds-9-rpms | 3.1 kB 00:00 rhel-server-dts-6-rpms | 2.9 kB 00:00 rhel-server-dts2-6-rpms | 2.6 kB 00:00 rhel-sjis-for-rhel-6-server-rpms | 3.1 kB 00:00 Setting up Install Process No package openstack-foreman-installer available. No package foreman-selinux available. Error: Nothing to do Thanks Chandra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Thu Jul 10 16:59:12 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 10 Jul 2014 18:59:12 +0200 Subject: [Rdo-list] Need Help: openstack Repos Are Missing In-Reply-To: References: Message-ID: <53BEC660.2070906@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 10/07/14 18:49, Chandra Ganguly (ganguly) wrote: > Hi RedHat/Openstack Team > > I am trying to install foreman and I am seeing the following RPM > missing, which is causing my the download of my foreman-installer > to fail. Can somebody let me know what is the new openstack repo to > get; I am running it on RHEL6.5 > > > [root at foreman-server ~]# subscription-manager repos --enable > rhel-6-server-openstack-4.0-rpms > > Error: rhel-6-server-openstack-4.0-rpms is not a valid repo ID. Use > --list option to see valid repos. > > > root at foreman-server ~]# subscription-manager repos --list | grep > openstack > > > > [root at foreman-server ~]# yum install openstack-foreman-installer > foreman-selinux > > Loaded plugins: priorities, product-id, security, > subscription-manager > > This system is receiving updates from Red Hat Subscription > Management. > > rhel-6-server-optional-rpms | 3.5 kB > 00:00 > > rhel-6-server-realtime-rpms | 3.8 kB > 00:00 > > rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-ha-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-hpn-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-lb-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-rs-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-sap-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-sap-hana-for-rhel-6-server-rpms | 2.8 kB > 00:00 > > rhel-scalefs-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-server-6-rhds-9-rpms | 3.1 kB > 00:00 > > rhel-server-dts-6-rpms | 2.9 kB > 00:00 > > rhel-server-dts2-6-rpms | 2.6 kB > 00:00 > > rhel-sjis-for-rhel-6-server-rpms | 3.1 kB > 00:00 > > Setting up Install Process > > No package openstack-foreman-installer available. > > No package foreman-selinux available. 
> > Error: Nothing to do > Hi Chandra, it seems your question is not about RDO, a community supported distribution of Openstack, but RHEL-OSP, a commercially supported distribution sold and officially supported by Red Hat. I think you should ask for support from Red Hat. That's what you pay for. ;) /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTvsZgAAoJEC5aWaUY1u57RtoIAJjxAAtJSxTHCvQTQiCE3HsP M9meqXJG7ljZIA03/4ODt6Ru2jCtOLswN7rZRf3d0hnCsdA/DfCr/mclBBCFCnBQ GDYmHXI7fGtZzYepl3sjYcVKk3MxTmS6JnutD05ywJWcllCXR/QbaeRzqBfv7jpu fuuEeSnMpa6WLD499G1CHAMJZV4A89Pd0eV3N3RhSa6IR8Pl2i326cOu8NmtehtG KEcxJR3Gu4SKtOkgBr0N5NzHFfdBLjIovcmChPPMV/xnPJsReTXV/Ge9gdsYTfDk rdHqP0hUO04UyN7J1CWtKSiQ5HKeCptx2j6241fUDpp/pyw7PbWkvTPOzRrh+/U= =0i/7 -----END PGP SIGNATURE----- From pmyers at redhat.com Thu Jul 10 17:00:38 2014 From: pmyers at redhat.com (Perry Myers) Date: Thu, 10 Jul 2014 13:00:38 -0400 Subject: [Rdo-list] Need Help: openstack Repos Are Missing In-Reply-To: References: Message-ID: <53BEC6B6.6050405@redhat.com> On 07/10/2014 12:49 PM, Chandra Ganguly (ganguly) wrote: > Hi RedHat/Openstack Team > > I am trying to install foreman and I am seeing the following RPM > missing, which is causing my the download of my foreman-installer to > fail. Can somebody let me know what is the new openstack repo to get; I > am running it on RHEL6.5 I'm going to reply to this thread on rhos-list, that's probably a more suitable forum for product questions. Cheers, Perry > [root at foreman-server ~]# subscription-manager repos --enable > rhel-6-server-openstack-4.0-rpms > > Error: rhel-6-server-openstack-4.0-rpms is not a valid repo ID. Use > --list option to see valid repos. > > > root at foreman-server ~]# subscription-manager repos --list | grep openstack > > > > [root at foreman-server ~]# yum install openstack-foreman-installer > foreman-selinux > > Loaded plugins: priorities, product-id, security, subscription-manager > > This system is receiving updates from Red Hat Subscription Management. > > rhel-6-server-optional-rpms | 3.5 kB > 00:00 > > rhel-6-server-realtime-rpms | 3.8 kB > 00:00 > > rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-ha-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-hpn-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-lb-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-rs-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-sap-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-sap-hana-for-rhel-6-server-rpms | 2.8 kB > 00:00 > > rhel-scalefs-for-rhel-6-server-rpms | 3.7 kB > 00:00 > > rhel-server-6-rhds-9-rpms | 3.1 kB > 00:00 > > rhel-server-dts-6-rpms | 2.9 kB > 00:00 > > rhel-server-dts2-6-rpms | 2.6 kB > 00:00 > > rhel-sjis-for-rhel-6-server-rpms | 3.1 kB > 00:00 > > Setting up Install Process > > No package *openstack-foreman-installer* available. > > No package *foreman-selinux* available. > > Error: Nothing to do > > > > Thanks > > Chandra > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From ben42ml at gmail.com Fri Jul 11 09:57:12 2014 From: ben42ml at gmail.com (Benoit ML) Date: Fri, 11 Jul 2014 11:57:12 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" Message-ID: Hello, I'm working on a multi-node setup of openstack Icehouse using centos7. 
Well i have : - one controllor node with all server services thing stuff - one network node with openvswitch agent, l3-agent, dhcp-agent - two compute node with nova-compute and neutron-openvswitch - one storage nfs node NetworkManager is deleted on compute nodes and network node. My network use is configured to use vxlan. I can create VM, tenant-network, external-network, routeur, assign floating-ip to VM, push ssh-key into VM, create volume from glance image, etc... Evrything is conected and reacheable. Pretty cool :) But when i try to migrate VM things go wrong ... I have configured nova, libvirtd and qemu to use migration through libvirt-tcp. I have create and exchanged ssh-key for nova user on all node. I have verified userid and groupid of nova. Well nova-compute log, on the target compute node, : 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: Unauthorized {"error": {"m essage": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} So well after searching a lots in all logs, i have fount that i cant simply migration VM between compute node with a simple virsh : virsh migrate instance-00000084 qemu+tcp:///system The error is : erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device Well when i look on the source hyperviseur the bridge "qbr3ca65809" exists and have a network tap device. And moreover i manually create qbr3ca65809 on the target hypervisor, virsh migrate succed ! Can you help me plz ? What can i do wrong ? Perhpas neutron must create the bridge before migration but didnt for a mis configuration ? Plz ask anything you need ! Thank you in advance. The full nova-compute log attached. Regards, -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova-compute-error-migrate-20140711.log Type: text/x-log Size: 64056 bytes Desc: not available URL: From bderzhavets at hotmail.com Fri Jul 11 11:40:41 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 11 Jul 2014 07:40:41 -0400 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Could you please post /etc/redhat-release. Boris. Date: Fri, 11 Jul 2014 11:57:12 +0200 From: ben42ml at gmail.com To: rdo-list at redhat.com Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" Hello, I'm working on a multi-node setup of openstack Icehouse using centos7.Well i have : - one controllor node with all server services thing stuff - one network node with openvswitch agent, l3-agent, dhcp-agent - two compute node with nova-compute and neutron-openvswitch - one storage nfs node NetworkManager is deleted on compute nodes and network node. My network use is configured to use vxlan. I can create VM, tenant-network, external-network, routeur, assign floating-ip to VM, push ssh-key into VM, create volume from glance image, etc... Evrything is conected and reacheable. Pretty cool :) But when i try to migrate VM things go wrong ... I have configured nova, libvirtd and qemu to use migration through libvirt-tcp.I have create and exchanged ssh-key for nova user on all node. I have verified userid and groupid of nova. 
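For reference, the "migration through libvirt-tcp" setup described above usually comes down to a handful of settings on each compute node. The following is only a sketch using common Icehouse-era option names and default paths, not the exact configuration used in this thread:

/etc/libvirt/libvirtd.conf
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"        # no auth on the migration channel; use "sasl" if that is too open

/etc/sysconfig/libvirtd
    LIBVIRTD_ARGS="--listen"

/etc/nova/nova.conf (DEFAULT section)
    vncserver_listen=0.0.0.0
    live_migration_uri=qemu+tcp://%s/system        # %s is filled in with the target host name
    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

libvirtd has to be restarted on both compute nodes after changing libvirtd.conf, otherwise the qemu+tcp:// URI is refused.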
Well nova-compute log, on the target compute node, : 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: Unauthorized {"error": {"m essage": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} So well after searching a lots in all logs, i have fount that i cant simply migration VM between compute node with a simple virsh : virsh migrate instance-00000084 qemu+tcp:///system The error is : erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device Well when i look on the source hyperviseur the bridge "qbr3ca65809" exists and have a network tap device. And moreover i manually create qbr3ca65809 on the target hypervisor, virsh migrate succed ! Can you help me plz ?What can i do wrong ? Perhpas neutron must create the bridge before migration but didnt for a mis configuration ? Plz ask anything you need ! Thank you in advance. The full nova-compute log attached. Regards, -- -- Benoit _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Fri Jul 11 11:42:23 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 11 Jul 2014 07:42:23 -0400 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Could you please post /etc/redhat-release. Boris. Date: Fri, 11 Jul 2014 11:57:12 +0200 From: ben42ml at gmail.com To: rdo-list at redhat.com Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" Hello, I'm working on a multi-node setup of openstack Icehouse using centos7.Well i have : - one controllor node with all server services thing stuff - one network node with openvswitch agent, l3-agent, dhcp-agent - two compute node with nova-compute and neutron-openvswitch - one storage nfs node NetworkManager is deleted on compute nodes and network node. My network use is configured to use vxlan. I can create VM, tenant-network, external-network, routeur, assign floating-ip to VM, push ssh-key into VM, create volume from glance image, etc... Evrything is conected and reacheable. Pretty cool :) But when i try to migrate VM things go wrong ... I have configured nova, libvirtd and qemu to use migration through libvirt-tcp.I have create and exchanged ssh-key for nova user on all node. I have verified userid and groupid of nova. Well nova-compute log, on the target compute node, : 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: Unauthorized {"error": {"m essage": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} So well after searching a lots in all logs, i have fount that i cant simply migration VM between compute node with a simple virsh : virsh migrate instance-00000084 qemu+tcp:///system The error is : erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device Well when i look on the source hyperviseur the bridge "qbr3ca65809" exists and have a network tap device. And moreover i manually create qbr3ca65809 on the target hypervisor, virsh migrate succed ! Can you help me plz ?What can i do wrong ? Perhpas neutron must create the bridge before migration but didnt for a mis configuration ? 
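Two things in the trace and error above are worth separating. The 401 Unauthorized comes from nova itself calling the neutron API (migrate_instance_finish lists the instance's ports, as the full traceback further down shows), so it usually points at missing or wrong neutron credentials in nova.conf rather than at libvirt; the call surfaces through nova-conductor, so the controller's nova.conf matters as well as the compute nodes'. A sketch of the section to double-check, using the Icehouse option names; CONTROLLER and NEUTRON_PASS are placeholders, not values taken from this thread:

/etc/nova/nova.conf (DEFAULT section)
    network_api_class=nova.network.neutronv2.api.API
    neutron_url=http://CONTROLLER:9696
    neutron_auth_strategy=keystone
    neutron_admin_tenant_name=services
    neutron_admin_username=neutron
    neutron_admin_password=NEUTRON_PASS
    neutron_admin_auth_url=http://CONTROLLER:35357/v2.0

The missing qbrXXXX bridge on the target host is expected as long as the port has not been plugged there: with the hybrid OVS setup, nova-compute creates the qbr/qvb/qvo plumbing when it sets the instance up on that host, which is why creating the bridge by hand lets a bare virsh migrate go through. It can be checked with:

    brctl show          # lists the qbr bridges and their tap devices
    ovs-vsctl show      # the matching qvo ports should hang off br-int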
Plz ask anything you need ! Thank you in advance. The full nova-compute log attached. Regards, -- -- Benoit _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Fri Jul 11 11:46:10 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 11 Jul 2014 07:46:10 -0400 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Could you please post /etc/redhat-release Boris. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben42ml at gmail.com Fri Jul 11 12:41:45 2014 From: ben42ml at gmail.com (Benoit ML) Date: Fri, 11 Jul 2014 14:41:45 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Hello, cat /etc/redhat-release CentOS Linux release 7 (Rebuilt from: RHEL 7.0) Regards, 2014-07-11 13:40 GMT+02:00 Boris Derzhavets : > Could you please post /etc/redhat-release. > > Boris. > > ------------------------------ > Date: Fri, 11 Jul 2014 11:57:12 +0200 > From: ben42ml at gmail.com > To: rdo-list at redhat.com > Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed > because of "network qbr no such device" > > > Hello, > > I'm working on a multi-node setup of openstack Icehouse using centos7. > Well i have : > - one controllor node with all server services thing stuff > - one network node with openvswitch agent, l3-agent, dhcp-agent > - two compute node with nova-compute and neutron-openvswitch > - one storage nfs node > > NetworkManager is deleted on compute nodes and network node. > > My network use is configured to use vxlan. I can create VM, > tenant-network, external-network, routeur, assign floating-ip to VM, push > ssh-key into VM, create volume from glance image, etc... Evrything is > conected and reacheable. Pretty cool :) > > But when i try to migrate VM things go wrong ... I have configured nova, > libvirtd and qemu to use migration through libvirt-tcp. > I have create and exchanged ssh-key for nova user on all node. I have > verified userid and groupid of nova. > > Well nova-compute log, on the target compute node, : > 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: > a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: > Unauthorized {"error": {"m > essage": "The request you have made requires authentication.", "code": > 401, "title": "Unauthorized"}} > > > So well after searching a lots in all logs, i have fount that i cant > simply migration VM between compute node with a simple virsh : > virsh migrate instance-00000084 qemu+tcp:///system > > The error is : > erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device > > Well when i look on the source hyperviseur the bridge "qbr3ca65809" exists > and have a network tap device. And moreover i manually create qbr3ca65809 > on the target hypervisor, virsh migrate succed ! > > Can you help me plz ? > What can i do wrong ? Perhpas neutron must create the bridge before > migration but didnt for a mis configuration ? > > Plz ask anything you need ! > > Thank you in advance. > > > The full nova-compute log attached. 
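As a side note on reproducing this: since the failing step is a nova-driven migration/resize, it is usually easier to trigger it through the nova CLI with debug logging turned on than to call virsh by hand. A sketch with Icehouse-era client commands; the instance and host names are placeholders:

    # on a node with admin credentials loaded
    nova live-migration INSTANCE_UUID TARGET_COMPUTE_HOST
    # or, for a cold migration / resize-style move
    nova migrate INSTANCE_UUID

    # in /etc/nova/nova.conf on the compute nodes, to get full traces
    debug=True
    verbose=True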
> > > > > > > Regards, > > -- > -- > Benoit > > _______________________________________________ Rdo-list mailing list > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list > -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelangel at ajo.es Fri Jul 11 13:09:46 2014 From: miguelangel at ajo.es (Miguel Angel) Date: Fri, 11 Jul 2014 15:09:46 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Hi Benoit, A manual virsh migration should fail, because the network ports are not migrated to the destination host. You must investigate on the authentication problem itself, and let nova handle all the underlying API calls which should happen... May be it's worth setting nova.conf to debug=True --- irc: ajo / mangelajo Miguel Angel Ajo Pelayo +34 636 52 25 69 skype: ajoajoajo 2014-07-11 14:41 GMT+02:00 Benoit ML : > Hello, > > cat /etc/redhat-release > CentOS Linux release 7 (Rebuilt from: RHEL 7.0) > > > Regards, > > > 2014-07-11 13:40 GMT+02:00 Boris Derzhavets : > > Could you please post /etc/redhat-release. >> >> Boris. >> >> ------------------------------ >> Date: Fri, 11 Jul 2014 11:57:12 +0200 >> From: ben42ml at gmail.com >> To: rdo-list at redhat.com >> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed >> because of "network qbr no such device" >> >> >> Hello, >> >> I'm working on a multi-node setup of openstack Icehouse using centos7. >> Well i have : >> - one controllor node with all server services thing stuff >> - one network node with openvswitch agent, l3-agent, dhcp-agent >> - two compute node with nova-compute and neutron-openvswitch >> - one storage nfs node >> >> NetworkManager is deleted on compute nodes and network node. >> >> My network use is configured to use vxlan. I can create VM, >> tenant-network, external-network, routeur, assign floating-ip to VM, push >> ssh-key into VM, create volume from glance image, etc... Evrything is >> conected and reacheable. Pretty cool :) >> >> But when i try to migrate VM things go wrong ... I have configured >> nova, libvirtd and qemu to use migration through libvirt-tcp. >> I have create and exchanged ssh-key for nova user on all node. I have >> verified userid and groupid of nova. >> >> Well nova-compute log, on the target compute node, : >> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: >> a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >> Unauthorized {"error": {"m >> essage": "The request you have made requires authentication.", "code": >> 401, "title": "Unauthorized"}} >> >> >> So well after searching a lots in all logs, i have fount that i cant >> simply migration VM between compute node with a simple virsh : >> virsh migrate instance-00000084 qemu+tcp:///system >> >> The error is : >> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device >> >> Well when i look on the source hyperviseur the bridge "qbr3ca65809" >> exists and have a network tap device. And moreover i manually create >> qbr3ca65809 on the target hypervisor, virsh migrate succed ! >> >> Can you help me plz ? >> What can i do wrong ? Perhpas neutron must create the bridge before >> migration but didnt for a mis configuration ? >> >> Plz ask anything you need ! >> >> Thank you in advance. >> >> >> The full nova-compute log attached. 
>> >> >> >> >> >> >> Regards, >> >> -- >> -- >> Benoit >> >> _______________________________________________ Rdo-list mailing list >> Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list >> > > > > -- > -- > Benoit > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben42ml at gmail.com Fri Jul 11 13:43:41 2014 From: ben42ml at gmail.com (Benoit ML) Date: Fri, 11 Jul 2014 15:43:41 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Hello, Ok I see. Nova telles neutron/openvswitch to create the bridge qbr prior to the migration itself. I ve already activate debug and verbose ... But well i'm really stuck, dont know how and where to search/look ... Regards, 2014-07-11 15:09 GMT+02:00 Miguel Angel : > Hi Benoit, > > A manual virsh migration should fail, because the > network ports are not migrated to the destination host. > > You must investigate on the authentication problem itself, > and let nova handle all the underlying API calls which should happen... > > May be it's worth setting nova.conf to debug=True > > > > --- > irc: ajo / mangelajo > Miguel Angel Ajo Pelayo > +34 636 52 25 69 > skype: ajoajoajo > > > 2014-07-11 14:41 GMT+02:00 Benoit ML : > > Hello, >> >> cat /etc/redhat-release >> CentOS Linux release 7 (Rebuilt from: RHEL 7.0) >> >> >> Regards, >> >> >> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets : >> >> Could you please post /etc/redhat-release. >>> >>> Boris. >>> >>> ------------------------------ >>> Date: Fri, 11 Jul 2014 11:57:12 +0200 >>> From: ben42ml at gmail.com >>> To: rdo-list at redhat.com >>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration >>> failed because of "network qbr no such device" >>> >>> >>> Hello, >>> >>> I'm working on a multi-node setup of openstack Icehouse using centos7. >>> Well i have : >>> - one controllor node with all server services thing stuff >>> - one network node with openvswitch agent, l3-agent, dhcp-agent >>> - two compute node with nova-compute and neutron-openvswitch >>> - one storage nfs node >>> >>> NetworkManager is deleted on compute nodes and network node. >>> >>> My network use is configured to use vxlan. I can create VM, >>> tenant-network, external-network, routeur, assign floating-ip to VM, push >>> ssh-key into VM, create volume from glance image, etc... Evrything is >>> conected and reacheable. Pretty cool :) >>> >>> But when i try to migrate VM things go wrong ... I have configured >>> nova, libvirtd and qemu to use migration through libvirt-tcp. >>> I have create and exchanged ssh-key for nova user on all node. I have >>> verified userid and groupid of nova. 
>>> >>> Well nova-compute log, on the target compute node, : >>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: >>> a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >>> Unauthorized {"error": {"m >>> essage": "The request you have made requires authentication.", "code": >>> 401, "title": "Unauthorized"}} >>> >>> >>> So well after searching a lots in all logs, i have fount that i cant >>> simply migration VM between compute node with a simple virsh : >>> virsh migrate instance-00000084 qemu+tcp:///system >>> >>> The error is : >>> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device >>> >>> Well when i look on the source hyperviseur the bridge "qbr3ca65809" >>> exists and have a network tap device. And moreover i manually create >>> qbr3ca65809 on the target hypervisor, virsh migrate succed ! >>> >>> Can you help me plz ? >>> What can i do wrong ? Perhpas neutron must create the bridge before >>> migration but didnt for a mis configuration ? >>> >>> Plz ask anything you need ! >>> >>> Thank you in advance. >>> >>> >>> The full nova-compute log attached. >>> >>> >>> >>> >>> >>> >>> Regards, >>> >>> -- >>> -- >>> Benoit >>> >>> _______________________________________________ Rdo-list mailing list >>> Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list >>> >> >> >> >> -- >> -- >> Benoit >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> > -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- 2014-07-11 15:40:23.217 4930 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 14.48 seconds _inner /usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py:132 2014-07-11 15:40:25.688 4930 ERROR nova.compute.manager [req-87f5ecb5-56ef-4500-8b38-e3636eb25815 07b2987c884348418dbe3cdbb864e25c 5c9c186a909e499e9da0dd5cf2c403e0] [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] Setting instance vm_state to ERROR 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] Traceback (most recent call last): 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3547, in finish_resize 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] disk_info, image) 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3496, in _finish_resize 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] migration_p) 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 259, in network_migrate_instance_finish 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] migration) 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 391, in network_migrate_instance_finish 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] 
instance=instance_p, migration=migration_p) 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 150, in call 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] wait_for_reply=True, timeout=timeout) 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] timeout=timeout) 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] return self._send(target, ctxt, message, wait_for_reply, timeout) 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 405, in _send 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] raise result 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] RemoteError: Remote error: Unauthorized {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1024, in network_migrate_instance_finish\n migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 530, in network_migrate_instance_finish\n self.network_api.migrate_instance_finish(context, instance, migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1004, in migrate_instance_finish\n data = neutron.list_ports(**search_opts)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 111, in with_params\n ret = self.function(instance, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 306, in list_ports\n **_params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1250, in list\n for r in self._pagination(collection, path, **params):\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1263, in _pagination\n res = self.get(path, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1236, in get\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1221, in retry_request\n headers=headers, 
params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1150, in do_request\n self.httpclient.authenticate_and_fetch_endpoint_url()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 179, in authenticate_and_fetch_endpoint_url\n self.authenticate()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, in authenticate\n content_type="application/json")\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in _cs_request\n raise exceptions.Unauthorized(message=body)\n', u'Unauthorized: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\n']. 2014-07-11 15:40:25.688 4930 TRACE nova.compute.manager [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] 2014-07-11 15:40:25.781 4930 DEBUG nova.openstack.common.lockutils [req-87f5ecb5-56ef-4500-8b38-e3636eb25815 07b2987c884348418dbe3cdbb864e25c 5c9c186a909e499e9da0dd5cf2c403e0] Got semaphore "compute_resources" lock /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:168 2014-07-11 15:40:25.781 4930 DEBUG nova.openstack.common.lockutils [req-87f5ecb5-56ef-4500-8b38-e3636eb25815 07b2987c884348418dbe3cdbb864e25c 5c9c186a909e499e9da0dd5cf2c403e0] Got semaphore / lock "update_usage" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:248 2014-07-11 15:40:25.781 4930 DEBUG nova.openstack.common.lockutils [req-87f5ecb5-56ef-4500-8b38-e3636eb25815 07b2987c884348418dbe3cdbb864e25c 5c9c186a909e499e9da0dd5cf2c403e0] Semaphore / lock released "update_usage" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:252 2014-07-11 15:40:25.923 4930 DEBUG nova.openstack.common.lockutils [req-87f5ecb5-56ef-4500-8b38-e3636eb25815 07b2987c884348418dbe3cdbb864e25c 5c9c186a909e499e9da0dd5cf2c403e0] Got semaphore "compute_resources" lock /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:168 2014-07-11 15:40:25.923 4930 DEBUG nova.openstack.common.lockutils [req-87f5ecb5-56ef-4500-8b38-e3636eb25815 07b2987c884348418dbe3cdbb864e25c 5c9c186a909e499e9da0dd5cf2c403e0] Got semaphore / lock "update_usage" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:248 2014-07-11 15:40:25.923 4930 DEBUG nova.openstack.common.lockutils [req-87f5ecb5-56ef-4500-8b38-e3636eb25815 07b2987c884348418dbe3cdbb864e25c 5c9c186a909e499e9da0dd5cf2c403e0] Semaphore / lock released "update_usage" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:252 2014-07-11 15:40:25.925 4930 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Remote error: Unauthorized {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1024, in network_migrate_instance_finish\n migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 530, in 
network_migrate_instance_finish\n self.network_api.migrate_instance_finish(context, instance, migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1004, in migrate_instance_finish\n data = neutron.list_ports(**search_opts)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 111, in with_params\n ret = self.function(instance, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 306, in list_ports\n **_params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1250, in list\n for r in self._pagination(collection, path, **params):\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1263, in _pagination\n res = self.get(path, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1236, in get\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1221, in retry_request\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1150, in do_request\n self.httpclient.authenticate_and_fetch_endpoint_url()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 179, in authenticate_and_fetch_endpoint_url\n self.authenticate()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, in authenticate\n content_type="application/json")\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in _cs_request\n raise exceptions.Unauthorized(message=body)\n', u'Unauthorized: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\n']. 
2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher incoming.message)) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher payload) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 274, in decorated_function 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher pass 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 260, in decorated_function 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 327, in decorated_function 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher function(self, context, *args, **kwargs) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 248, in decorated_function 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher migration.instance_uuid, exc_info=True) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 235, in decorated_function 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs) 2014-07-11 15:40:25.925 4930 TRACE 
oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 303, in decorated_function 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher e, sys.exc_info()) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 290, in decorated_function 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3559, in finish_resize 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher self._set_instance_error_state(context, instance['uuid']) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3547, in finish_resize 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher disk_info, image) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3496, in _finish_resize 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher migration_p) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 259, in network_migrate_instance_finish 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher migration) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 391, in network_migrate_instance_finish 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher instance=instance_p, migration=migration_p) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 150, in call 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher wait_for_reply=True, timeout=timeout) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher timeout=timeout) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher return self._send(target, ctxt, message, wait_for_reply, timeout) 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 405, in _send 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher raise result 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher RemoteError: Remote error: Unauthorized {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": 
"Unauthorized"}} 2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1024, in network_migrate_instance_finish\n migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 530, in network_migrate_instance_finish\n self.network_api.migrate_instance_finish(context, instance, migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1004, in migrate_instance_finish\n data = neutron.list_ports(**search_opts)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 111, in with_params\n ret = self.function(instance, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 306, in list_ports\n **_params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1250, in list\n for r in self._pagination(collection, path, **params):\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1263, in _pagination\n res = self.get(path, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1236, in get\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1221, in retry_request\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1150, in do_request\n self.httpclient.authenticate_and_fetch_endpoint_url()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 179, in authenticate_and_fetch_endpoint_url\n self.authenticate()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, in authenticate\n content_type="application/json")\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in _cs_request\n raise exceptions.Unauthorized(message=body)\n', u'Unauthorized: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\n']. 
2014-07-11 15:40:25.925 4930 TRACE oslo.messaging.rpc.dispatcher 2014-07-11 15:40:25.927 4930 ERROR oslo.messaging._drivers.common [-] Returning exception Remote error: Unauthorized {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1024, in network_migrate_instance_finish\n migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 530, in network_migrate_instance_finish\n self.network_api.migrate_instance_finish(context, instance, migration)\n', u' File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1004, in migrate_instance_finish\n data = neutron.list_ports(**search_opts)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 111, in with_params\n ret = self.function(instance, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 306, in list_ports\n **_params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1250, in list\n for r in self._pagination(collection, path, **params):\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1263, in _pagination\n res = self.get(path, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1236, in get\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1221, in retry_request\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1150, in do_request\n self.httpclient.authenticate_and_fetch_endpoint_url()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 179, in authenticate_and_fetch_endpoint_url\n self.authenticate()\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, in authenticate\n content_type="application/json")\n', u' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in _cs_request\n raise exceptions.Unauthorized(message=body)\n', u'Unauthorized: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\n']. 
to caller 2014-07-11 15:40:25.927 4930 ERROR oslo.messaging._drivers.common [-] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', ' File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped\n payload)\n', ' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped\n return f(self, context, *args, **kw)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 274, in decorated_function\n pass\n', ' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 260, in decorated_function\n return function(self, context, *args, **kwargs)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 327, in decorated_function\n function(self, context, *args, **kwargs)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 248, in decorated_function\n migration.instance_uuid, exc_info=True)\n', ' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 235, in decorated_function\n return function(self, context, *args, **kwargs)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 303, in decorated_function\n e, sys.exc_info())\n', ' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 290, in decorated_function\n return function(self, context, *args, **kwargs)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3559, in finish_resize\n self._set_instance_error_state(context, instance[\'uuid\'])\n', ' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3547, in finish_resize\n disk_info, image)\n', ' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3496, in _finish_resize\n migration_p)\n', ' File "/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 259, in network_migrate_instance_finish\n migration)\n', ' File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 391, in network_migrate_instance_finish\n instance=instance_p, migration=migration_p)\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 150, in call\n wait_for_reply=True, timeout=timeout)\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send\n timeout=timeout)\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in 
send\n return self._send(target, ctxt, message, wait_for_reply, timeout)\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 405, in _send\n raise result\n', 'RemoteError: Remote error: Unauthorized {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\n[u\'Traceback (most recent call last):\\n\', u\' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\\n incoming.message))\\n\', u\' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\\n return self._do_dispatch(endpoint, method, ctxt, args)\\n\', u\' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\\n result = getattr(endpoint, method)(ctxt, **new_args)\\n\', u\' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1024, in network_migrate_instance_finish\\n migration)\\n\', u\' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 530, in network_migrate_instance_finish\\n self.network_api.migrate_instance_finish(context, instance, migration)\\n\', u\' File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1004, in migrate_instance_finish\\n data = neutron.list_ports(**search_opts)\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 111, in with_params\\n ret = self.function(instance, *args, **kwargs)\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 306, in list_ports\\n **_params)\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1250, in list\\n for r in self._pagination(collection, path, **params):\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1263, in _pagination\\n res = self.get(path, params=params)\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1236, in get\\n headers=headers, params=params)\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1221, in retry_request\\n headers=headers, params=params)\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1150, in do_request\\n self.httpclient.authenticate_and_fetch_endpoint_url()\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 179, in authenticate_and_fetch_endpoint_url\\n self.authenticate()\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, in authenticate\\n content_type="application/json")\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\', u\'Unauthorized: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n'] 2014-07-11 15:40:37.695 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:37.695 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:37.695 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager.update_available_resource 
run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:37.696 4930 DEBUG nova.openstack.common.lockutils [-] Got semaphore "compute_resources" lock /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:168 2014-07-11 15:40:37.696 4930 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "update_available_resource" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:248 2014-07-11 15:40:37.696 4930 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2014-07-11 15:40:37.696 4930 DEBUG nova.virt.libvirt.driver [-] Updating host stats update_status /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5247 2014-07-11 15:40:38.426 4930 WARNING nova.virt.libvirt.driver [-] Periodic task is updating the host stat, it is trying to get disk instance-00000003, but disk file was removed by concurrent operations such as resize. 2014-07-11 15:40:42.215 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 383702 _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:409 2014-07-11 15:40:42.215 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 2636 _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:410 2014-07-11 15:40:42.215 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 32 _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:415 2014-07-11 15:40:42.215 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: assignable PCI devices: [] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:422 2014-07-11 15:40:42.269 4930 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 385628 2014-07-11 15:40:42.270 4930 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 2635 2014-07-11 15:40:42.270 4930 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 31 2014-07-11 15:40:42.299 4930 INFO nova.compute.resource_tracker [-] Compute_service record updated for pvidgsh006.pvi:pvidgsh006.pvi 2014-07-11 15:40:42.299 4930 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "update_available_resource" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:252 2014-07-11 15:40:42.324 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:42.324 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:42.324 4930 DEBUG nova.compute.manager [-] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:5364 2014-07-11 15:40:42.324 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:42.325 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:42.325 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:42.325 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:40:42.325 4930 DEBUG nova.compute.manager [-] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:4789 2014-07-11 15:40:42.325 4930 DEBUG nova.compute.manager [-] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:4793 2014-07-11 15:40:42.350 4930 DEBUG nova.objects.instance [-] Lazy-loading `system_metadata' on Instance uuid 779caa7e-a32e-4437-be15-447fad1e4d12 obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:519 2014-07-11 15:40:42.415 4930 DEBUG nova.network.neutronv2.api [-] get_instance_nw_info() for test2 _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:479 2014-07-11 15:40:42.415 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/ports.json?tenant_id=5c9c186a909e499e9da0dd5cf2c403e0&device_id=779caa7e-a32e-4437-be15-447fad1e4d12 -X GET -H "X-Auth-Token: 
[X-Auth-Token value elided -- the identical PKI token appears in full in the adjacent requests]" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:40:42.509 4930 DEBUG neutronclient.client [-] 
RESP:{'status': '200', 'content-length': '651', 'content-location': 'http://pvidgsh105:9696/v2.0/ports.json?tenant_id=5c9c186a909e499e9da0dd5cf2c403e0&device_id=779caa7e-a32e-4437-be15-447fad1e4d12', 'date': 'Fri, 11 Jul 2014 13:40:42 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-848243b2-deb2-46df-8f6d-8432fccc5d10'} {"ports": [{"status": "ACTIVE", "binding:host_id": "pvidgsh005.pvi", "name": "", "admin_state_up": true, "network_id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "extra_dhcp_opts": [], "binding:vif_details": {"port_filter": true, "ovs_hybrid_plug": true}, "binding:vif_type": "ovs", "device_owner": "compute:None", "mac_address": "fa:16:3e:b2:b4:8e", "binding:profile": {}, "binding:vnic_type": "normal", "fixed_ips": [{"subnet_id": "6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824", "ip_address": "192.168.42.2"}], "id": "deace7d3-bd51-4177-97a2-6b3ad0f75337", "device_id": "779caa7e-a32e-4437-be15-447fad1e4d12"}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:40:42.509 4930 DEBUG nova.objects.instance [-] Lazy-loading `info_cache' on Instance uuid 779caa7e-a32e-4437-be15-447fad1e4d12 obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:519 2014-07-11 15:40:42.573 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/networks.json?id=b7b1ddfb-79d4-4e77-9820-3e623d622591 -X GET -H "X-Auth-Token: MIIJqwYJKoZIhvcNAQcCoIIJnDCCCZgCAQExCTAHBgUrDgMCGjCCCAEGCSqGSIb3DQEHAaCCB-IEggfueyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNy0xMVQxMzozNjoyMy41NTA3NDEiLCAiZXhwaXJlcyI6ICIyMDE0LTA3LTExVDE0OjM2OjIzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlNlcnZpY2VzIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImYyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgIm5hbWUiOiAic2VydmljZXMifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjg3NzYvdjEvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgImlkIjogIjEyMmNkOGFjNTI5YTQyZjFhNDkwMzc1Mjk5ZDcyNWRkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIiwgImlkIjogIjNiZTQzNTQ1N2E1MjRmNDBiOTQ0NGE1N2MyM2FkMTdlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6ODc3NC92Mi9mMjNlZDViZTVmNTM0ZmRiYTMxZDIzZjYwNjIxMzQ3ZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAiaWQiOiAiMDNmMzMwNzJhN2RjNDQxMDgyMzI5ZTJkZjkyNzQzNTAiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogImNvbXB1dGUifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiIsICJpZCI6ICI1N2FhNTA4MWRhNjU0YjFiOTFiNmQ5NGRhYmQ3OWVmOCIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFt
dLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo1MDAwL3YyLjAiLCAiaWQiOiAiMGZiZGU2MzljNDQ2NGNmYzljYjc3Y2UyYTUyYTc5MjgiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAibmV1dHJvbiIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiMzJmM2M2ODZjODc0NDUyZWI0Y2NjNWM1OGU4ZDMzNjUiLCAicm9sZXMiOiBbeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJuZXV0cm9uIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjZlZmU4MjcxNGRhZjQwMzNiZWJmMmNkODI2YmQxY2I5Il19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAttg4HVpQIV16Yvl+2PfDZ+piry5+UTVr875ye8gEeXLUIal6cUQ-whZCCEXzYmKbphTE3iSHP25VprZYdqLThCGAWGOWEQ2rK1pCz5W3kKVlcGxpY3742zswzSlQDw5tn4LOCeKZV0RczIaTqCzE65M5qD01x0QhGhpYhWDr5njx-gCDW6nSyNlvMl39qnPfgRcU1lAifFnuVm9yhikEmp4TqaH30+vf1z4Cfowxnm5U5HsxLJjGKT8uD9PPq0w2aI3hnNIsBA1Ia2z1h4MT6jG-umSUUXu+az69L8blKSNbqnVwyQBlRuZwWRG3HzDeqyFyVrYFdg3eB3LVxzaGcQ==" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:40:42.599 4930 DEBUG neutronclient.client [-] RESP:{'status': '200', 'content-length': '373', 'content-location': 'http://pvidgsh105:9696/v2.0/networks.json?id=b7b1ddfb-79d4-4e77-9820-3e623d622591', 'date': 'Fri, 11 Jul 2014 13:40:42 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-61693cc8-73db-4aba-97f7-b5a908466b7c'} {"networks": [{"status": "ACTIVE", "subnets": ["6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824"], "name": "netadm00", "provider:physical_network": null, "admin_state_up": true, "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "provider:network_type": "vxlan", "router:external": false, "shared": false, "id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "provider:segmentation_id": 100}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:40:42.600 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/floatingips.json?fixed_ip_address=192.168.42.2&port_id=deace7d3-bd51-4177-97a2-6b3ad0f75337 -X GET -H "X-Auth-Token: 
[X-Auth-Token value elided -- the identical PKI token appears in full in the adjacent requests]" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:40:42.612 4930 DEBUG neutronclient.client [-] 
RESP:{'status': '200', 'content-length': '19', 'content-location': 'http://pvidgsh105:9696/v2.0/floatingips.json?fixed_ip_address=192.168.42.2&port_id=deace7d3-bd51-4177-97a2-6b3ad0f75337', 'date': 'Fri, 11 Jul 2014 13:40:42 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-8fd0da75-be10-4d8b-93da-97cc9449f970'} {"floatingips": []} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:40:42.613 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/subnets.json?id=6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824 -X GET -H "X-Auth-Token: MIIJqwYJKoZIhvcNAQcCoIIJnDCCCZgCAQExCTAHBgUrDgMCGjCCCAEGCSqGSIb3DQEHAaCCB-IEggfueyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNy0xMVQxMzozNjoyMy41NTA3NDEiLCAiZXhwaXJlcyI6ICIyMDE0LTA3LTExVDE0OjM2OjIzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlNlcnZpY2VzIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImYyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgIm5hbWUiOiAic2VydmljZXMifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjg3NzYvdjEvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgImlkIjogIjEyMmNkOGFjNTI5YTQyZjFhNDkwMzc1Mjk5ZDcyNWRkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIiwgImlkIjogIjNiZTQzNTQ1N2E1MjRmNDBiOTQ0NGE1N2MyM2FkMTdlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6ODc3NC92Mi9mMjNlZDViZTVmNTM0ZmRiYTMxZDIzZjYwNjIxMzQ3ZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAiaWQiOiAiMDNmMzMwNzJhN2RjNDQxMDgyMzI5ZTJkZjkyNzQzNTAiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogImNvbXB1dGUifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiIsICJpZCI6ICI1N2FhNTA4MWRhNjU0YjFiOTFiNmQ5NGRhYmQ3OWVmOCIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo1MDAwL3YyLjAiLCAiaWQiOiAiMGZiZGU2MzljNDQ2NGNmYzljYjc3Y2UyYTUyYTc5MjgiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAibmV1dHJvbiIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiMzJmM2M2ODZjODc0NDUyZWI0Y2NjNWM1OGU4ZDMzNjUiLCAicm9sZXMiOiBbeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJuZXV0cm9uIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjZlZmU4MjcxNGRhZjQwMzNiZWJmMmNkODI2YmQxY2I5Il19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20
CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAttg4HVpQIV16Yvl+2PfDZ+piry5+UTVr875ye8gEeXLUIal6cUQ-whZCCEXzYmKbphTE3iSHP25VprZYdqLThCGAWGOWEQ2rK1pCz5W3kKVlcGxpY3742zswzSlQDw5tn4LOCeKZV0RczIaTqCzE65M5qD01x0QhGhpYhWDr5njx-gCDW6nSyNlvMl39qnPfgRcU1lAifFnuVm9yhikEmp4TqaH30+vf1z4Cfowxnm5U5HsxLJjGKT8uD9PPq0w2aI3hnNIsBA1Ia2z1h4MT6jG-umSUUXu+az69L8blKSNbqnVwyQBlRuZwWRG3HzDeqyFyVrYFdg3eB3LVxzaGcQ==" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:40:42.634 4930 DEBUG neutronclient.client [-] RESP:{'status': '200', 'content-length': '396', 'content-location': 'http://pvidgsh105:9696/v2.0/subnets.json?id=6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824', 'date': 'Fri, 11 Jul 2014 13:40:42 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-6c4de00b-81cd-4efc-a55d-cf2d9619b70e'} {"subnets": [{"name": "subnetadm", "enable_dhcp": true, "network_id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "dns_nameservers": [], "allocation_pools": [{"start": "192.168.42.2", "end": "192.168.42.254"}], "host_routes": [], "ip_version": 4, "gateway_ip": "192.168.42.1", "cidr": "192.168.42.0/24", "id": "6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824"}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:40:42.635 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/ports.json?network_id=b7b1ddfb-79d4-4e77-9820-3e623d622591&device_owner=network%3Adhcp -X GET -H "X-Auth-Token: MIIJqwYJKoZIhvcNAQcCoIIJnDCCCZgCAQExCTAHBgUrDgMCGjCCCAEGCSqGSIb3DQEHAaCCB-IEggfueyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNy0xMVQxMzozNjoyMy41NTA3NDEiLCAiZXhwaXJlcyI6ICIyMDE0LTA3LTExVDE0OjM2OjIzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlNlcnZpY2VzIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImYyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgIm5hbWUiOiAic2VydmljZXMifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjg3NzYvdjEvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgImlkIjogIjEyMmNkOGFjNTI5YTQyZjFhNDkwMzc1Mjk5ZDcyNWRkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIiwgImlkIjogIjNiZTQzNTQ1N2E1MjRmNDBiOTQ0NGE1N2MyM2FkMTdlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6ODc3NC92Mi9mMjNlZDViZTVmNTM0ZmRiYTMxZDIzZjYwNjIxMzQ3ZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAiaWQiOiAiMDNmMzMwNzJhN2RjNDQxMDgyMzI5ZTJkZjkyNzQzNTAiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogImNvbXB1dGUifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY
5NiIsICJpZCI6ICI1N2FhNTA4MWRhNjU0YjFiOTFiNmQ5NGRhYmQ3OWVmOCIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo1MDAwL3YyLjAiLCAiaWQiOiAiMGZiZGU2MzljNDQ2NGNmYzljYjc3Y2UyYTUyYTc5MjgiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAibmV1dHJvbiIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiMzJmM2M2ODZjODc0NDUyZWI0Y2NjNWM1OGU4ZDMzNjUiLCAicm9sZXMiOiBbeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJuZXV0cm9uIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjZlZmU4MjcxNGRhZjQwMzNiZWJmMmNkODI2YmQxY2I5Il19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAttg4HVpQIV16Yvl+2PfDZ+piry5+UTVr875ye8gEeXLUIal6cUQ-whZCCEXzYmKbphTE3iSHP25VprZYdqLThCGAWGOWEQ2rK1pCz5W3kKVlcGxpY3742zswzSlQDw5tn4LOCeKZV0RczIaTqCzE65M5qD01x0QhGhpYhWDr5njx-gCDW6nSyNlvMl39qnPfgRcU1lAifFnuVm9yhikEmp4TqaH30+vf1z4Cfowxnm5U5HsxLJjGKT8uD9PPq0w2aI3hnNIsBA1Ia2z1h4MT6jG-umSUUXu+az69L8blKSNbqnVwyQBlRuZwWRG3HzDeqyFyVrYFdg3eB3LVxzaGcQ==" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:40:42.658 4930 DEBUG neutronclient.client [-] RESP:{'status': '200', 'content-length': '692', 'content-location': 'http://pvidgsh105:9696/v2.0/ports.json?network_id=b7b1ddfb-79d4-4e77-9820-3e623d622591&device_owner=network%3Adhcp', 'date': 'Fri, 11 Jul 2014 13:40:42 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-0be95549-0f9b-4f12-b45a-f323ea505644'} {"ports": [{"status": "ACTIVE", "binding:host_id": "pvidgsi002.pvi", "name": "", "admin_state_up": true, "network_id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "extra_dhcp_opts": [], "binding:vif_details": {"port_filter": true, "ovs_hybrid_plug": true}, "binding:vif_type": "ovs", "device_owner": "network:dhcp", "mac_address": "fa:16:3e:7a:d4:7a", "binding:profile": {}, "binding:vnic_type": "normal", "fixed_ips": [{"subnet_id": "6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824", "ip_address": "192.168.42.3"}], "id": "e147dbae-2d1d-4efb-8c56-41672a1ae890", "device_id": "dhcp9c5f705c-9856-569f-9b1d-06e8515920ea-b7b1ddfb-79d4-4e77-9820-3e623d622591"}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:40:42.659 4930 DEBUG nova.network.api [-] Updating cache with info: [VIF({'ovs_interfaceid': u'deace7d3-bd51-4177-97a2-6b3ad0f75337', 'network': Network({'bridge': 'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'192.168.42.2'})], 'version': 4, 'meta': {'dhcp_server': u'192.168.42.3'}, 'dns': [], 'routes': [], 'cidr': u'192.168.42.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'192.168.42.1'})})], 'meta': {'injected': False, 'tenant_id': u'5c9c186a909e499e9da0dd5cf2c403e0'}, 'id': u'b7b1ddfb-79d4-4e77-9820-3e623d622591', 'label': u'netadm00'}), 'devname': u'tapdeace7d3-bd', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 
'address': u'fa:16:3e:b2:b4:8e', 'active': True, 'type': u'ovs', 'id': u'deace7d3-bd51-4177-97a2-6b3ad0f75337', 'qbg_params': None})] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/api.py:75 2014-07-11 15:40:42.672 4930 DEBUG nova.compute.manager [-] [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:4850 2014-07-11 15:40:42.672 4930 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py:132 2014-07-11 15:41:42.673 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:42.673 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:42.674 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:42.674 4930 DEBUG nova.openstack.common.lockutils [-] Got semaphore "compute_resources" lock /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:168 2014-07-11 15:41:42.674 4930 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "update_available_resource" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:248 2014-07-11 15:41:42.675 4930 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2014-07-11 15:41:42.675 4930 DEBUG nova.virt.libvirt.driver [-] Updating host stats update_status /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5247 2014-07-11 15:41:43.294 4930 WARNING nova.virt.libvirt.driver [-] Periodic task is updating the host stat, it is trying to get disk instance-00000003, but disk file was removed by concurrent operations such as resize. 
2014-07-11 15:41:47.079 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 383704 _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:409 2014-07-11 15:41:47.079 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 2636 _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:410 2014-07-11 15:41:47.079 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 32 _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:415 2014-07-11 15:41:47.079 4930 DEBUG nova.compute.resource_tracker [-] Hypervisor: assignable PCI devices: [] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:422 2014-07-11 15:41:47.140 4930 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 385628 2014-07-11 15:41:47.140 4930 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 2635 2014-07-11 15:41:47.140 4930 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 31 2014-07-11 15:41:47.168 4930 INFO nova.compute.resource_tracker [-] Compute_service record updated for pvidgsh006.pvi:pvidgsh006.pvi 2014-07-11 15:41:47.168 4930 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "update_available_resource" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:252 2014-07-11 15:41:47.194 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:47.195 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:47.195 4930 DEBUG nova.compute.manager [-] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:5364 2014-07-11 15:41:47.195 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:47.195 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:47.196 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:47.196 4930 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:178 2014-07-11 15:41:47.196 4930 DEBUG nova.compute.manager [-] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:4789 2014-07-11 15:41:47.196 4930 DEBUG nova.compute.manager [-] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:4793 2014-07-11 15:41:47.222 4930 DEBUG nova.objects.instance [-] Lazy-loading `system_metadata' on Instance uuid 779caa7e-a32e-4437-be15-447fad1e4d12 obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:519 2014-07-11 15:41:47.293 4930 DEBUG nova.network.neutronv2.api [-] get_instance_nw_info() for test2 _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:479 2014-07-11 15:41:47.293 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/ports.json?tenant_id=5c9c186a909e499e9da0dd5cf2c403e0&device_id=779caa7e-a32e-4437-be15-447fad1e4d12 -X GET -H "X-Auth-Token: 
[X-Auth-Token value elided -- the identical PKI token appears in full in the adjacent requests]" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:41:47.384 4930 DEBUG neutronclient.client [-] 
RESP:{'status': '200', 'content-length': '651', 'content-location': 'http://pvidgsh105:9696/v2.0/ports.json?tenant_id=5c9c186a909e499e9da0dd5cf2c403e0&device_id=779caa7e-a32e-4437-be15-447fad1e4d12', 'date': 'Fri, 11 Jul 2014 13:41:47 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-cb101b3a-beba-4432-8ce3-81bd9644e284'} {"ports": [{"status": "ACTIVE", "binding:host_id": "pvidgsh005.pvi", "name": "", "admin_state_up": true, "network_id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "extra_dhcp_opts": [], "binding:vif_details": {"port_filter": true, "ovs_hybrid_plug": true}, "binding:vif_type": "ovs", "device_owner": "compute:None", "mac_address": "fa:16:3e:b2:b4:8e", "binding:profile": {}, "binding:vnic_type": "normal", "fixed_ips": [{"subnet_id": "6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824", "ip_address": "192.168.42.2"}], "id": "deace7d3-bd51-4177-97a2-6b3ad0f75337", "device_id": "779caa7e-a32e-4437-be15-447fad1e4d12"}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:41:47.384 4930 DEBUG nova.objects.instance [-] Lazy-loading `info_cache' on Instance uuid 779caa7e-a32e-4437-be15-447fad1e4d12 obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:519 2014-07-11 15:41:47.441 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/networks.json?id=b7b1ddfb-79d4-4e77-9820-3e623d622591 -X GET -H "X-Auth-Token: MIIJqwYJKoZIhvcNAQcCoIIJnDCCCZgCAQExCTAHBgUrDgMCGjCCCAEGCSqGSIb3DQEHAaCCB-IEggfueyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNy0xMVQxMzozNjoyMy41NTA3NDEiLCAiZXhwaXJlcyI6ICIyMDE0LTA3LTExVDE0OjM2OjIzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlNlcnZpY2VzIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImYyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgIm5hbWUiOiAic2VydmljZXMifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjg3NzYvdjEvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgImlkIjogIjEyMmNkOGFjNTI5YTQyZjFhNDkwMzc1Mjk5ZDcyNWRkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIiwgImlkIjogIjNiZTQzNTQ1N2E1MjRmNDBiOTQ0NGE1N2MyM2FkMTdlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6ODc3NC92Mi9mMjNlZDViZTVmNTM0ZmRiYTMxZDIzZjYwNjIxMzQ3ZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAiaWQiOiAiMDNmMzMwNzJhN2RjNDQxMDgyMzI5ZTJkZjkyNzQzNTAiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogImNvbXB1dGUifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiIsICJpZCI6ICI1N2FhNTA4MWRhNjU0YjFiOTFiNmQ5NGRhYmQ3OWVmOCIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFt
dLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo1MDAwL3YyLjAiLCAiaWQiOiAiMGZiZGU2MzljNDQ2NGNmYzljYjc3Y2UyYTUyYTc5MjgiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAibmV1dHJvbiIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiMzJmM2M2ODZjODc0NDUyZWI0Y2NjNWM1OGU4ZDMzNjUiLCAicm9sZXMiOiBbeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJuZXV0cm9uIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjZlZmU4MjcxNGRhZjQwMzNiZWJmMmNkODI2YmQxY2I5Il19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAttg4HVpQIV16Yvl+2PfDZ+piry5+UTVr875ye8gEeXLUIal6cUQ-whZCCEXzYmKbphTE3iSHP25VprZYdqLThCGAWGOWEQ2rK1pCz5W3kKVlcGxpY3742zswzSlQDw5tn4LOCeKZV0RczIaTqCzE65M5qD01x0QhGhpYhWDr5njx-gCDW6nSyNlvMl39qnPfgRcU1lAifFnuVm9yhikEmp4TqaH30+vf1z4Cfowxnm5U5HsxLJjGKT8uD9PPq0w2aI3hnNIsBA1Ia2z1h4MT6jG-umSUUXu+az69L8blKSNbqnVwyQBlRuZwWRG3HzDeqyFyVrYFdg3eB3LVxzaGcQ==" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:41:47.467 4930 DEBUG neutronclient.client [-] RESP:{'status': '200', 'content-length': '373', 'content-location': 'http://pvidgsh105:9696/v2.0/networks.json?id=b7b1ddfb-79d4-4e77-9820-3e623d622591', 'date': 'Fri, 11 Jul 2014 13:41:47 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-40f7e34f-d5e3-48df-ab91-220d1eb7d7fe'} {"networks": [{"status": "ACTIVE", "subnets": ["6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824"], "name": "netadm00", "provider:physical_network": null, "admin_state_up": true, "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "provider:network_type": "vxlan", "router:external": false, "shared": false, "id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "provider:segmentation_id": 100}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:41:47.468 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/floatingips.json?fixed_ip_address=192.168.42.2&port_id=deace7d3-bd51-4177-97a2-6b3ad0f75337 -X GET -H "X-Auth-Token: 
[X-Auth-Token value elided -- the identical PKI token appears in full in the adjacent requests]" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:41:47.481 4930 DEBUG neutronclient.client [-] 
RESP:{'status': '200', 'content-length': '19', 'content-location': 'http://pvidgsh105:9696/v2.0/floatingips.json?fixed_ip_address=192.168.42.2&port_id=deace7d3-bd51-4177-97a2-6b3ad0f75337', 'date': 'Fri, 11 Jul 2014 13:41:47 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-588f3a02-a4e8-4359-896c-5e15c13139ae'} {"floatingips": []} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:41:47.481 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/subnets.json?id=6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824 -X GET -H "X-Auth-Token: MIIJqwYJKoZIhvcNAQcCoIIJnDCCCZgCAQExCTAHBgUrDgMCGjCCCAEGCSqGSIb3DQEHAaCCB-IEggfueyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNy0xMVQxMzozNjoyMy41NTA3NDEiLCAiZXhwaXJlcyI6ICIyMDE0LTA3LTExVDE0OjM2OjIzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlNlcnZpY2VzIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImYyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgIm5hbWUiOiAic2VydmljZXMifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjg3NzYvdjEvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgImlkIjogIjEyMmNkOGFjNTI5YTQyZjFhNDkwMzc1Mjk5ZDcyNWRkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIiwgImlkIjogIjNiZTQzNTQ1N2E1MjRmNDBiOTQ0NGE1N2MyM2FkMTdlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6ODc3NC92Mi9mMjNlZDViZTVmNTM0ZmRiYTMxZDIzZjYwNjIxMzQ3ZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAiaWQiOiAiMDNmMzMwNzJhN2RjNDQxMDgyMzI5ZTJkZjkyNzQzNTAiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogImNvbXB1dGUifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiIsICJpZCI6ICI1N2FhNTA4MWRhNjU0YjFiOTFiNmQ5NGRhYmQ3OWVmOCIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo1MDAwL3YyLjAiLCAiaWQiOiAiMGZiZGU2MzljNDQ2NGNmYzljYjc3Y2UyYTUyYTc5MjgiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAibmV1dHJvbiIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiMzJmM2M2ODZjODc0NDUyZWI0Y2NjNWM1OGU4ZDMzNjUiLCAicm9sZXMiOiBbeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJuZXV0cm9uIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjZlZmU4MjcxNGRhZjQwMzNiZWJmMmNkODI2YmQxY2I5Il19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20
CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAttg4HVpQIV16Yvl+2PfDZ+piry5+UTVr875ye8gEeXLUIal6cUQ-whZCCEXzYmKbphTE3iSHP25VprZYdqLThCGAWGOWEQ2rK1pCz5W3kKVlcGxpY3742zswzSlQDw5tn4LOCeKZV0RczIaTqCzE65M5qD01x0QhGhpYhWDr5njx-gCDW6nSyNlvMl39qnPfgRcU1lAifFnuVm9yhikEmp4TqaH30+vf1z4Cfowxnm5U5HsxLJjGKT8uD9PPq0w2aI3hnNIsBA1Ia2z1h4MT6jG-umSUUXu+az69L8blKSNbqnVwyQBlRuZwWRG3HzDeqyFyVrYFdg3eB3LVxzaGcQ==" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:41:47.509 4930 DEBUG neutronclient.client [-] RESP:{'status': '200', 'content-length': '396', 'content-location': 'http://pvidgsh105:9696/v2.0/subnets.json?id=6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824', 'date': 'Fri, 11 Jul 2014 13:41:47 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-57317b23-9962-4582-a2ba-f5a9eb96f84c'} {"subnets": [{"name": "subnetadm", "enable_dhcp": true, "network_id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "dns_nameservers": [], "allocation_pools": [{"start": "192.168.42.2", "end": "192.168.42.254"}], "host_routes": [], "ip_version": 4, "gateway_ip": "192.168.42.1", "cidr": "192.168.42.0/24", "id": "6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824"}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:41:47.509 4930 DEBUG neutronclient.client [-] REQ: curl -i http://pvidgsh105:9696/v2.0/ports.json?network_id=b7b1ddfb-79d4-4e77-9820-3e623d622591&device_owner=network%3Adhcp -X GET -H "X-Auth-Token: MIIJqwYJKoZIhvcNAQcCoIIJnDCCCZgCAQExCTAHBgUrDgMCGjCCCAEGCSqGSIb3DQEHAaCCB-IEggfueyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNy0xMVQxMzozNjoyMy41NTA3NDEiLCAiZXhwaXJlcyI6ICIyMDE0LTA3LTExVDE0OjM2OjIzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlNlcnZpY2VzIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImYyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgIm5hbWUiOiAic2VydmljZXMifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjg3NzYvdjEvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIiwgImlkIjogIjEyMmNkOGFjNTI5YTQyZjFhNDkwMzc1Mjk5ZDcyNWRkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo4Nzc2L3YxL2YyM2VkNWJlNWY1MzRmZGJhMzFkMjNmNjA2MjEzNDdkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMDAxOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIiwgImlkIjogIjNiZTQzNTQ1N2E1MjRmNDBiOTQ0NGE1N2MyM2FkMTdlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vcHZpZGdzaDAwMTo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6ODc3NC92Mi9mMjNlZDViZTVmNTM0ZmRiYTMxZDIzZjYwNjIxMzQ3ZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QiLCAiaWQiOiAiMDNmMzMwNzJhN2RjNDQxMDgyMzI5ZTJkZjkyNzQzNTAiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1Ojg3NzQvdjIvZjIzZWQ1YmU1ZjUzNGZkYmEzMWQyM2Y2MDYyMTM0N2QifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogImNvbXB1dGUifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY
5NiIsICJpZCI6ICI1N2FhNTA4MWRhNjU0YjFiOTFiNmQ5NGRhYmQ3OWVmOCIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3B2aWRnc2gxMDU6OTY5NiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vcHZpZGdzaDEwNTo1MDAwL3YyLjAiLCAiaWQiOiAiMGZiZGU2MzljNDQ2NGNmYzljYjc3Y2UyYTUyYTc5MjgiLCAicHVibGljVVJMIjogImh0dHA6Ly9wdmlkZ3NoMTA1OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAibmV1dHJvbiIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiMzJmM2M2ODZjODc0NDUyZWI0Y2NjNWM1OGU4ZDMzNjUiLCAicm9sZXMiOiBbeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJuZXV0cm9uIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjZlZmU4MjcxNGRhZjQwMzNiZWJmMmNkODI2YmQxY2I5Il19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAttg4HVpQIV16Yvl+2PfDZ+piry5+UTVr875ye8gEeXLUIal6cUQ-whZCCEXzYmKbphTE3iSHP25VprZYdqLThCGAWGOWEQ2rK1pCz5W3kKVlcGxpY3742zswzSlQDw5tn4LOCeKZV0RczIaTqCzE65M5qD01x0QhGhpYhWDr5njx-gCDW6nSyNlvMl39qnPfgRcU1lAifFnuVm9yhikEmp4TqaH30+vf1z4Cfowxnm5U5HsxLJjGKT8uD9PPq0w2aI3hnNIsBA1Ia2z1h4MT6jG-umSUUXu+az69L8blKSNbqnVwyQBlRuZwWRG3HzDeqyFyVrYFdg3eB3LVxzaGcQ==" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" http_log_req /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173 2014-07-11 15:41:47.532 4930 DEBUG neutronclient.client [-] RESP:{'status': '200', 'content-length': '692', 'content-location': 'http://pvidgsh105:9696/v2.0/ports.json?network_id=b7b1ddfb-79d4-4e77-9820-3e623d622591&device_owner=network%3Adhcp', 'date': 'Fri, 11 Jul 2014 13:41:47 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-229ca01d-2860-4e60-9019-dba23f10af07'} {"ports": [{"status": "ACTIVE", "binding:host_id": "pvidgsi002.pvi", "name": "", "admin_state_up": true, "network_id": "b7b1ddfb-79d4-4e77-9820-3e623d622591", "tenant_id": "5c9c186a909e499e9da0dd5cf2c403e0", "extra_dhcp_opts": [], "binding:vif_details": {"port_filter": true, "ovs_hybrid_plug": true}, "binding:vif_type": "ovs", "device_owner": "network:dhcp", "mac_address": "fa:16:3e:7a:d4:7a", "binding:profile": {}, "binding:vnic_type": "normal", "fixed_ips": [{"subnet_id": "6f0335e6-c0d5-46f8-b6f9-4f5d43fb1824", "ip_address": "192.168.42.3"}], "id": "e147dbae-2d1d-4efb-8c56-41672a1ae890", "device_id": "dhcp9c5f705c-9856-569f-9b1d-06e8515920ea-b7b1ddfb-79d4-4e77-9820-3e623d622591"}]} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179 2014-07-11 15:41:47.533 4930 DEBUG nova.network.api [-] Updating cache with info: [VIF({'ovs_interfaceid': u'deace7d3-bd51-4177-97a2-6b3ad0f75337', 'network': Network({'bridge': 'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'192.168.42.2'})], 'version': 4, 'meta': {'dhcp_server': u'192.168.42.3'}, 'dns': [], 'routes': [], 'cidr': u'192.168.42.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'192.168.42.1'})})], 'meta': {'injected': False, 'tenant_id': u'5c9c186a909e499e9da0dd5cf2c403e0'}, 'id': u'b7b1ddfb-79d4-4e77-9820-3e623d622591', 'label': u'netadm00'}), 'devname': u'tapdeace7d3-bd', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 
'address': u'fa:16:3e:b2:b4:8e', 'active': True, 'type': u'ovs', 'id': u'deace7d3-bd51-4177-97a2-6b3ad0f75337', 'qbg_params': None})] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/api.py:75 2014-07-11 15:41:47.547 4930 DEBUG nova.compute.manager [-] [instance: 779caa7e-a32e-4437-be15-447fad1e4d12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:4850 2014-07-11 15:41:47.548 4930 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py:132 From vimal7370 at gmail.com Fri Jul 11 14:08:27 2014 From: vimal7370 at gmail.com (Vimal Kumar) Date: Fri, 11 Jul 2014 19:38:27 +0530 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: ----- File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, in authenticate\\n content_type="application/json")\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\', u\'Unauthorized: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n'] ----- Looks like HTTP connection to neutron server is resulting in 401 error. Try enabling debug mode for neutron server and then tail /var/log/neutron/server.log , hopefully you should get more info. On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML wrote: > Hello, > > Ok I see. Nova telles neutron/openvswitch to create the bridge qbr prior > to the migration itself. > I ve already activate debug and verbose ... But well i'm really stuck, > dont know how and where to search/look ... > > > > Regards, > > > > > > 2014-07-11 15:09 GMT+02:00 Miguel Angel : > > Hi Benoit, >> >> A manual virsh migration should fail, because the >> network ports are not migrated to the destination host. >> >> You must investigate on the authentication problem itself, >> and let nova handle all the underlying API calls which should happen... >> >> May be it's worth setting nova.conf to debug=True >> >> >> >> --- >> irc: ajo / mangelajo >> Miguel Angel Ajo Pelayo >> +34 636 52 25 69 >> skype: ajoajoajo >> >> >> 2014-07-11 14:41 GMT+02:00 Benoit ML : >> >> Hello, >>> >>> cat /etc/redhat-release >>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0) >>> >>> >>> Regards, >>> >>> >>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets : >>> >>> Could you please post /etc/redhat-release. >>>> >>>> Boris. >>>> >>>> ------------------------------ >>>> Date: Fri, 11 Jul 2014 11:57:12 +0200 >>>> From: ben42ml at gmail.com >>>> To: rdo-list at redhat.com >>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration >>>> failed because of "network qbr no such device" >>>> >>>> >>>> Hello, >>>> >>>> I'm working on a multi-node setup of openstack Icehouse using centos7. >>>> Well i have : >>>> - one controllor node with all server services thing stuff >>>> - one network node with openvswitch agent, l3-agent, dhcp-agent >>>> - two compute node with nova-compute and neutron-openvswitch >>>> - one storage nfs node >>>> >>>> NetworkManager is deleted on compute nodes and network node. >>>> >>>> My network use is configured to use vxlan. 
I can create VM, >>>> tenant-network, external-network, routeur, assign floating-ip to VM, push >>>> ssh-key into VM, create volume from glance image, etc... Evrything is >>>> conected and reacheable. Pretty cool :) >>>> >>>> But when i try to migrate VM things go wrong ... I have configured >>>> nova, libvirtd and qemu to use migration through libvirt-tcp. >>>> I have create and exchanged ssh-key for nova user on all node. I have >>>> verified userid and groupid of nova. >>>> >>>> Well nova-compute log, on the target compute node, : >>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: >>>> a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >>>> Unauthorized {"error": {"m >>>> essage": "The request you have made requires authentication.", "code": >>>> 401, "title": "Unauthorized"}} >>>> >>>> >>>> So well after searching a lots in all logs, i have fount that i cant >>>> simply migration VM between compute node with a simple virsh : >>>> virsh migrate instance-00000084 qemu+tcp:///system >>>> >>>> The error is : >>>> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device >>>> >>>> Well when i look on the source hyperviseur the bridge "qbr3ca65809" >>>> exists and have a network tap device. And moreover i manually create >>>> qbr3ca65809 on the target hypervisor, virsh migrate succed ! >>>> >>>> Can you help me plz ? >>>> What can i do wrong ? Perhpas neutron must create the bridge before >>>> migration but didnt for a mis configuration ? >>>> >>>> Plz ask anything you need ! >>>> >>>> Thank you in advance. >>>> >>>> >>>> The full nova-compute log attached. >>>> >>>> >>>> >>>> >>>> >>>> >>>> Regards, >>>> >>>> -- >>>> -- >>>> Benoit >>>> >>>> _______________________________________________ Rdo-list mailing list >>>> Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>> >>> >>> >>> -- >>> -- >>> Benoit >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> >> > > > -- > -- > Benoit > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frizop at gmail.com Fri Jul 11 19:53:21 2014 From: frizop at gmail.com (Nathan M.) Date: Fri, 11 Jul 2014 12:53:21 -0700 Subject: [Rdo-list] (no subject) Message-ID: So I've tried to setup a local controller node and run into a problem with getting cinder to create a volume, first up the service can't find a place to drop the volume I create. If I disable and reenable the service, it shows as up - so I'm not sure how to proceed on this. 
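For what it's worth, my understanding from the docs is that the LVM driver is meant to be declared as a named backend rather than configured only in [DEFAULT]. Below is a rough sketch of what I *think* cinder.conf wants - the "lvm1" name is only a guess to match the node1.local at lvm1 host shown further down, and I haven't verified that this is actually my problem:

[DEFAULT]
enabled_backends = lvm1

[lvm1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
volume_backend_name = lvm1
iscsi_helper = tgtadm
iscsi_ip_address = 192.168.0.6

followed by a restart of the cinder services so the backend change is picked up, e.g.:

# SysV-style service names; use systemctl instead on EL7
for svc in openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume; do
    service $svc restart
done
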
I'll note nothing ever shows up in /etc/cinder/volumes Thanks in advance for any help gents/gals --Nathan [root at node1 cinder(openstack_admin)]# cinder service-list +------------------+------------------+------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+------------------+------+---------+-------+----------------------------+-----------------+ | cinder-scheduler | node1.local | nova | enabled | up | 2014-07-11T19:29:45.000000 | None | | cinder-volume | node1.local@ | nova | enabled | up | 2014-07-11T19:29:46.000000 | None | | cinder-volume | node1.local at lvm1 | nova | enabled | down | 2014-07-11T19:28:51.000000 | None | +------------------+------------------+------+---------+-------+----------------------------+-----------------+ [root at node1 cinder(openstack_admin)]# openstack-status == Nova services == openstack-nova-api: active openstack-nova-cert: active openstack-nova-compute: dead (disabled on boot) openstack-nova-network: dead (disabled on boot) openstack-nova-scheduler: active openstack-nova-conductor: active == Glance services == openstack-glance-api: active openstack-glance-registry: active == Keystone service == openstack-keystone: active == Horizon service == openstack-dashboard: active == neutron services == neutron-server: active neutron-dhcp-agent: inactive (disabled on boot) neutron-l3-agent: inactive (disabled on boot) neutron-metadata-agent: inactive (disabled on boot) neutron-lbaas-agent: inactive (disabled on boot) neutron-openvswitch-agent: inactive (disabled on boot) == Swift services == openstack-swift-proxy: active openstack-swift-account: dead (disabled on boot) openstack-swift-container: dead (disabled on boot) openstack-swift-object: dead (disabled on boot) == Cinder services == openstack-cinder-api: active openstack-cinder-scheduler: active openstack-cinder-volume: active openstack-cinder-backup: inactive (disabled on boot) == Ceilometer services == openstack-ceilometer-api: active openstack-ceilometer-central: active openstack-ceilometer-compute: dead (disabled on boot) openstack-ceilometer-collector: active == Heat services == openstack-heat-api: active openstack-heat-api-cfn: active openstack-heat-api-cloudwatch: inactive (disabled on boot) openstack-heat-engine: active == Support services == openvswitch: dead (disabled on boot) messagebus: active tgtd: active rabbitmq-server: active memcached: active == Keystone users == +----------------------------------+------------+---------+----------------------+ | id | name | enabled | email | +----------------------------------+------------+---------+----------------------+ | 555e3e826c9f445c9975d0e1c6e00fc6 | admin | True | admin at local | | 4cbb547624004bbeb650d9f73875c1a2 | ceilometer | True | ceilometer at localhost | | 492d8baa1ae94e8dbf503187b5ccd0a9 | cinder | True | cinder at localhost | | fdcac23cb0bc4cd08712722b213d2e93 | glance | True | glance at localhost | | fc0d0960be5b4714b37f969fbc48d9e4 | heat | True | heat at localhost | | 48a6c949564d4e96b465fe670c92015c | heat-cfn | True | heat-cfn at localhost | | 1c531b23585e4cefb7fff7659cded687 | neutron | True | neutron at localhost | | 6ede5eb09ca64cdb934f6c92b20ba3b3 | nova | True | nova at localhost | | 6ba7e665409e4ee883e29e6def759255 | swift | True | swift at localhost | +----------------------------------+------------+---------+----------------------+ == Glance images == 
+--------------------------------------+--------+-------------+------------------+-----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+--------+-------------+------------------+-----------+--------+ | 6019cfa8-ee46-4617-8b56-ae5dc82013a3 | cirros | qcow2 | bare | 237896192 | active | +--------------------------------------+--------+-------------+------------------+-----------+--------+ == Nova managed services == +------------------+-------------+----------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+-------------+----------+---------+-------+----------------------------+-----------------+ | nova-consoleauth | node1.local | internal | enabled | up | 2014-07-11T19:31:14.000000 | - | | nova-scheduler | node1.local | internal | enabled | up | 2014-07-11T19:31:15.000000 | - | | nova-conductor | node1.local | internal | enabled | up | 2014-07-11T19:31:14.000000 | - | | nova-cert | node1.local | internal | enabled | up | 2014-07-11T19:31:15.000000 | - | +------------------+-------------+----------+---------+-------+----------------------------+-----------------+ == Nova networks == +--------------------------------------+-------+------+ | ID | Label | Cidr | +--------------------------------------+-------+------+ | 9295ee23-c93b-43f6-801e-51d67e66313f | net1 | - | +--------------------------------------+-------+------+ == Nova instance flavors == +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True | | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ == Nova instances == +----+------+--------+------------+-------------+----------+ | ID | Name | Status | Task State | Power State | Networks | +----+------+--------+------------+-------------+----------+ +----+------+--------+------------+-------------+----------+ [root at node1 cinder(openstack_admin)]# nova volume-create --volume-type lvm --availability-zone nova 1 +---------------------+--------------------------------------+ | Property | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | created_at | 2014-07-11T19:34:43.064107 | | display_description | - | | display_name | - | | encrypted | False | | id | 17959681-7b63-4dd2-b856-083aef246fd9 | | metadata | {} | | size | 1 | | snapshot_id | - | | source_volid | - | | status | creating | | volume_type | lvm | +---------------------+--------------------------------------+ [root at node1 cinder(openstack_admin)]# tail -5 scheduler.log 2014-07-11 12:34:43.200 8665 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'555e3e826c9f445c9975d0e1c6e00fc6', 'tenant': u'ff6a2b534e984db58313ae194b2d908c', 'user_identity': u'555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -'} 2014-07-11 12:34:43.275 8665 ERROR cinder.scheduler.filters.capacity_filter 
[req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -] Free capacity not set: volume node info collection broken. 2014-07-11 12:34:43.275 8665 WARNING cinder.scheduler.filters.capacity_filter [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -] Insufficient free space for volume creation (requested / avail): 1/0.0 2014-07-11 12:34:43.325 8665 ERROR cinder.scheduler.flows.create_volume [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -] Failed to schedule_create_volume: No valid host was found. 2014-07-11 12:35:16.253 8665 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'} [root at node1 cinder(openstack_admin)]# cinder service-disable node1.local at lvm1 cinder-volume +------------------+---------------+----------+ | Host | Binary | Status | +------------------+---------------+----------+ | node1.local at lvm1 | cinder-volume | disabled | +------------------+---------------+----------+ [root at node1 cinder(openstack_admin)]# cinder service-enable node1.local at lvm1 cinder-volume +------------------+---------------+---------+ | Host | Binary | Status | +------------------+---------------+---------+ | node1.local at lvm1 | cinder-volume | enabled | +------------------+---------------+---------+ [root at node1 cinder(openstack_admin)]# cinder service-list +------------------+------------------+------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+------------------+------+---------+-------+----------------------------+-----------------+ | cinder-scheduler | node1.local | nova | enabled | up | 2014-07-11T19:36:29.000000 | None | | cinder-volume | node1.local@ | nova | enabled | up | 2014-07-11T19:36:30.000000 | None | | cinder-volume | node1.local at lvm1 | nova | enabled | up | 2014-07-11T19:36:35.000000 | None | +------------------+------------------+------+---------+-------+----------------------------+-----------------+ [root at node1 cinder(openstack_admin)]# sed -e '/^#/d' -e '/^$/d' /etc/cinder/cinder.conf [DEFAULT] amqp_durable_queues=False rabbit_host=localhost rabbit_port=5672 rabbit_hosts=localhost:5672 rabbit_userid=openstack rabbit_password= rabbit_virtual_host=/ rabbit_ha_queues=False notification_driver=cinder.openstack.common.notifier.rpc_notifier rpc_backend=cinder.openstack.common.rpc.impl_kombu control_exchange=openstack osapi_volume_listen=0.0.0.0 api_paste_config=/etc/cinder/api-paste.ini glance_host=192.168.0.6 auth_strategy=keystone enabled_backends= debug=False verbose=True log_dir=/var/log/cinder use_syslog=False iscsi_ip_address=192.168.0.6 volume_backend_name=DEFAULT iscsi_helper=tgtadm volumes_dir=/etc/cinder/volumes volume_group=cinder-volumes volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver [BRCD_FABRIC_EXAMPLE] [database] connection=mysql://cinder:@localhost/cinder idle_timeout=3600 [fc-zone-manager] [keymgr] [keystone_authtoken] [matchmaker_ring] [ssl] [root at node1 cinder(openstack_admin)]# lvdisplay cinder-volumes --- Logical volume --- LV Path /dev/cinder-volumes/cinder-volumes LV Name cinder-volumes VG Name cinder-volumes LV UUID wxnyZJ-3BM0-Dnzt-h2Pt-k6qY-1lFG-CEuwfp LV Write Access read/write LV Creation host, time node1.local, 2014-07-10 21:59:48 -0700 LV Status available # 
open 0 LV Size 4.00 GiB Current LE 1023 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sgordon at redhat.com  Fri Jul 11 20:03:10 2014
From: sgordon at redhat.com (Steve Gordon)
Date: Fri, 11 Jul 2014 16:03:10 -0400 (EDT)
Subject: [Rdo-list] Fwd: [Bug 1117871] Could not evaluate: Could not find init script for 'messagebus' - RDO Icehouse AIO on CentOS 7
In-Reply-To: 
References: 
Message-ID: <1731788381.4617922.1405108990202.JavaMail.zimbra@redhat.com>

Is anyone able to take a look? It seems like we have an issue on the recently released CentOS 7.

----- Forwarded Message -----
> From: bugzilla at redhat.com
> To: sgordon at redhat.com
> Sent: Friday, July 11, 2014 12:37:23 PM
> Subject: [Bug 1117871] Could not evaluate: Could not find init script for 'messagebus' - RDO Icehouse AIO on CentOS 7
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1117871
> 
> Alejandro Cortina changed:
> 
> What |Removed |Added
> ----------------------------------------------------------------------------
> CC| |alitox at gmail.com
> 
> 
> 
> --- Comment #1 from Alejandro Cortina ---
> I had a different error but I fixed with the same solution provided in:
> 
> https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/
> 
> "..replace content /etc/redhat-release with "Fedora release 20 (Heisenbug)"
> and
> rerun packstack --allinone. In meantime I have IceHouse AIO Instance on
> CentOS
> 7 completely functional."
> 
> Terminal:
> 
> ERROR : Error appeared during Puppet run: 192.168.11.19_prescript.pp
> Error: comparison of String with 7 failed at
> /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15
> on node stack1.local.lan
> You will find full trace in log
> /var/tmp/packstack/20140712-012704-8RBDNB/manifests/192.168.11.19_prescript.pp.log
> Please check log file
> /var/tmp/packstack/20140712-012704-8RBDNB/openstack-setup.log for more
> information
> 
> 
> openstack-setup.log:
> 
> ...
> tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall > glance heat horizon inifile keystone memcached mongodb mysql neutron nova > nssdb > openstack packstack qp > id rabbitmq rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd > | > ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null > root at 192.168.11.19 tar -C > /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/modules -xpzf - > 2014-07-12 01:28:57::ERROR::run_setup::920::root:: Traceback (most recent > call > last): > File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", > line 915, in main > _main(confFile) > File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", > line 605, in _main > runSequences() > File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", > line 584, in runSequences > controller.runAllSequences() > File > "/usr/lib/python2.7/site-packages/packstack/installer/setup_controller.py", > line 68, in runAllSequences > sequence.run(config=self.CONF, messages=self.MESSAGES) > File > "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", > line > 98, in run > step.run(config=config, messages=messages) > File > "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", > line > 44, in run > raise SequenceError(str(ex)) > SequenceError: Error appeared during Puppet run: 192.168.11.19_prescript.pp > Error: comparison of String with 7 failed at > /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15 > on node stack1.local.lan > You will find full trace in log > /var/tmp/packstack/20140712-012704-8RBDNB/manifests/192.168.11.19_prescript.pp.log > > 2014-07-12 01:28:57::INFO::shell::81::root:: [192.168.11.19] Executing > script: > rm -rf /var/tmp/packstack/2761ac128766421ab10ff27c754a6285 > [root at stack1 20140712-012704-8RBDNB]# > > > 192.168.11.19_prescript.pp.log: > > Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults > Error: comparison of String with 7 failed at > /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15 > on node stack1.local.lan > Wrapped exception: > comparison of String with 7 failed > Error: comparison of String with 7 failed at > /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15 > on node stack1.local.lan > > -- > You are receiving this mail because: > You reported the bug. > -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From kchamart at redhat.com Mon Jul 14 08:37:24 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 14 Jul 2014 14:07:24 +0530 Subject: [Rdo-list] [Heads-up] RDO Bug triage tomorrow [15JUL2014] Message-ID: <20140714083724.GA24398@tesla.redhat.com> Heya, It's that time again! Tomorrow (3rd Tuesday of the month) is RDO bug triage day. If you have some spare cycles, please join us in helping triage bugs/root-cause analysis. Here's some details to get started[1] with bug triaging. Briefly, current state of RDO bugs: - NEW, ASSIGNED, ON_DEV : 177 - MODIFIED, POST, ON_QA : 125 All the bugs with their descriptions in plain text here[2]. As usual, if you have questions/comments, post to this list or on #rdo IRC channel on Freenode. 
[1] http://openstack.redhat.com/RDO-BugTriage
[2] http://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/all-rdo-bugs-14-07-2014.txt

-- 
/kashyap

From kchamart at redhat.com  Mon Jul 14 18:21:22 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 14 Jul 2014 23:51:22 +0530
Subject: [Rdo-list] [Heads-up] RDO Bug triage tomorrow [15JUL2014]
In-Reply-To: <20140714083724.GA24398@tesla.redhat.com>
References: <20140714083724.GA24398@tesla.redhat.com>
Message-ID: <20140714182122.GA19959@tesla.redhat.com>

On Mon, Jul 14, 2014 at 02:07:24PM +0530, Kashyap Chamarthy wrote:
> Heya,
> 
> It's that time again! Tomorrow (3rd Tuesday of the month)

Just a correction: Looking at our original thread[*] on deciding the
recurring bug triage date, we had it as 3rd Wednesday of every month :-)
But our wiki page below[1] had Tuesday. Now that we've had this email out
anyway, we can stick to Tuesday (and hey, what's in a day :-) ) for this
time.

Also, Rich Bowen has already made this announcement on the inter-webs;
I just don't want to add additional corrections there. (Thanks to Lars
for pointing this out.)

[*] https://www.redhat.com/archives/rdo-list/2014-January/msg00035.html

> is RDO bug
> triage day. If you have some spare cycles, please join us in helping
> triage bugs/root-cause analysis. Here's some details to get started[1]
> with bug triaging.
> 
> 
> Briefly, current state of RDO bugs:
> 
>   - NEW, ASSIGNED, ON_DEV : 177
>   - MODIFIED, POST, ON_QA : 125
> 
> All the bugs with their descriptions in plain text here[2].
> 
> As usual, if you have questions/comments, post to this list or on #rdo
> IRC channel on Freenode.
> 
> 
> [1] http://openstack.redhat.com/RDO-BugTriage
> [2] http://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/all-rdo-bugs-14-07-2014.txt
> 
> 
> -- 
> /kashyap
> 

From ben42ml at gmail.com  Tue Jul 15 12:20:46 2014
From: ben42ml at gmail.com (Benoit ML)
Date: Tue, 15 Jul 2014 14:20:46 +0200
Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device"
In-Reply-To: 
References: 
Message-ID: 

Hello,

Thank you for taking the time!
Well on the compute node, when i activate "vif_plugging_is_fatal = True", the vm creation stuck in spawning state, and in neutron server log i have : ======================================= 2014-07-15 14:12:52.351 18448 DEBUG neutron.notifiers.nova [-] Sending events: [{'status': 'completed', 'tag': u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] send_events /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:218 2014-07-15 14:12:52.354 18448 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): localhost 2014-07-15 14:12:52.360 18448 DEBUG urllib3.connectionpool [-] "POST /v2/5c9c186a909e499e9da0dd5cf2c403e0/os-server-external-events HTTP/1.1" 401 23 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 2014-07-15 14:12:52.362 18448 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): localhost 2014-07-15 14:12:52.452 18448 DEBUG urllib3.connectionpool [-] "POST /v2.0/tokens HTTP/1.1" 401 114 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 2014-07-15 14:12:52.453 18448 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Traceback (most recent call last): 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py", line 221, in send_events 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova batched_events) 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/site-packages/novaclient/v1_1/contrib/server_external_events.py", line 39, in create 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova return_raw=True) 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova _resp, body = self.api.client.post(url, body=body) 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova return self._cs_request(url, 'POST', **kwargs) 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 301, in _cs_request 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova raise e 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Unauthorized: Unauthorized (HTTP 401) 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova 2014-07-15 14:12:58.321 18448 DEBUG neutron.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-9bf35c42-3477-4ed3-8092-af729c21198c', u'_context_read_deleted': u'no', u'_context_user_name': None, u'_context_project_name': None, u'namespace': None, u'_context_tenant_id': None, u'args': {u'agent_state': {u'agent_state': {u'topic': u'N/A', u'binary': u'neutron-openvswitch-agent', u'host': u'pvidgsh006.pvi', u'agent_type': u'Open vSwitch agent', u'configurations': {u'tunnel_types': [u'vxlan'], u'tunneling_ip': u'192.168.40.5', u'bridge_mappings': {}, u'l2_population': False, u'devices': 1}}}, u'time': u'2014-07-15T12:12:58.313995'}, u'_context_tenant': None, u'_unique_id': 
u'7c9a4dfcd256494caf6e1327c8051e29', u'_context_is_admin': True, u'version': u'1.0', u'_context_timestamp': u'2014-07-15 12:01:28.190772', u'_context_tenant_name': None, u'_context_user': None, u'_context_user_id': None, u'method': u'report_state', u'_context_project_id': None} _safe_log /usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/common.py:280 ======================================= Well I'm supposed it's related ... Perhaps with those options in neutron.conf : ====================================== notify_nova_on_port_status_changes = True notify_nova_on_port_data_changes = True nova_url = http://localhost:8774/v2 nova_admin_tenant_name = services nova_admin_username = nova nova_admin_password = nova nova_admin_auth_url = http://localhost:35357/v2.0 ====================================== But well didnt see anything wrong ... Thank you in advance ! Regards, 2014-07-11 16:08 GMT+02:00 Vimal Kumar : > ----- > File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, > in authenticate\\n content_type="application/json")\\n\', u\' File > "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in > _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\', > u\'Unauthorized: {"error": {"message": "The request you have made requires > authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n'] > ----- > > Looks like HTTP connection to neutron server is resulting in 401 error. > > Try enabling debug mode for neutron server and then tail > /var/log/neutron/server.log , hopefully you should get more info. > > > On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML wrote: > >> Hello, >> >> Ok I see. Nova telles neutron/openvswitch to create the bridge qbr prior >> to the migration itself. >> I ve already activate debug and verbose ... But well i'm really stuck, >> dont know how and where to search/look ... >> >> >> >> Regards, >> >> >> >> >> >> 2014-07-11 15:09 GMT+02:00 Miguel Angel : >> >> Hi Benoit, >>> >>> A manual virsh migration should fail, because the >>> network ports are not migrated to the destination host. >>> >>> You must investigate on the authentication problem itself, >>> and let nova handle all the underlying API calls which should happen... >>> >>> May be it's worth setting nova.conf to debug=True >>> >>> >>> >>> --- >>> irc: ajo / mangelajo >>> Miguel Angel Ajo Pelayo >>> +34 636 52 25 69 >>> skype: ajoajoajo >>> >>> >>> 2014-07-11 14:41 GMT+02:00 Benoit ML : >>> >>> Hello, >>>> >>>> cat /etc/redhat-release >>>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0) >>>> >>>> >>>> Regards, >>>> >>>> >>>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets : >>>> >>>> Could you please post /etc/redhat-release. >>>>> >>>>> Boris. >>>>> >>>>> ------------------------------ >>>>> Date: Fri, 11 Jul 2014 11:57:12 +0200 >>>>> From: ben42ml at gmail.com >>>>> To: rdo-list at redhat.com >>>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration >>>>> failed because of "network qbr no such device" >>>>> >>>>> >>>>> Hello, >>>>> >>>>> I'm working on a multi-node setup of openstack Icehouse using centos7. >>>>> Well i have : >>>>> - one controllor node with all server services thing stuff >>>>> - one network node with openvswitch agent, l3-agent, dhcp-agent >>>>> - two compute node with nova-compute and neutron-openvswitch >>>>> - one storage nfs node >>>>> >>>>> NetworkManager is deleted on compute nodes and network node. >>>>> >>>>> My network use is configured to use vxlan. 
I can create VM, >>>>> tenant-network, external-network, routeur, assign floating-ip to VM, push >>>>> ssh-key into VM, create volume from glance image, etc... Evrything is >>>>> conected and reacheable. Pretty cool :) >>>>> >>>>> But when i try to migrate VM things go wrong ... I have configured >>>>> nova, libvirtd and qemu to use migration through libvirt-tcp. >>>>> I have create and exchanged ssh-key for nova user on all node. I have >>>>> verified userid and groupid of nova. >>>>> >>>>> Well nova-compute log, on the target compute node, : >>>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: >>>>> a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >>>>> Unauthorized {"error": {"m >>>>> essage": "The request you have made requires authentication.", "code": >>>>> 401, "title": "Unauthorized"}} >>>>> >>>>> >>>>> So well after searching a lots in all logs, i have fount that i cant >>>>> simply migration VM between compute node with a simple virsh : >>>>> virsh migrate instance-00000084 qemu+tcp:///system >>>>> >>>>> The error is : >>>>> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device >>>>> >>>>> Well when i look on the source hyperviseur the bridge "qbr3ca65809" >>>>> exists and have a network tap device. And moreover i manually create >>>>> qbr3ca65809 on the target hypervisor, virsh migrate succed ! >>>>> >>>>> Can you help me plz ? >>>>> What can i do wrong ? Perhpas neutron must create the bridge before >>>>> migration but didnt for a mis configuration ? >>>>> >>>>> Plz ask anything you need ! >>>>> >>>>> Thank you in advance. >>>>> >>>>> >>>>> The full nova-compute log attached. >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Regards, >>>>> >>>>> -- >>>>> -- >>>>> Benoit >>>>> >>>>> _______________________________________________ Rdo-list mailing list >>>>> Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> Benoit >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> >>> >> >> >> -- >> -- >> Benoit >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> > -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben42ml at gmail.com Tue Jul 15 13:13:07 2014 From: ben42ml at gmail.com (Benoit ML) Date: Tue, 15 Jul 2014 15:13:07 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Hello again, Ok on controller node I modify the neutron server configuration with nova_admin_tenant_id = f23ed5be5f534fdba31d23f60621347d where id is "services" in keystone and now it's working with "vif_plugging_is_fatal = True". Good thing. Well by the way the migrate doesnt working ... 2014-07-15 14:20 GMT+02:00 Benoit ML : > Hello, > > Thank you for taking time ! 
> > Well on the compute node, when i activate "vif_plugging_is_fatal = True", > the vm creation stuck in spawning state, and in neutron server log i have : > > ======================================= > 2014-07-15 14:12:52.351 18448 DEBUG neutron.notifiers.nova [-] Sending > events: [{'status': 'completed', 'tag': > u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', > 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] send_events > /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:218 > 2014-07-15 14:12:52.354 18448 INFO urllib3.connectionpool [-] Starting new > HTTP connection (1): localhost > 2014-07-15 14:12:52.360 18448 DEBUG urllib3.connectionpool [-] "POST > /v2/5c9c186a909e499e9da0dd5cf2c403e0/os-server-external-events HTTP/1.1" > 401 23 _make_request > /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 > 2014-07-15 14:12:52.362 18448 INFO urllib3.connectionpool [-] Starting new > HTTP connection (1): localhost > 2014-07-15 14:12:52.452 18448 DEBUG urllib3.connectionpool [-] "POST > /v2.0/tokens HTTP/1.1" 401 114 _make_request > /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 > 2014-07-15 14:12:52.453 18448 ERROR neutron.notifiers.nova [-] Failed to > notify nova on events: [{'status': 'completed', 'tag': > u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', > 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Traceback (most > recent call last): > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File > "/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py", line 221, in > send_events > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova > batched_events) > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File > "/usr/lib/python2.7/site-packages/novaclient/v1_1/contrib/server_external_events.py", > line 39, in create > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova > return_raw=True) > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File > "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova _resp, body > = self.api.client.post(url, body=body) > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File > "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova return > self._cs_request(url, 'POST', **kwargs) > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File > "/usr/lib/python2.7/site-packages/novaclient/client.py", line 301, in > _cs_request > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova raise e > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Unauthorized: > Unauthorized (HTTP 401) > 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova > 2014-07-15 14:12:58.321 18448 DEBUG neutron.openstack.common.rpc.amqp [-] > received {u'_context_roles': [u'admin'], u'_context_request_id': > u'req-9bf35c42-3477-4ed3-8092-af729c21198c', u'_context_read_deleted': > u'no', u'_context_user_name': None, u'_context_project_name': None, > u'namespace': None, u'_context_tenant_id': None, u'args': {u'agent_state': > {u'agent_state': {u'topic': u'N/A', u'binary': > u'neutron-openvswitch-agent', u'host': u'pvidgsh006.pvi', u'agent_type': > u'Open vSwitch agent', u'configurations': {u'tunnel_types': [u'vxlan'], > u'tunneling_ip': u'192.168.40.5', u'bridge_mappings': {}, u'l2_population': > 
False, u'devices': 1}}}, u'time': u'2014-07-15T12:12:58.313995'}, > u'_context_tenant': None, u'_unique_id': > u'7c9a4dfcd256494caf6e1327c8051e29', u'_context_is_admin': True, > u'version': u'1.0', u'_context_timestamp': u'2014-07-15 12:01:28.190772', > u'_context_tenant_name': None, u'_context_user': None, u'_context_user_id': > None, u'method': u'report_state', u'_context_project_id': None} _safe_log > /usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/common.py:280 > ======================================= > > Well I'm supposed it's related ... Perhaps with those options in > neutron.conf : > ====================================== > notify_nova_on_port_status_changes = True > notify_nova_on_port_data_changes = True > nova_url = http://localhost:8774/v2 > nova_admin_tenant_name = services > nova_admin_username = nova > nova_admin_password = nova > nova_admin_auth_url = http://localhost:35357/v2.0 > ====================================== > > But well didnt see anything wrong ... > > Thank you in advance ! > > Regards, > > > > 2014-07-11 16:08 GMT+02:00 Vimal Kumar : > > ----- >> File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line >> 239, in authenticate\\n content_type="application/json")\\n\', u\' File >> "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in >> _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\', >> u\'Unauthorized: {"error": {"message": "The request you have made requires >> authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n'] >> ----- >> >> Looks like HTTP connection to neutron server is resulting in 401 error. >> >> Try enabling debug mode for neutron server and then tail >> /var/log/neutron/server.log , hopefully you should get more info. >> >> >> On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML wrote: >> >>> Hello, >>> >>> Ok I see. Nova telles neutron/openvswitch to create the bridge qbr prior >>> to the migration itself. >>> I ve already activate debug and verbose ... But well i'm really stuck, >>> dont know how and where to search/look ... >>> >>> >>> >>> Regards, >>> >>> >>> >>> >>> >>> 2014-07-11 15:09 GMT+02:00 Miguel Angel : >>> >>> Hi Benoit, >>>> >>>> A manual virsh migration should fail, because the >>>> network ports are not migrated to the destination host. >>>> >>>> You must investigate on the authentication problem itself, >>>> and let nova handle all the underlying API calls which should happen... >>>> >>>> May be it's worth setting nova.conf to debug=True >>>> >>>> >>>> >>>> --- >>>> irc: ajo / mangelajo >>>> Miguel Angel Ajo Pelayo >>>> +34 636 52 25 69 >>>> skype: ajoajoajo >>>> >>>> >>>> 2014-07-11 14:41 GMT+02:00 Benoit ML : >>>> >>>> Hello, >>>>> >>>>> cat /etc/redhat-release >>>>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0) >>>>> >>>>> >>>>> Regards, >>>>> >>>>> >>>>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets : >>>>> >>>>> Could you please post /etc/redhat-release. >>>>>> >>>>>> Boris. >>>>>> >>>>>> ------------------------------ >>>>>> Date: Fri, 11 Jul 2014 11:57:12 +0200 >>>>>> From: ben42ml at gmail.com >>>>>> To: rdo-list at redhat.com >>>>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration >>>>>> failed because of "network qbr no such device" >>>>>> >>>>>> >>>>>> Hello, >>>>>> >>>>>> I'm working on a multi-node setup of openstack Icehouse using centos7. 
>>>>>> Well i have : >>>>>> - one controllor node with all server services thing stuff >>>>>> - one network node with openvswitch agent, l3-agent, dhcp-agent >>>>>> - two compute node with nova-compute and neutron-openvswitch >>>>>> - one storage nfs node >>>>>> >>>>>> NetworkManager is deleted on compute nodes and network node. >>>>>> >>>>>> My network use is configured to use vxlan. I can create VM, >>>>>> tenant-network, external-network, routeur, assign floating-ip to VM, push >>>>>> ssh-key into VM, create volume from glance image, etc... Evrything is >>>>>> conected and reacheable. Pretty cool :) >>>>>> >>>>>> But when i try to migrate VM things go wrong ... I have configured >>>>>> nova, libvirtd and qemu to use migration through libvirt-tcp. >>>>>> I have create and exchanged ssh-key for nova user on all node. I >>>>>> have verified userid and groupid of nova. >>>>>> >>>>>> Well nova-compute log, on the target compute node, : >>>>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: >>>>>> a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >>>>>> Unauthorized {"error": {"m >>>>>> essage": "The request you have made requires authentication.", >>>>>> "code": 401, "title": "Unauthorized"}} >>>>>> >>>>>> >>>>>> So well after searching a lots in all logs, i have fount that i cant >>>>>> simply migration VM between compute node with a simple virsh : >>>>>> virsh migrate instance-00000084 qemu+tcp:///system >>>>>> >>>>>> The error is : >>>>>> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device >>>>>> >>>>>> Well when i look on the source hyperviseur the bridge "qbr3ca65809" >>>>>> exists and have a network tap device. And moreover i manually create >>>>>> qbr3ca65809 on the target hypervisor, virsh migrate succed ! >>>>>> >>>>>> Can you help me plz ? >>>>>> What can i do wrong ? Perhpas neutron must create the bridge before >>>>>> migration but didnt for a mis configuration ? >>>>>> >>>>>> Plz ask anything you need ! >>>>>> >>>>>> Thank you in advance. >>>>>> >>>>>> >>>>>> The full nova-compute log attached. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Regards, >>>>>> >>>>>> -- >>>>>> -- >>>>>> Benoit >>>>>> >>>>>> _______________________________________________ Rdo-list mailing list >>>>>> Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> -- >>>>> Benoit >>>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> >>>> >>> >>> >>> -- >>> -- >>> Benoit >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> >> > > > -- > -- > Benoit > -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam.huffman at gmail.com Tue Jul 15 13:32:57 2014 From: adam.huffman at gmail.com (Adam Huffman) Date: Tue, 15 Jul 2014 14:32:57 +0100 Subject: [Rdo-list] Glance/Keystone problem Message-ID: I've altered Keystone on my Icehouse cloud to use Apache/mod_ssl. The Keystone and Nova clients are working (more or less) but I'm having trouble with Glance. 
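For context, the [keystone_authtoken] section of my glance-api.conf is shaped roughly like the sketch below. The hostname, CA path and password are placeholders rather than my real values, and I'm reconstructing this from memory, so treat it as indicative only:

[keystone_authtoken]
auth_uri = https://keystone.example.org:5000/v2.0
auth_host = keystone.example.org
auth_port = 35357
auth_protocol = https
cafile = /etc/pki/tls/certs/keystone-ca.pem
admin_tenant_name = services
admin_user = glance
admin_password = GLANCE_PASS
signing_dir = /var/cache/glance/api
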
Here's an example of the sort of error I'm seeing from the Glance api.log: 2014-07-15 14:24:00.551 24063 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: GET /v1/shared-images/e35356df747b4c5aa663fae2897facba Accept: process_request /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:44 2014-07-15 14:24:00.552 24063 DEBUG glance.api.middleware.version_negotiation [-] Using url versioning process_request /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:57 2014-07-15 14:24:00.552 24063 DEBUG glance.api.middleware.version_negotiation [-] Matched version: v1 process_request /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:69 2014-07-15 14:24:00.552 24063 DEBUG glance.api.middleware.version_negotiation [-] new path /v1/shared-images/e35356df747b4c5aa663fae2897facba process_request /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:70 2014-07-15 14:24:00.553 24063 DEBUG keystoneclient.middleware.auth_token [-] Authenticating user token __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:666 2014-07-15 14:24:00.553 24063 DEBUG keystoneclient.middleware.auth_token [-] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:725 2014-07-15 14:24:00.591 24063 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 2014-07-15 14:24:01.921 24063 DEBUG urllib3.connectionpool [-] "POST /v2.0/tokens HTTP/1.1" 200 7003 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 2014-07-15 14:24:01.931 24063 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 2014-07-15 14:24:03.243 24063 DEBUG urllib3.connectionpool [-] "GET /v2.0/tokens/revoked HTTP/1.1" 200 682 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 2014-07-15 14:24:03.252 24063 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 2014-07-15 14:24:04.529 24063 DEBUG urllib3.connectionpool [-] "GET / HTTP/1.1" 300 384 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 2014-07-15 14:24:04.530 24063 DEBUG keystoneclient.middleware.auth_token [-] Server reports support for api versions: v3.0 _get_supported_versions /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:656 2014-07-15 14:24:04.531 24063 INFO keystoneclient.middleware.auth_token [-] Auth Token confirmed use of v3.0 apis 2014-07-15 14:24:04.531 24063 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 2014-07-15 14:24:04.667 24063 DEBUG urllib3.connectionpool [-] "GET /v3/OS-SIMPLE-CERT/certificates HTTP/1.1" 404 93 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 2014-07-15 14:24:04.669 24063 DEBUG keystoneclient.middleware.auth_token [-] Token validation failure. 
_validate_user_token /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:943 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token Traceback (most recent call last): 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 930, in _validate_user_token 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token verified = self.verify_signed_token(user_token, token_ids) 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1347, in verify_signed_token 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token if self.is_signed_token_revoked(token_ids): 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1299, in is_signed_token_revoked 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token if self._is_token_id_in_revoked_list(token_id): 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1306, in _is_token_id_in_revoked_list 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token revocation_list = self.token_revocation_list 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1413, in token_revocation_list 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token self.token_revocation_list = self.fetch_revocation_list() 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1459, in fetch_revocation_list 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token return self.cms_verify(data['signed']) 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1333, in cms_verify 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token self.fetch_signing_cert() 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1477, in fetch_signing_cert 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token self._fetch_cert_file(self.signing_cert_file_name, 'signing') 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1473, in _fetch_cert_file 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token raise exceptions.CertificateConfigError(response.text) 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token CertificateConfigError: Unable to load certificate. Ensure your system is configured properly. 
2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token 2014-07-15 14:24:04.671 24063 DEBUG keystoneclient.middleware.auth_token [-] Marking token as unauthorized in cache _cache_store_invalid /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1239 2014-07-15 14:24:04.672 24063 WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token 2014-07-15 14:24:04.672 24063 INFO keystoneclient.middleware.auth_token [-] Invalid user token - deferring reject downstream 2014-07-15 14:24:04.674 24063 INFO glance.wsgi.server [-] - - [15/Jul/2014 14:24:04] "GET /v1/shared-images/e35356df747b4c5aa663fae2897facba HTTP/1.1" 401 381 4.124231 There is a bug report about a race condition involving Cinder, but that was supposed to have been fixed. Any suggestions appreciated. Best Wishes, Adam From kchamart at redhat.com Wed Jul 16 05:06:27 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 16 Jul 2014 10:36:27 +0530 Subject: [Rdo-list] (no subject) In-Reply-To: References: Message-ID: <20140716050627.GA29802@tesla.redhat.com> [Please add a subject line with summary, it improves chances for people to respond.] On Fri, Jul 11, 2014 at 12:53:21PM -0700, Nathan M. wrote: > So I've tried to setup a local controller node and run into a problem with > getting cinder to create a volume, first up the service can't find a place > to drop the volume I create. > > If I disable and reenable the service, it shows as up - so I'm not sure how > to proceed on this. I'll note nothing ever shows up in /etc/cinder/volumes Are you able to reproduce this issue (assuming you can consistently) on current latest IceHouse RDO packages? You haven't noted the versions you're using. [. . .] > [root at node1 cinder(openstack_admin)]# tail -5 scheduler.log > 2014-07-11 12:34:43.200 8665 WARNING cinder.context [-] Arguments dropped > when creating context: {'user': u'555e3e826c9f445c9975d0e1c6e00fc6', > 'tenant': u'ff6a2b534e984db58313ae194b2d908c', 'user_identity': > u'555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -'} > 2014-07-11 12:34:43.275 8665 ERROR cinder.scheduler.filters.capacity_filter > [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 > ff6a2b534e984db58313ae194b2d908c - - -] Free capacity not set: volume node > info collection broken. > 2014-07-11 12:34:43.275 8665 WARNING > cinder.scheduler.filters.capacity_filter > [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 > ff6a2b534e984db58313ae194b2d908c - - -] Insufficient free space for volume > creation (requested / avail): 1/0.0 Maybe you don't really have enough free space there? I don't have a Cinder setup to do a sanity check, you might want to ensure if you have your Cinder filter scheduler configured correctly. > 2014-07-11 12:34:43.325 8665 ERROR cinder.scheduler.flows.create_volume > [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 > ff6a2b534e984db58313ae194b2d908c - - -] Failed to schedule_create_volume: > No valid host was found. 
> 2014-07-11 12:35:16.253 8665 WARNING cinder.context [-] Arguments dropped > when creating context: {'user': None, 'tenant': None, 'user_identity': u'- > - - - -'} > -- /kashyap From flavio at redhat.com Wed Jul 16 08:11:06 2014 From: flavio at redhat.com (Flavio Percoco) Date: Wed, 16 Jul 2014 10:11:06 +0200 Subject: [Rdo-list] Glance/Keystone problem In-Reply-To: References: Message-ID: <53C6339A.6030307@redhat.com> On 07/15/2014 03:32 PM, Adam Huffman wrote: > I've altered Keystone on my Icehouse cloud to use Apache/mod_ssl. The > Keystone and Nova clients are working (more or less) but I'm having > trouble with Glance. Hi Adam, We'd need your config files to have a better idea of what the issue could be. Based on the logs you just sent, keystone's middleware can't find/load the certification file: "Unable to load certificate. Ensure your system is configured properly" Some things you could check: 1. Is the file path in your config file correct? 2. Is the config option name correct? 3. Is the file readable? Hope the above helps, Flavio > > Here's an example of the sort of error I'm seeing from the Glance api.log: > > > 2014-07-15 14:24:00.551 24063 DEBUG > glance.api.middleware.version_negotiation [-] Determining version of > request: GET /v1/shared-images/e35356df747b4c5aa663fae2897facba > Accept: process_request > /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:44 > 2014-07-15 14:24:00.552 24063 DEBUG > glance.api.middleware.version_negotiation [-] Using url versioning > process_request > /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:57 > 2014-07-15 14:24:00.552 24063 DEBUG > glance.api.middleware.version_negotiation [-] Matched version: v1 > process_request > /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:69 > 2014-07-15 14:24:00.552 24063 DEBUG > glance.api.middleware.version_negotiation [-] new path > /v1/shared-images/e35356df747b4c5aa663fae2897facba process_request > /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:70 > 2014-07-15 14:24:00.553 24063 DEBUG > keystoneclient.middleware.auth_token [-] Authenticating user token > __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:666 > 2014-07-15 14:24:00.553 24063 DEBUG > keystoneclient.middleware.auth_token [-] Removing headers from request > environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role > _remove_auth_headers > /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:725 > 2014-07-15 14:24:00.591 24063 INFO urllib3.connectionpool [-] Starting > new HTTPS connection (1): > 2014-07-15 14:24:01.921 24063 DEBUG urllib3.connectionpool [-] "POST > /v2.0/tokens HTTP/1.1" 200 7003 _make_request > /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 > 2014-07-15 14:24:01.931 24063 INFO urllib3.connectionpool [-] Starting > new HTTPS connection (1): > 2014-07-15 14:24:03.243 24063 DEBUG urllib3.connectionpool [-] "GET > /v2.0/tokens/revoked HTTP/1.1" 200 682 _make_request > /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 > 2014-07-15 14:24:03.252 24063 INFO urllib3.connectionpool [-] Starting > new HTTPS connection (1): > 2014-07-15 14:24:04.529 24063 DEBUG urllib3.connectionpool [-] "GET / > HTTP/1.1" 300 384 _make_request > 
/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 > 2014-07-15 14:24:04.530 24063 DEBUG > keystoneclient.middleware.auth_token [-] Server reports support for > api versions: v3.0 _get_supported_versions > /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:656 > 2014-07-15 14:24:04.531 24063 INFO > keystoneclient.middleware.auth_token [-] Auth Token confirmed use of > v3.0 apis > 2014-07-15 14:24:04.531 24063 INFO urllib3.connectionpool [-] Starting > new HTTPS connection (1): > 2014-07-15 14:24:04.667 24063 DEBUG urllib3.connectionpool [-] "GET > /v3/OS-SIMPLE-CERT/certificates HTTP/1.1" 404 93 _make_request > /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 > 2014-07-15 14:24:04.669 24063 DEBUG > keystoneclient.middleware.auth_token [-] Token validation failure. > _validate_user_token > /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:943 > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token Traceback (most recent call > last): > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 930, in _validate_user_token > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token verified = > self.verify_signed_token(user_token, token_ids) > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1347, in verify_signed_token > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token if > self.is_signed_token_revoked(token_ids): > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1299, in is_signed_token_revoked > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token if > self._is_token_id_in_revoked_list(token_id): > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1306, in _is_token_id_in_revoked_list > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token revocation_list = > self.token_revocation_list > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1413, in token_revocation_list > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token self.token_revocation_list = > self.fetch_revocation_list() > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1459, in fetch_revocation_list > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token return > self.cms_verify(data['signed']) > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1333, in cms_verify > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token self.fetch_signing_cert() > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1477, in fetch_signing_cert > 2014-07-15 14:24:04.669 24063 TRACE > 
keystoneclient.middleware.auth_token > self._fetch_cert_file(self.signing_cert_file_name, 'signing') > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token File > "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", > line 1473, in _fetch_cert_file > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token raise > exceptions.CertificateConfigError(response.text) > 2014-07-15 14:24:04.669 24063 TRACE > keystoneclient.middleware.auth_token CertificateConfigError: Unable to > load certificate. Ensure your system is configured properly. > 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token > 2014-07-15 14:24:04.671 24063 DEBUG > keystoneclient.middleware.auth_token [-] Marking token as unauthorized > in cache _cache_store_invalid > /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1239 > 2014-07-15 14:24:04.672 24063 WARNING > keystoneclient.middleware.auth_token [-] Authorization failed for > token > 2014-07-15 14:24:04.672 24063 INFO > keystoneclient.middleware.auth_token [-] Invalid user token - > deferring reject downstream > 2014-07-15 14:24:04.674 24063 INFO glance.wsgi.server [-] > - - [15/Jul/2014 14:24:04] "GET > /v1/shared-images/e35356df747b4c5aa663fae2897facba HTTP/1.1" 401 381 > 4.124231 > > There is a bug report about a race condition involving Cinder, but > that was supposed to have been fixed. > > Any suggestions appreciated. > > Best Wishes, > Adam > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- @flaper87 Flavio Percoco From kfiresmith at gmail.com Wed Jul 16 12:25:45 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 16 Jul 2014 08:25:45 -0400 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? Message-ID: Hello, First go-round with Openstack and first post on the list so bear with me... I've been working through the manual installation of RDO using the docs.openstack installation guide. Everything went smoothly for the most part until Neutron. It appears I've been hit by the same bug(?) discussed here: http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here: https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html ...among other places. Upon first launch of the neutron-server daemon, this appears in the neutron-server log file: http://paste.openstack.org/show/86614/ And once you go into the db you can see that a bunch of tables are not created that should be. As the first link alludes to, it looks like a MyISAM / InnoDB formatting mix-up but I'm no MySQL guy so I can't prove that. I would really like if someone on the list who is a bit more experienced with this stuff could please see if the suspicions raised in the links above are correct, and if so, could the RDO people please provide a workaround to get me back up and running with our test deployment? Thanks! - Kodiak From libosvar at redhat.com Wed Jul 16 12:54:24 2014 From: libosvar at redhat.com (Jakub Libosvar) Date: Wed, 16 Jul 2014 14:54:24 +0200 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: References: Message-ID: <53C67600.20701@redhat.com> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: > Hello, > First go-round with Openstack and first post on the list so bear with me... > > I've been working through the manual installation of RDO using the > docs.openstack installation guide. 
Everything went smoothly for the > most part until Neutron. It appears I've been hit by the same bug(?) > discussed here: > http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here: > https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html > ...among other places. > > Upon first launch of the neutron-server daemon, this appears in the > neutron-server log file: http://paste.openstack.org/show/86614/ > > And once you go into the db you can see that a bunch of tables are not > created that should be. > > As the first link alludes to, it looks like a MyISAM / InnoDB > formatting mix-up but I'm no MySQL guy so I can't prove that. > > I would really like if someone on the list who is a bit more > experienced with this stuff could please see if the suspicions raised > in the links above are correct, and if so, could the RDO people please > provide a workaround to get me back up and running with our test > deployment? > > Thanks! > - Kodiak > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > Hi Kodiak, I think there is a bug in documentation, I'm missing running neutron-db-manage command to create scheme for neutron. Can you please try to 1. stop neutron-server 2. create a new database 3. set connection string in neutron.conf 4. run neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file upgrade head 5. start neutron-server Kuba From kfiresmith at gmail.com Wed Jul 16 14:57:03 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 16 Jul 2014 10:57:03 -0400 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: <53C67600.20701@redhat.com> References: <53C67600.20701@redhat.com> Message-ID: Hello Kuba, Thanks for the reply. I used the ml2 ini file as my core plugin per the docs and did what you mentioned. It resulted in a traceback unfortunately. Here is a specific accounting of what I did: http://paste.openstack.org/show/86756/ So it looks like maybe there is an issue with the ml2 plugin as the openstack docs cover it so far as how it works with the RDO packages. Another admin reports that stuff "just works" in RDO packstack - maybe there is some workaround in Packstack or maybe it uses another driver and not ML2? Thanks again, - Kodiak On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar wrote: > On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >> Hello, >> First go-round with Openstack and first post on the list so bear with me... >> >> I've been working through the manual installation of RDO using the >> docs.openstack installation guide. Everything went smoothly for the >> most part until Neutron. It appears I've been hit by the same bug(?) >> discussed here: >> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here: >> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >> ...among other places. >> >> Upon first launch of the neutron-server daemon, this appears in the >> neutron-server log file: http://paste.openstack.org/show/86614/ >> >> And once you go into the db you can see that a bunch of tables are not >> created that should be. >> >> As the first link alludes to, it looks like a MyISAM / InnoDB >> formatting mix-up but I'm no MySQL guy so I can't prove that. 
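If it helps to test the MyISAM / InnoDB suspicion, the storage engine of every table can be read straight out of information_schema (this assumes the Neutron database is simply named "neutron"; substitute whatever name your connection string actually uses):

mysql -u root -p -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'neutron';"

The tables neutron-db-manage creates should normally all show up as InnoDB; a mix of engines there would back up the suspicion from the links above.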
>> >> I would really like if someone on the list who is a bit more >> experienced with this stuff could please see if the suspicions raised >> in the links above are correct, and if so, could the RDO people please >> provide a workaround to get me back up and running with our test >> deployment? >> >> Thanks! >> - Kodiak >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > Hi Kodiak, > > I think there is a bug in documentation, I'm missing running > neutron-db-manage command to create scheme for neutron. > Can you please try to > 1. stop neutron-server > 2. create a new database > 3. set connection string in neutron.conf > 4. run > neutron-db-manage --config-file /etc/neutron/neutron.conf > --config-file upgrade head > 5. start neutron-server > > Kuba From libosvar at redhat.com Wed Jul 16 15:01:11 2014 From: libosvar at redhat.com (Jakub Libosvar) Date: Wed, 16 Jul 2014 17:01:11 +0200 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: References: <53C67600.20701@redhat.com> Message-ID: <53C693B7.2080303@redhat.com> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: > Hello Kuba, > Thanks for the reply. I used the ml2 ini file as my core plugin per > the docs and did what you mentioned. It resulted in a traceback > unfortunately. > > Here is a specific accounting of what I did: > http://paste.openstack.org/show/86756/ Ah, this is because we don't load full path from entry_points for plugins in neutron-db-manage (we didn't fix this because this dependency is going to be removed soon). Can you please try to change core_plugin in neutron.conf to core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin and re-run neutron-db-manage. Thanks, Kuba > > So it looks like maybe there is an issue with the ml2 plugin as the > openstack docs cover it so far as how it works with the RDO packages. > > Another admin reports that stuff "just works" in RDO packstack - maybe > there is some workaround in Packstack or maybe it uses another driver > and not ML2? > > Thanks again, > - Kodiak > > On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar wrote: >> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >>> Hello, >>> First go-round with Openstack and first post on the list so bear with me... >>> >>> I've been working through the manual installation of RDO using the >>> docs.openstack installation guide. Everything went smoothly for the >>> most part until Neutron. It appears I've been hit by the same bug(?) >>> discussed here: >>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here: >>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >>> ...among other places. >>> >>> Upon first launch of the neutron-server daemon, this appears in the >>> neutron-server log file: http://paste.openstack.org/show/86614/ >>> >>> And once you go into the db you can see that a bunch of tables are not >>> created that should be. >>> >>> As the first link alludes to, it looks like a MyISAM / InnoDB >>> formatting mix-up but I'm no MySQL guy so I can't prove that. >>> >>> I would really like if someone on the list who is a bit more >>> experienced with this stuff could please see if the suspicions raised >>> in the links above are correct, and if so, could the RDO people please >>> provide a workaround to get me back up and running with our test >>> deployment? >>> >>> Thanks! 
>>> - Kodiak >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >> Hi Kodiak, >> >> I think there is a bug in documentation, I'm missing running >> neutron-db-manage command to create scheme for neutron. >> Can you please try to >> 1. stop neutron-server >> 2. create a new database >> 3. set connection string in neutron.conf >> 4. run >> neutron-db-manage --config-file /etc/neutron/neutron.conf >> --config-file upgrade head >> 5. start neutron-server >> >> Kuba From kfiresmith at gmail.com Wed Jul 16 15:15:27 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 16 Jul 2014 11:15:27 -0400 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: <53C693B7.2080303@redhat.com> References: <53C67600.20701@redhat.com> <53C693B7.2080303@redhat.com> Message-ID: Thanks again Kuba! So I think it's gotten farther. I replaced the line on /etc/neutron/neutron.conf: -core_plugin = ml2 +core_plugin = neutron.plugins.ml2.plugin. Ml2Plugin Then I re-ran the neutron-db-manage as seen in the paste below. It's gotten past ml2 and now is erroring out on 'router': http://paste.openstack.org/show/86759/ - Kodiak On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar wrote: > On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: >> Hello Kuba, >> Thanks for the reply. I used the ml2 ini file as my core plugin per >> the docs and did what you mentioned. It resulted in a traceback >> unfortunately. >> >> Here is a specific accounting of what I did: >> http://paste.openstack.org/show/86756/ > > Ah, this is because we don't load full path from entry_points for > plugins in neutron-db-manage (we didn't fix this because this dependency > is going to be removed soon). > > Can you please try to change core_plugin in neutron.conf to > > core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin > > and re-run neutron-db-manage. > > Thanks, > Kuba >> >> So it looks like maybe there is an issue with the ml2 plugin as the >> openstack docs cover it so far as how it works with the RDO packages. >> >> Another admin reports that stuff "just works" in RDO packstack - maybe >> there is some workaround in Packstack or maybe it uses another driver >> and not ML2? >> >> Thanks again, >> - Kodiak >> >> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar wrote: >>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >>>> Hello, >>>> First go-round with Openstack and first post on the list so bear with me... >>>> >>>> I've been working through the manual installation of RDO using the >>>> docs.openstack installation guide. Everything went smoothly for the >>>> most part until Neutron. It appears I've been hit by the same bug(?) >>>> discussed here: >>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here: >>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >>>> ...among other places. >>>> >>>> Upon first launch of the neutron-server daemon, this appears in the >>>> neutron-server log file: http://paste.openstack.org/show/86614/ >>>> >>>> And once you go into the db you can see that a bunch of tables are not >>>> created that should be. >>>> >>>> As the first link alludes to, it looks like a MyISAM / InnoDB >>>> formatting mix-up but I'm no MySQL guy so I can't prove that. 
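For what it's worth, the second --config-file argument in step 4 above is where the plugin configuration file normally goes; on an ML2 setup that would typically look something like:

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

(or /etc/neutron/plugin.ini if that symlink already points at the ML2 file on your system).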
>>>> >>>> I would really like if someone on the list who is a bit more >>>> experienced with this stuff could please see if the suspicions raised >>>> in the links above are correct, and if so, could the RDO people please >>>> provide a workaround to get me back up and running with our test >>>> deployment? >>>> >>>> Thanks! >>>> - Kodiak >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>> Hi Kodiak, >>> >>> I think there is a bug in documentation, I'm missing running >>> neutron-db-manage command to create scheme for neutron. >>> Can you please try to >>> 1. stop neutron-server >>> 2. create a new database >>> 3. set connection string in neutron.conf >>> 4. run >>> neutron-db-manage --config-file /etc/neutron/neutron.conf >>> --config-file upgrade head >>> 5. start neutron-server >>> >>> Kuba > From ben42ml at gmail.com Wed Jul 16 15:28:42 2014 From: ben42ml at gmail.com (Benoit ML) Date: Wed, 16 Jul 2014 17:28:42 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Hello, Another mail about the problem.... Well i have enable debug = True in keystone.conf And after a nova migrate , when i nova show : ============================================================================== | fault | {"message": "Remote error: Unauthorized {\"error\": {\"message\": \"User 0b45ccc267e04b59911e88381bb450c0 is unauthorized for tenant services\", \"code\": 401, \"title\": \"Unauthorized\"}} | ============================================================================== So well User with id 0b45ccc267e04b59911e88381bb450c0 is neutron : ============================================================================== keystone user-list | 0b45ccc267e04b59911e88381bb450c0 | neutron | True | | ============================================================================== And the role seems good : ============================================================================== keystone user-role-add --user=neutron --tenant=services --role=admin Conflict occurred attempting to store role grant. User 0b45ccc267e04b59911e88381bb450c0 already has role 734c2fb6fb444792b5ede1fa1e17fb7e in tenant dea82f7937064b6da1c370280d8bfdad (HTTP 409) keystone user-role-list --user neutron --tenant services +----------------------------------+-------+----------------------------------+----------------------------------+ | id | name | user_id | tenant_id | +----------------------------------+-------+----------------------------------+----------------------------------+ | 734c2fb6fb444792b5ede1fa1e17fb7e | admin | 0b45ccc267e04b59911e88381bb450c0 | dea82f7937064b6da1c370280d8bfdad | +----------------------------------+-------+----------------------------------+----------------------------------+ keystone tenant-list +----------------------------------+----------+---------+ | id | name | enabled | +----------------------------------+----------+---------+ | e250f7573010415da6f191e0b53faae5 | admin | True | | fa30c6bdd56e45dea48dfbe9c3ee8782 | exploit | True | | dea82f7937064b6da1c370280d8bfdad | services | True | +----------------------------------+----------+---------+ ============================================================================== Really i didn't see where is my mistake ... can you help me plz ? Thank you in advance ! 
Regards, 2014-07-15 15:13 GMT+02:00 Benoit ML : > Hello again, > > Ok on controller node I modify the neutron server configuration with > nova_admin_tenant_id = f23ed5be5f534fdba31d23f60621347d > > where id is "services" in keystone and now it's working with "vif_plugging_is_fatal > = True". Good thing. > > Well by the way the migrate doesnt working ... > > > > > 2014-07-15 14:20 GMT+02:00 Benoit ML : > > Hello, >> >> Thank you for taking time ! >> >> Well on the compute node, when i activate "vif_plugging_is_fatal = True", >> the vm creation stuck in spawning state, and in neutron server log i have : >> >> ======================================= >> 2014-07-15 14:12:52.351 18448 DEBUG neutron.notifiers.nova [-] Sending >> events: [{'status': 'completed', 'tag': >> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', >> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] send_events >> /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:218 >> 2014-07-15 14:12:52.354 18448 INFO urllib3.connectionpool [-] Starting >> new HTTP connection (1): localhost >> 2014-07-15 14:12:52.360 18448 DEBUG urllib3.connectionpool [-] "POST >> /v2/5c9c186a909e499e9da0dd5cf2c403e0/os-server-external-events HTTP/1.1" >> 401 23 _make_request >> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 >> 2014-07-15 14:12:52.362 18448 INFO urllib3.connectionpool [-] Starting >> new HTTP connection (1): localhost >> 2014-07-15 14:12:52.452 18448 DEBUG urllib3.connectionpool [-] "POST >> /v2.0/tokens HTTP/1.1" 401 114 _make_request >> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 >> 2014-07-15 14:12:52.453 18448 ERROR neutron.notifiers.nova [-] Failed to >> notify nova on events: [{'status': 'completed', 'tag': >> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', >> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Traceback >> (most recent call last): >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >> "/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py", line 221, in >> send_events >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >> batched_events) >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >> "/usr/lib/python2.7/site-packages/novaclient/v1_1/contrib/server_external_events.py", >> line 39, in create >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >> return_raw=True) >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >> "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova _resp, >> body = self.api.client.post(url, body=body) >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova return >> self._cs_request(url, 'POST', **kwargs) >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 301, in >> _cs_request >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova raise e >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Unauthorized: >> Unauthorized (HTTP 401) >> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >> 2014-07-15 14:12:58.321 18448 DEBUG neutron.openstack.common.rpc.amqp [-] >> received {u'_context_roles': [u'admin'], 
u'_context_request_id': >> u'req-9bf35c42-3477-4ed3-8092-af729c21198c', u'_context_read_deleted': >> u'no', u'_context_user_name': None, u'_context_project_name': None, >> u'namespace': None, u'_context_tenant_id': None, u'args': {u'agent_state': >> {u'agent_state': {u'topic': u'N/A', u'binary': >> u'neutron-openvswitch-agent', u'host': u'pvidgsh006.pvi', u'agent_type': >> u'Open vSwitch agent', u'configurations': {u'tunnel_types': [u'vxlan'], >> u'tunneling_ip': u'192.168.40.5', u'bridge_mappings': {}, u'l2_population': >> False, u'devices': 1}}}, u'time': u'2014-07-15T12:12:58.313995'}, >> u'_context_tenant': None, u'_unique_id': >> u'7c9a4dfcd256494caf6e1327c8051e29', u'_context_is_admin': True, >> u'version': u'1.0', u'_context_timestamp': u'2014-07-15 12:01:28.190772', >> u'_context_tenant_name': None, u'_context_user': None, u'_context_user_id': >> None, u'method': u'report_state', u'_context_project_id': None} _safe_log >> /usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/common.py:280 >> ======================================= >> >> Well I'm supposed it's related ... Perhaps with those options in >> neutron.conf : >> ====================================== >> notify_nova_on_port_status_changes = True >> notify_nova_on_port_data_changes = True >> nova_url = http://localhost:8774/v2 >> nova_admin_tenant_name = services >> nova_admin_username = nova >> nova_admin_password = nova >> nova_admin_auth_url = http://localhost:35357/v2.0 >> ====================================== >> >> But well didnt see anything wrong ... >> >> Thank you in advance ! >> >> Regards, >> >> >> >> 2014-07-11 16:08 GMT+02:00 Vimal Kumar : >> >> ----- >>> File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line >>> 239, in authenticate\\n content_type="application/json")\\n\', u\' File >>> "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in >>> _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\', >>> u\'Unauthorized: {"error": {"message": "The request you have made requires >>> authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n'] >>> ----- >>> >>> Looks like HTTP connection to neutron server is resulting in 401 error. >>> >>> Try enabling debug mode for neutron server and then tail >>> /var/log/neutron/server.log , hopefully you should get more info. >>> >>> >>> On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML wrote: >>> >>>> Hello, >>>> >>>> Ok I see. Nova telles neutron/openvswitch to create the bridge qbr >>>> prior to the migration itself. >>>> I ve already activate debug and verbose ... But well i'm really stuck, >>>> dont know how and where to search/look ... >>>> >>>> >>>> >>>> Regards, >>>> >>>> >>>> >>>> >>>> >>>> 2014-07-11 15:09 GMT+02:00 Miguel Angel : >>>> >>>> Hi Benoit, >>>>> >>>>> A manual virsh migration should fail, because the >>>>> network ports are not migrated to the destination host. >>>>> >>>>> You must investigate on the authentication problem itself, >>>>> and let nova handle all the underlying API calls which should happen... 
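One way to dig into the authentication problem directly is to request a token with exactly the credentials Neutron uses for its Nova callbacks (these are the nova_admin_* values from the neutron.conf snippet quoted earlier; the password below is a placeholder for whatever you configured):

keystone --os-username nova --os-password <nova_admin_password> --os-tenant-name services --os-auth-url http://localhost:35357/v2.0 token-get

If that also comes back with a 401, the credentials or the tenant assignment are at fault rather than anything in Neutron or Nova themselves.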
>>>>> >>>>> May be it's worth setting nova.conf to debug=True >>>>> >>>>> >>>>> >>>>> --- >>>>> irc: ajo / mangelajo >>>>> Miguel Angel Ajo Pelayo >>>>> +34 636 52 25 69 >>>>> skype: ajoajoajo >>>>> >>>>> >>>>> 2014-07-11 14:41 GMT+02:00 Benoit ML : >>>>> >>>>> Hello, >>>>>> >>>>>> cat /etc/redhat-release >>>>>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0) >>>>>> >>>>>> >>>>>> Regards, >>>>>> >>>>>> >>>>>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets >>>>>> : >>>>>> >>>>>> Could you please post /etc/redhat-release. >>>>>>> >>>>>>> Boris. >>>>>>> >>>>>>> ------------------------------ >>>>>>> Date: Fri, 11 Jul 2014 11:57:12 +0200 >>>>>>> From: ben42ml at gmail.com >>>>>>> To: rdo-list at redhat.com >>>>>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration >>>>>>> failed because of "network qbr no such device" >>>>>>> >>>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> I'm working on a multi-node setup of openstack Icehouse using >>>>>>> centos7. >>>>>>> Well i have : >>>>>>> - one controllor node with all server services thing stuff >>>>>>> - one network node with openvswitch agent, l3-agent, dhcp-agent >>>>>>> - two compute node with nova-compute and neutron-openvswitch >>>>>>> - one storage nfs node >>>>>>> >>>>>>> NetworkManager is deleted on compute nodes and network node. >>>>>>> >>>>>>> My network use is configured to use vxlan. I can create VM, >>>>>>> tenant-network, external-network, routeur, assign floating-ip to VM, push >>>>>>> ssh-key into VM, create volume from glance image, etc... Evrything is >>>>>>> conected and reacheable. Pretty cool :) >>>>>>> >>>>>>> But when i try to migrate VM things go wrong ... I have configured >>>>>>> nova, libvirtd and qemu to use migration through libvirt-tcp. >>>>>>> I have create and exchanged ssh-key for nova user on all node. I >>>>>>> have verified userid and groupid of nova. >>>>>>> >>>>>>> Well nova-compute log, on the target compute node, : >>>>>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: >>>>>>> a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >>>>>>> Unauthorized {"error": {"m >>>>>>> essage": "The request you have made requires authentication.", >>>>>>> "code": 401, "title": "Unauthorized"}} >>>>>>> >>>>>>> >>>>>>> So well after searching a lots in all logs, i have fount that i cant >>>>>>> simply migration VM between compute node with a simple virsh : >>>>>>> virsh migrate instance-00000084 qemu+tcp:///system >>>>>>> >>>>>>> The error is : >>>>>>> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device >>>>>>> >>>>>>> Well when i look on the source hyperviseur the bridge "qbr3ca65809" >>>>>>> exists and have a network tap device. And moreover i manually create >>>>>>> qbr3ca65809 on the target hypervisor, virsh migrate succed ! >>>>>>> >>>>>>> Can you help me plz ? >>>>>>> What can i do wrong ? Perhpas neutron must create the bridge before >>>>>>> migration but didnt for a mis configuration ? >>>>>>> >>>>>>> Plz ask anything you need ! >>>>>>> >>>>>>> Thank you in advance. >>>>>>> >>>>>>> >>>>>>> The full nova-compute log attached. 
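Since the 401 shows up on the notification path from Neutron to Nova, it is also worth cross-checking that the tenant id Neutron sends really is the id of the "services" tenant (openstack-config comes from the openstack-utils package; reading neutron.conf in an editor shows the same thing):

openstack-config --get /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id
keystone tenant-list

The id printed by the first command should match the one keystone lists for "services".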
>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> -- >>>>>>> -- >>>>>>> Benoit >>>>>>> >>>>>>> _______________________________________________ Rdo-list mailing >>>>>>> list Rdo-list at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> -- >>>>>> Benoit >>>>>> >>>>>> _______________________________________________ >>>>>> Rdo-list mailing list >>>>>> Rdo-list at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>> >>>>>> >>>>> >>>> >>>> >>>> -- >>>> -- >>>> Benoit >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> >>> >> >> >> -- >> -- >> Benoit >> > > > > -- > -- > Benoit > -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From kfiresmith at gmail.com Wed Jul 16 16:34:42 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 16 Jul 2014 12:34:42 -0400 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: References: <53C67600.20701@redhat.com> <53C693B7.2080303@redhat.com> Message-ID: Further modifying /etc/neutron/neutron.conf as follows allowed the neutron-db-manage goodness to happen: -service_plugins = router +service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin # neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head No handlers could be found for logger "neutron.common.legacy" INFO [alembic.migration] Context impl MySQLImpl. INFO [alembic.migration] Will assume non-transactional DDL. INFO [alembic.migration] Running upgrade None -> folsom INFO [alembic.migration] Running upgrade folsom -> 2c4af419145b ... INFO [alembic.migration] Running upgrade 1341ed32cc1e -> grizzly INFO [alembic.migration] Running upgrade grizzly -> f489cf14a79c INFO [alembic.migration] Running upgrade f489cf14a79c -> 176a85fc7d79 ... INFO [alembic.migration] Running upgrade 49f5e553f61f -> 40b0aff0302e INFO [alembic.migration] Running upgrade 40b0aff0302e -> havana INFO [alembic.migration] Running upgrade havana -> e197124d4b9 ... INFO [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051 INFO [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse I am now cautiously optimistic that I'm back on track - will report back with success fail. If success I'll submit a documentation bug to the docs.openstack people. Here's my tables now: http://paste.openstack.org/show/86776/ Thanks a million! - Kodiak On Wed, Jul 16, 2014 at 11:15 AM, Kodiak Firesmith wrote: > Thanks again Kuba! > > So I think it's gotten farther. I replaced the line on > /etc/neutron/neutron.conf: > > -core_plugin = ml2 > +core_plugin = neutron.plugins.ml2.plugin. > Ml2Plugin > > Then I re-ran the neutron-db-manage as seen in the paste below. It's > gotten past ml2 and now is erroring out on 'router': > > http://paste.openstack.org/show/86759/ > > > - Kodiak > > On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar wrote: >> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: >>> Hello Kuba, >>> Thanks for the reply. I used the ml2 ini file as my core plugin per >>> the docs and did what you mentioned. It resulted in a traceback >>> unfortunately. 
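A couple of quick sanity checks at this point (keystonerc_admin here is simply whatever file holds your admin credentials; adjust the name to your environment):

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini current
source keystonerc_admin
neutron net-list

The first command should report the icehouse revision as current, and net-list should come back with an empty list rather than an error on a fresh install.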
>>> >>> Here is a specific accounting of what I did: >>> http://paste.openstack.org/show/86756/ >> >> Ah, this is because we don't load full path from entry_points for >> plugins in neutron-db-manage (we didn't fix this because this dependency >> is going to be removed soon). >> >> Can you please try to change core_plugin in neutron.conf to >> >> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin >> >> and re-run neutron-db-manage. >> >> Thanks, >> Kuba >>> >>> So it looks like maybe there is an issue with the ml2 plugin as the >>> openstack docs cover it so far as how it works with the RDO packages. >>> >>> Another admin reports that stuff "just works" in RDO packstack - maybe >>> there is some workaround in Packstack or maybe it uses another driver >>> and not ML2? >>> >>> Thanks again, >>> - Kodiak >>> >>> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar wrote: >>>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >>>>> Hello, >>>>> First go-round with Openstack and first post on the list so bear with me... >>>>> >>>>> I've been working through the manual installation of RDO using the >>>>> docs.openstack installation guide. Everything went smoothly for the >>>>> most part until Neutron. It appears I've been hit by the same bug(?) >>>>> discussed here: >>>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here: >>>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >>>>> ...among other places. >>>>> >>>>> Upon first launch of the neutron-server daemon, this appears in the >>>>> neutron-server log file: http://paste.openstack.org/show/86614/ >>>>> >>>>> And once you go into the db you can see that a bunch of tables are not >>>>> created that should be. >>>>> >>>>> As the first link alludes to, it looks like a MyISAM / InnoDB >>>>> formatting mix-up but I'm no MySQL guy so I can't prove that. >>>>> >>>>> I would really like if someone on the list who is a bit more >>>>> experienced with this stuff could please see if the suspicions raised >>>>> in the links above are correct, and if so, could the RDO people please >>>>> provide a workaround to get me back up and running with our test >>>>> deployment? >>>>> >>>>> Thanks! >>>>> - Kodiak >>>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>> Hi Kodiak, >>>> >>>> I think there is a bug in documentation, I'm missing running >>>> neutron-db-manage command to create scheme for neutron. >>>> Can you please try to >>>> 1. stop neutron-server >>>> 2. create a new database >>>> 3. set connection string in neutron.conf >>>> 4. run >>>> neutron-db-manage --config-file /etc/neutron/neutron.conf >>>> --config-file upgrade head >>>> 5. start neutron-server >>>> >>>> Kuba >> From adam.huffman at gmail.com Wed Jul 16 16:35:40 2014 From: adam.huffman at gmail.com (Adam Huffman) Date: Wed, 16 Jul 2014 17:35:40 +0100 Subject: [Rdo-list] Glance/Keystone problem In-Reply-To: References: <53C6339A.6030307@redhat.com> Message-ID: Hi Flavio, Thanks for looking. In the end, the cause here was an omission in the api-paste file for Keystone, now fixed. Best Wishes, Adam On Wed, Jul 16, 2014 at 5:35 PM, Adam Huffman wrote: > Hi Flavio, > > Thanks for looking. In the end, the cause here was an omission in the > api-paste file for Keystone, now fixed. 
> > Best Wishes, > Adam > > On Wed, Jul 16, 2014 at 9:11 AM, Flavio Percoco wrote: >> On 07/15/2014 03:32 PM, Adam Huffman wrote: >>> I've altered Keystone on my Icehouse cloud to use Apache/mod_ssl. The >>> Keystone and Nova clients are working (more or less) but I'm having >>> trouble with Glance. >> >> Hi Adam, >> >> We'd need your config files to have a better idea of what the issue >> could be. Based on the logs you just sent, keystone's middleware can't >> find/load the certification file: >> >> "Unable to load certificate. Ensure your system is configured properly" >> >> Some things you could check: >> >> 1. Is the file path in your config file correct? >> 2. Is the config option name correct? >> 3. Is the file readable? >> >> Hope the above helps, >> Flavio >> >> >>> >>> Here's an example of the sort of error I'm seeing from the Glance api.log: >>> >>> >>> 2014-07-15 14:24:00.551 24063 DEBUG >>> glance.api.middleware.version_negotiation [-] Determining version of >>> request: GET /v1/shared-images/e35356df747b4c5aa663fae2897facba >>> Accept: process_request >>> /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:44 >>> 2014-07-15 14:24:00.552 24063 DEBUG >>> glance.api.middleware.version_negotiation [-] Using url versioning >>> process_request >>> /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:57 >>> 2014-07-15 14:24:00.552 24063 DEBUG >>> glance.api.middleware.version_negotiation [-] Matched version: v1 >>> process_request >>> /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:69 >>> 2014-07-15 14:24:00.552 24063 DEBUG >>> glance.api.middleware.version_negotiation [-] new path >>> /v1/shared-images/e35356df747b4c5aa663fae2897facba process_request >>> /usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:70 >>> 2014-07-15 14:24:00.553 24063 DEBUG >>> keystoneclient.middleware.auth_token [-] Authenticating user token >>> __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:666 >>> 2014-07-15 14:24:00.553 24063 DEBUG >>> keystoneclient.middleware.auth_token [-] Removing headers from request >>> environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role >>> _remove_auth_headers >>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:725 >>> 2014-07-15 14:24:00.591 24063 INFO urllib3.connectionpool [-] Starting >>> new HTTPS connection (1): >>> 2014-07-15 14:24:01.921 24063 DEBUG urllib3.connectionpool [-] "POST >>> /v2.0/tokens HTTP/1.1" 200 7003 _make_request >>> /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 >>> 2014-07-15 14:24:01.931 24063 INFO urllib3.connectionpool [-] Starting >>> new HTTPS connection (1): >>> 2014-07-15 14:24:03.243 24063 DEBUG urllib3.connectionpool [-] "GET >>> /v2.0/tokens/revoked HTTP/1.1" 200 682 _make_request >>> /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 >>> 2014-07-15 14:24:03.252 24063 INFO urllib3.connectionpool [-] Starting >>> new HTTPS connection (1): >>> 2014-07-15 14:24:04.529 24063 DEBUG urllib3.connectionpool [-] "GET / >>> HTTP/1.1" 300 384 _make_request >>> /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 >>> 2014-07-15 14:24:04.530 24063 DEBUG >>> keystoneclient.middleware.auth_token [-] Server reports support for >>> api versions: v3.0 
_get_supported_versions >>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:656 >>> 2014-07-15 14:24:04.531 24063 INFO >>> keystoneclient.middleware.auth_token [-] Auth Token confirmed use of >>> v3.0 apis >>> 2014-07-15 14:24:04.531 24063 INFO urllib3.connectionpool [-] Starting >>> new HTTPS connection (1): >>> 2014-07-15 14:24:04.667 24063 DEBUG urllib3.connectionpool [-] "GET >>> /v3/OS-SIMPLE-CERT/certificates HTTP/1.1" 404 93 _make_request >>> /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295 >>> 2014-07-15 14:24:04.669 24063 DEBUG >>> keystoneclient.middleware.auth_token [-] Token validation failure. >>> _validate_user_token >>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:943 >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token Traceback (most recent call >>> last): >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 930, in _validate_user_token >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token verified = >>> self.verify_signed_token(user_token, token_ids) >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1347, in verify_signed_token >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token if >>> self.is_signed_token_revoked(token_ids): >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1299, in is_signed_token_revoked >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token if >>> self._is_token_id_in_revoked_list(token_id): >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1306, in _is_token_id_in_revoked_list >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token revocation_list = >>> self.token_revocation_list >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1413, in token_revocation_list >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token self.token_revocation_list = >>> self.fetch_revocation_list() >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1459, in fetch_revocation_list >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token return >>> self.cms_verify(data['signed']) >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1333, in cms_verify >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token self.fetch_signing_cert() >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1477, in fetch_signing_cert >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token >>> 
self._fetch_cert_file(self.signing_cert_file_name, 'signing') >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token File >>> "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", >>> line 1473, in _fetch_cert_file >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token raise >>> exceptions.CertificateConfigError(response.text) >>> 2014-07-15 14:24:04.669 24063 TRACE >>> keystoneclient.middleware.auth_token CertificateConfigError: Unable to >>> load certificate. Ensure your system is configured properly. >>> 2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token >>> 2014-07-15 14:24:04.671 24063 DEBUG >>> keystoneclient.middleware.auth_token [-] Marking token as unauthorized >>> in cache _cache_store_invalid >>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1239 >>> 2014-07-15 14:24:04.672 24063 WARNING >>> keystoneclient.middleware.auth_token [-] Authorization failed for >>> token >>> 2014-07-15 14:24:04.672 24063 INFO >>> keystoneclient.middleware.auth_token [-] Invalid user token - >>> deferring reject downstream >>> 2014-07-15 14:24:04.674 24063 INFO glance.wsgi.server [-] >>> - - [15/Jul/2014 14:24:04] "GET >>> /v1/shared-images/e35356df747b4c5aa663fae2897facba HTTP/1.1" 401 381 >>> 4.124231 >>> >>> There is a bug report about a race condition involving Cinder, but >>> that was supposed to have been fixed. >>> >>> Any suggestions appreciated. >>> >>> Best Wishes, >>> Adam >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >> >> >> -- >> @flaper87 >> Flavio Percoco >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list From kfiresmith at gmail.com Wed Jul 16 16:58:41 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 16 Jul 2014 12:58:41 -0400 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: References: <53C67600.20701@redhat.com> <53C693B7.2080303@redhat.com> Message-ID: Of course setting up Neutron has taken Horizon offline: http://paste.openstack.org/show/86778/ - Kodiak On Wed, Jul 16, 2014 at 12:34 PM, Kodiak Firesmith wrote: > Further modifying /etc/neutron/neutron.conf as follows allowed the > neutron-db-manage goodness to happen: > > -service_plugins = router > +service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin > > # neutron-db-manage --config-file /etc/neutron/neutron.conf > --config-file /etc/neutron/plugin.ini upgrade head > No handlers could be found for logger "neutron.common.legacy" > INFO [alembic.migration] Context impl MySQLImpl. > INFO [alembic.migration] Will assume non-transactional DDL. > INFO [alembic.migration] Running upgrade None -> folsom > INFO [alembic.migration] Running upgrade folsom -> 2c4af419145b > ... > INFO [alembic.migration] Running upgrade 1341ed32cc1e -> grizzly > INFO [alembic.migration] Running upgrade grizzly -> f489cf14a79c > INFO [alembic.migration] Running upgrade f489cf14a79c -> 176a85fc7d79 > ... > INFO [alembic.migration] Running upgrade 49f5e553f61f -> 40b0aff0302e > INFO [alembic.migration] Running upgrade 40b0aff0302e -> havana > INFO [alembic.migration] Running upgrade havana -> e197124d4b9 > ... 
> INFO [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051 > INFO [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse > > I am now cautiously optimistic that I'm back on track - will report > back with success fail. If success I'll submit a documentation bug to > the docs.openstack people. > > Here's my tables now: > http://paste.openstack.org/show/86776/ > > Thanks a million! > > - Kodiak > > On Wed, Jul 16, 2014 at 11:15 AM, Kodiak Firesmith wrote: >> Thanks again Kuba! >> >> So I think it's gotten farther. I replaced the line on >> /etc/neutron/neutron.conf: >> >> -core_plugin = ml2 >> +core_plugin = neutron.plugins.ml2.plugin. >> Ml2Plugin >> >> Then I re-ran the neutron-db-manage as seen in the paste below. It's >> gotten past ml2 and now is erroring out on 'router': >> >> http://paste.openstack.org/show/86759/ >> >> >> - Kodiak >> >> On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar wrote: >>> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: >>>> Hello Kuba, >>>> Thanks for the reply. I used the ml2 ini file as my core plugin per >>>> the docs and did what you mentioned. It resulted in a traceback >>>> unfortunately. >>>> >>>> Here is a specific accounting of what I did: >>>> http://paste.openstack.org/show/86756/ >>> >>> Ah, this is because we don't load full path from entry_points for >>> plugins in neutron-db-manage (we didn't fix this because this dependency >>> is going to be removed soon). >>> >>> Can you please try to change core_plugin in neutron.conf to >>> >>> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin >>> >>> and re-run neutron-db-manage. >>> >>> Thanks, >>> Kuba >>>> >>>> So it looks like maybe there is an issue with the ml2 plugin as the >>>> openstack docs cover it so far as how it works with the RDO packages. >>>> >>>> Another admin reports that stuff "just works" in RDO packstack - maybe >>>> there is some workaround in Packstack or maybe it uses another driver >>>> and not ML2? >>>> >>>> Thanks again, >>>> - Kodiak >>>> >>>> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar wrote: >>>>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >>>>>> Hello, >>>>>> First go-round with Openstack and first post on the list so bear with me... >>>>>> >>>>>> I've been working through the manual installation of RDO using the >>>>>> docs.openstack installation guide. Everything went smoothly for the >>>>>> most part until Neutron. It appears I've been hit by the same bug(?) >>>>>> discussed here: >>>>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here: >>>>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >>>>>> ...among other places. >>>>>> >>>>>> Upon first launch of the neutron-server daemon, this appears in the >>>>>> neutron-server log file: http://paste.openstack.org/show/86614/ >>>>>> >>>>>> And once you go into the db you can see that a bunch of tables are not >>>>>> created that should be. >>>>>> >>>>>> As the first link alludes to, it looks like a MyISAM / InnoDB >>>>>> formatting mix-up but I'm no MySQL guy so I can't prove that. >>>>>> >>>>>> I would really like if someone on the list who is a bit more >>>>>> experienced with this stuff could please see if the suspicions raised >>>>>> in the links above are correct, and if so, could the RDO people please >>>>>> provide a workaround to get me back up and running with our test >>>>>> deployment? >>>>>> >>>>>> Thanks! 
>>>>>> - Kodiak >>>>>> >>>>>> _______________________________________________ >>>>>> Rdo-list mailing list >>>>>> Rdo-list at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>> >>>>> Hi Kodiak, >>>>> >>>>> I think there is a bug in documentation, I'm missing running >>>>> neutron-db-manage command to create scheme for neutron. >>>>> Can you please try to >>>>> 1. stop neutron-server >>>>> 2. create a new database >>>>> 3. set connection string in neutron.conf >>>>> 4. run >>>>> neutron-db-manage --config-file /etc/neutron/neutron.conf >>>>> --config-file upgrade head >>>>> 5. start neutron-server >>>>> >>>>> Kuba >>> From mkassawara at gmail.com Wed Jul 16 17:06:03 2014 From: mkassawara at gmail.com (Matt Kassawara) Date: Wed, 16 Jul 2014 12:06:03 -0500 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: References: <53C67600.20701@redhat.com> <53C693B7.2080303@redhat.com> Message-ID: For pre-release Icehouse packages on all distributions, the installation guide included steps to run neutron-db-manage. However, while testing the installation guide with release package, we found that fresh installation on all distributions no longer requires these steps. Only upgrading from prior releases (covered in the operations guide) requires these steps. Did something change in the RDO packages post-release to make neutron-db-manage steps necessary again? On Wed, Jul 16, 2014 at 11:58 AM, Kodiak Firesmith wrote: > Of course setting up Neutron has taken Horizon offline: > > http://paste.openstack.org/show/86778/ > > > - Kodiak > > On Wed, Jul 16, 2014 at 12:34 PM, Kodiak Firesmith > wrote: > > Further modifying /etc/neutron/neutron.conf as follows allowed the > > neutron-db-manage goodness to happen: > > > > -service_plugins = router > > +service_plugins = > neutron.services.l3_router.l3_router_plugin.L3RouterPlugin > > > > # neutron-db-manage --config-file /etc/neutron/neutron.conf > > --config-file /etc/neutron/plugin.ini upgrade head > > No handlers could be found for logger "neutron.common.legacy" > > INFO [alembic.migration] Context impl MySQLImpl. > > INFO [alembic.migration] Will assume non-transactional DDL. > > INFO [alembic.migration] Running upgrade None -> folsom > > INFO [alembic.migration] Running upgrade folsom -> 2c4af419145b > > ... > > INFO [alembic.migration] Running upgrade 1341ed32cc1e -> grizzly > > INFO [alembic.migration] Running upgrade grizzly -> f489cf14a79c > > INFO [alembic.migration] Running upgrade f489cf14a79c -> 176a85fc7d79 > > ... > > INFO [alembic.migration] Running upgrade 49f5e553f61f -> 40b0aff0302e > > INFO [alembic.migration] Running upgrade 40b0aff0302e -> havana > > INFO [alembic.migration] Running upgrade havana -> e197124d4b9 > > ... > > INFO [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051 > > INFO [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse > > > > I am now cautiously optimistic that I'm back on track - will report > > back with success fail. If success I'll submit a documentation bug to > > the docs.openstack people. > > > > Here's my tables now: > > http://paste.openstack.org/show/86776/ > > > > Thanks a million! > > > > - Kodiak > > > > On Wed, Jul 16, 2014 at 11:15 AM, Kodiak Firesmith > wrote: > >> Thanks again Kuba! > >> > >> So I think it's gotten farther. I replaced the line on > >> /etc/neutron/neutron.conf: > >> > >> -core_plugin = ml2 > >> +core_plugin = neutron.plugins.ml2.plugin. 
> >> Ml2Plugin > >> > >> Then I re-ran the neutron-db-manage as seen in the paste below. It's > >> gotten past ml2 and now is erroring out on 'router': > >> > >> http://paste.openstack.org/show/86759/ > >> > >> > >> - Kodiak > >> > >> On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar > wrote: > >>> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: > >>>> Hello Kuba, > >>>> Thanks for the reply. I used the ml2 ini file as my core plugin per > >>>> the docs and did what you mentioned. It resulted in a traceback > >>>> unfortunately. > >>>> > >>>> Here is a specific accounting of what I did: > >>>> http://paste.openstack.org/show/86756/ > >>> > >>> Ah, this is because we don't load full path from entry_points for > >>> plugins in neutron-db-manage (we didn't fix this because this > dependency > >>> is going to be removed soon). > >>> > >>> Can you please try to change core_plugin in neutron.conf to > >>> > >>> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin > >>> > >>> and re-run neutron-db-manage. > >>> > >>> Thanks, > >>> Kuba > >>>> > >>>> So it looks like maybe there is an issue with the ml2 plugin as the > >>>> openstack docs cover it so far as how it works with the RDO packages. > >>>> > >>>> Another admin reports that stuff "just works" in RDO packstack - maybe > >>>> there is some workaround in Packstack or maybe it uses another driver > >>>> and not ML2? > >>>> > >>>> Thanks again, > >>>> - Kodiak > >>>> > >>>> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar > wrote: > >>>>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: > >>>>>> Hello, > >>>>>> First go-round with Openstack and first post on the list so bear > with me... > >>>>>> > >>>>>> I've been working through the manual installation of RDO using the > >>>>>> docs.openstack installation guide. Everything went smoothly for the > >>>>>> most part until Neutron. It appears I've been hit by the same > bug(?) > >>>>>> discussed here: > >>>>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and > here: > >>>>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html > >>>>>> ...among other places. > >>>>>> > >>>>>> Upon first launch of the neutron-server daemon, this appears in the > >>>>>> neutron-server log file: http://paste.openstack.org/show/86614/ > >>>>>> > >>>>>> And once you go into the db you can see that a bunch of tables are > not > >>>>>> created that should be. > >>>>>> > >>>>>> As the first link alludes to, it looks like a MyISAM / InnoDB > >>>>>> formatting mix-up but I'm no MySQL guy so I can't prove that. > >>>>>> > >>>>>> I would really like if someone on the list who is a bit more > >>>>>> experienced with this stuff could please see if the suspicions > raised > >>>>>> in the links above are correct, and if so, could the RDO people > please > >>>>>> provide a workaround to get me back up and running with our test > >>>>>> deployment? > >>>>>> > >>>>>> Thanks! > >>>>>> - Kodiak > >>>>>> > >>>>>> _______________________________________________ > >>>>>> Rdo-list mailing list > >>>>>> Rdo-list at redhat.com > >>>>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>>>> > >>>>> Hi Kodiak, > >>>>> > >>>>> I think there is a bug in documentation, I'm missing running > >>>>> neutron-db-manage command to create scheme for neutron. > >>>>> Can you please try to > >>>>> 1. stop neutron-server > >>>>> 2. create a new database > >>>>> 3. set connection string in neutron.conf > >>>>> 4. run > >>>>> neutron-db-manage --config-file /etc/neutron/neutron.conf > >>>>> --config-file upgrade head > >>>>> 5. 
start neutron-server > >>>>> > >>>>> Kuba > >>> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frizop at gmail.com Wed Jul 16 17:21:13 2014 From: frizop at gmail.com (Nathan M.) Date: Wed, 16 Jul 2014 10:21:13 -0700 Subject: [Rdo-list] cinder doesn't find volgroup/cinder-volumes Message-ID: > > Are you able to reproduce this issue (assuming you > can consistently) on current latest IceHouse RDO packages? You haven't > noted the versions you're using. ?rpm -qa | grep icehouse rdo-release-icehouse-4.noarch Maybe you don't really have enough free space there? I don't have a > Cinder setup to do a sanity check, you might want to ensure if you have > your Cinder filter scheduler configured correctly. ?I created another vg and gave it 30 gigs just in case: [root at node1]/etc/cinder# (openstack_admin)] vgdisplay cinder-volumes --- Volume group --- VG Name cinder-volumes System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 1 VG Access read/write VG Status resizable MAX LV 0 Cur LV 0 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 30.00 GiB PE Size 4.00 MiB Total PE 7679 Alloc PE / Size 0 / 0 Free PE / Size 7679 / 30.00 GiB VG UUID huXkqD-3JIm-Fasr-Gkwc-EPVP-n15c-KIvBRc? > > 2014-07-11 12:34:43.325 8665 ERROR cinder.scheduler.flows.create_volume > > [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b > 555e3e826c9f445c9975d0e1c6e00fc6 > > ff6a2b534e984db58313ae194b2d908c - - -] Failed to schedule_create_volume: > > No valid host was found. > > 2014-07-11 12:35:16.253 8665 WARNING cinder.context [-] Arguments dropped > > when creating context: {'user': None, 'tenant': None, 'user_identity': > u'- > > - - - -'} > > > > -- > /kashyap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frizop at gmail.com Wed Jul 16 17:23:18 2014 From: frizop at gmail.com (Nathan M.) Date: Wed, 16 Jul 2014 10:23:18 -0700 Subject: [Rdo-list] cinder doesn't find volgroup/cinder-volumes In-Reply-To: References: Message-ID: > your Cinder filter scheduler configured correctly. Last thing I was checking, sorry about breaking this up into two emails. Anyway, not sure what I'm looking for in the Cinder filter scheduler configuration. Thanks for the help though! ?--Nathan? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed Jul 16 17:59:32 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 16 Jul 2014 23:29:32 +0530 Subject: [Rdo-list] cinder doesn't find volgroup/cinder-volumes In-Reply-To: References: Message-ID: <20140716175932.GD8775@tesla.redhat.com> On Wed, Jul 16, 2014 at 10:23:18AM -0700, Nathan M. wrote: > > your Cinder filter scheduler configured correctly. > > Last thing I was checking, sorry about breaking this up into two emails. > Anyway, not sure what I'm looking for in the Cinder filter scheduler > configuration. Afraid, I'm not a Cinder expert, but maybe you can check existing documentation of Cinder config to get more clues. [1] http://docs.openstack.org/trunk/config-reference/content/section_cinder.conf.html -- /kashyap From ihrachys at redhat.com Thu Jul 17 08:55:08 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 17 Jul 2014 10:55:08 +0200 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? 
In-Reply-To: References: <53C67600.20701@redhat.com> <53C693B7.2080303@redhat.com> Message-ID: <53C78F6C.3070106@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 16/07/14 18:58, Kodiak Firesmith wrote: > Of course setting up Neutron has taken Horizon offline: > > http://paste.openstack.org/show/86778/ > Any interesting log messages for neutron service? Do basic neutron requests like 'neutron net-list' work? > > - Kodiak > > On Wed, Jul 16, 2014 at 12:34 PM, Kodiak Firesmith > wrote: >> Further modifying /etc/neutron/neutron.conf as follows allowed >> the neutron-db-manage goodness to happen: >> >> -service_plugins = router +service_plugins = >> neutron.services.l3_router.l3_router_plugin.L3RouterPlugin >> >> # neutron-db-manage --config-file /etc/neutron/neutron.conf >> --config-file /etc/neutron/plugin.ini upgrade head No handlers >> could be found for logger "neutron.common.legacy" INFO >> [alembic.migration] Context impl MySQLImpl. INFO >> [alembic.migration] Will assume non-transactional DDL. INFO >> [alembic.migration] Running upgrade None -> folsom INFO >> [alembic.migration] Running upgrade folsom -> 2c4af419145b ... >> INFO [alembic.migration] Running upgrade 1341ed32cc1e -> >> grizzly INFO [alembic.migration] Running upgrade grizzly -> >> f489cf14a79c INFO [alembic.migration] Running upgrade >> f489cf14a79c -> 176a85fc7d79 ... INFO [alembic.migration] >> Running upgrade 49f5e553f61f -> 40b0aff0302e INFO >> [alembic.migration] Running upgrade 40b0aff0302e -> havana INFO >> [alembic.migration] Running upgrade havana -> e197124d4b9 ... >> INFO [alembic.migration] Running upgrade 538732fa21e1 -> >> 5ac1c354a051 INFO [alembic.migration] Running upgrade >> 5ac1c354a051 -> icehouse >> >> I am now cautiously optimistic that I'm back on track - will >> report back with success fail. If success I'll submit a >> documentation bug to the docs.openstack people. >> >> Here's my tables now: http://paste.openstack.org/show/86776/ >> >> Thanks a million! >> >> - Kodiak >> >> On Wed, Jul 16, 2014 at 11:15 AM, Kodiak Firesmith >> wrote: >>> Thanks again Kuba! >>> >>> So I think it's gotten farther. I replaced the line on >>> /etc/neutron/neutron.conf: >>> >>> -core_plugin = ml2 +core_plugin = neutron.plugins.ml2.plugin. >>> Ml2Plugin >>> >>> Then I re-ran the neutron-db-manage as seen in the paste below. >>> It's gotten past ml2 and now is erroring out on 'router': >>> >>> http://paste.openstack.org/show/86759/ >>> >>> >>> - Kodiak >>> >>> On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar >>> wrote: >>>> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: >>>>> Hello Kuba, Thanks for the reply. I used the ml2 ini file >>>>> as my core plugin per the docs and did what you mentioned. >>>>> It resulted in a traceback unfortunately. >>>>> >>>>> Here is a specific accounting of what I did: >>>>> http://paste.openstack.org/show/86756/ >>>> >>>> Ah, this is because we don't load full path from entry_points >>>> for plugins in neutron-db-manage (we didn't fix this because >>>> this dependency is going to be removed soon). >>>> >>>> Can you please try to change core_plugin in neutron.conf to >>>> >>>> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin >>>> >>>> and re-run neutron-db-manage. >>>> >>>> Thanks, Kuba >>>>> >>>>> So it looks like maybe there is an issue with the ml2 >>>>> plugin as the openstack docs cover it so far as how it >>>>> works with the RDO packages. 
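Pulled together, the workaround that emerges from this thread amounts to roughly the following sketch (the plugin paths and the neutron-db-manage invocation are the ones quoted above; the stop/start steps are assumptions for a stock Icehouse RDO install):

    # /etc/neutron/neutron.conf
    core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
    service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

    # then rebuild the schema and restart the server
    service neutron-server stop
    neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugin.ini upgrade head
    service neutron-server start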
>>>>> >>>>> Another admin reports that stuff "just works" in RDO >>>>> packstack - maybe there is some workaround in Packstack or >>>>> maybe it uses another driver and not ML2? >>>>> >>>>> Thanks again, - Kodiak >>>>> >>>>> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar >>>>> wrote: >>>>>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >>>>>>> Hello, First go-round with Openstack and first post on >>>>>>> the list so bear with me... >>>>>>> >>>>>>> I've been working through the manual installation of >>>>>>> RDO using the docs.openstack installation guide. >>>>>>> Everything went smoothly for the most part until >>>>>>> Neutron. It appears I've been hit by the same bug(?) >>>>>>> discussed here: >>>>>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, >>>>>>> and here: >>>>>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >>>>>>> >>>>>>> ...among other places. >>>>>>> >>>>>>> Upon first launch of the neutron-server daemon, this >>>>>>> appears in the neutron-server log file: >>>>>>> http://paste.openstack.org/show/86614/ >>>>>>> >>>>>>> And once you go into the db you can see that a bunch of >>>>>>> tables are not created that should be. >>>>>>> >>>>>>> As the first link alludes to, it looks like a MyISAM / >>>>>>> InnoDB formatting mix-up but I'm no MySQL guy so I >>>>>>> can't prove that. >>>>>>> >>>>>>> I would really like if someone on the list who is a bit >>>>>>> more experienced with this stuff could please see if >>>>>>> the suspicions raised in the links above are correct, >>>>>>> and if so, could the RDO people please provide a >>>>>>> workaround to get me back up and running with our test >>>>>>> deployment? >>>>>>> >>>>>>> Thanks! - Kodiak >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Rdo-list mailing list Rdo-list at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>> >>>>>> Hi Kodiak, >>>>>> >>>>>> I think there is a bug in documentation, I'm missing >>>>>> running neutron-db-manage command to create scheme for >>>>>> neutron. Can you please try to 1. stop neutron-server 2. >>>>>> create a new database 3. set connection string in >>>>>> neutron.conf 4. run neutron-db-manage --config-file >>>>>> /etc/neutron/neutron.conf --config-file >>>>>> upgrade head 5. start >>>>>> neutron-server >>>>>> >>>>>> Kuba >>>> > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTx49sAAoJEC5aWaUY1u577IoH/A8xk9HwDgJYJ6M2T11D3Yt+ VkRyZvR8drsVl2GaX51uQF4F9HWNgT6zhZqk4Y9n8MvTtG66XIUI7K0KW51uU05/ Ki4NaD9RrkVMIvNGExCSOzcpuUaCYmTOjDVoHkKT+jp+vdRcjNrFZHtI7IE1qGpI BSbhNzV8htJJiFI40dsjJgZgutmqORvU79oFZDADcUMQnb/tIH9hw5xSAWe2+dzi IzUq88Brd90t8tteAAauNaHYcx4yG9dGZ7xaXi0FNqOhw/WzaVm8U/UkKmvEatoV NBlnbliuPBBtttGr/EtOtUcyo9eiNN1P+IvmoJgz8dSvbX95vAXZBGA+2Nq/ecQ= =dPVj -----END PGP SIGNATURE----- From acvelez at vidalinux.com Thu Jul 17 09:32:53 2014 From: acvelez at vidalinux.com (Antonio C. 
Velez) Date: Thu, 17 Jul 2014 05:32:53 -0400 (AST) Subject: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED In-Reply-To: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> References: <1499091523.47731.1404710659416.JavaMail.zimbra@vidalinux.net> Message-ID: <1403903244.57130.1405589573110.JavaMail.zimbra@vidalinux.net> Hi everyone, I'm trying to build openshift-origin using heat template: https://github.com/openstack/heat-templates/tree/master/openshift-origin/F19 the BrokerFlavor complete without issues, but stops giving the following error: 2014-07-17 04:17:50.266 6759 INFO heat.engine.resource [-] creating WaitCondition "BrokerWaitCondition" Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] 2014-07-17 05:16:28.484 6759 INFO heat.engine.scheduler [-] Task stack_task from Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] timed out 2014-07-17 05:16:28.704 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED I tried increasing the BrokerWaitCondition timeout but doesn't help. ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From acvelez at vidalinux.com Thu Jul 17 09:36:59 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Thu, 17 Jul 2014 05:36:59 -0400 (AST) Subject: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED In-Reply-To: <1403903244.57130.1405589573110.JavaMail.zimbra@vidalinux.net> Message-ID: <788446832.57133.1405589819914.JavaMail.zimbra@vidalinux.net> Hi everyone, I'm trying to build openshift-origin using heat template: https://github.com/openstack/heat-templates/tree/master/openshift-origin/F19 the BrokerFlavor complete without issues, but stops giving the following error: 2014-07-17 04:17:50.266 6759 INFO heat.engine.resource [-] creating WaitCondition "BrokerWaitCondition" Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] 2014-07-17 05:16:28.484 6759 INFO heat.engine.scheduler [-] Task stack_task from Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] timed out 2014-07-17 05:16:28.704 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED I tried increasing the BrokerWaitCondition timeout but doesn't help. ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. 
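When a stack ends up CREATE_FAILED on a wait condition like this, the heat client can usually show which resource timed out and what events preceded it — a sketch, using the stack name from the report above:

    heat stack-list
    heat resource-list openshift
    heat event-list openshift
    heat resource-show openshift BrokerWaitCondition

The resource and event listings are often enough to tell whether the instance ever signalled the wait condition handle at all.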
From shardy at redhat.com Thu Jul 17 11:05:54 2014 From: shardy at redhat.com (Steven Hardy) Date: Thu, 17 Jul 2014 12:05:54 +0100 Subject: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED In-Reply-To: <788446832.57133.1405589819914.JavaMail.zimbra@vidalinux.net> References: <1403903244.57130.1405589573110.JavaMail.zimbra@vidalinux.net> <788446832.57133.1405589819914.JavaMail.zimbra@vidalinux.net> Message-ID: <20140717110553.GC10151@t430slt.redhat.com> On Thu, Jul 17, 2014 at 05:36:59AM -0400, Antonio C. Velez wrote: > Hi everyone, > > I'm trying to build openshift-origin using heat template: https://github.com/openstack/heat-templates/tree/master/openshift-origin/F19 the BrokerFlavor complete without issues, but stops giving the following error: > > 2014-07-17 04:17:50.266 6759 INFO heat.engine.resource [-] creating WaitCondition "BrokerWaitCondition" Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] > 2014-07-17 05:16:28.484 6759 INFO heat.engine.scheduler [-] Task stack_task from Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] timed out > 2014-07-17 05:16:28.704 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED > > I tried increasing the BrokerWaitCondition timeout but doesn't help. Please check the following: 1. heat_waitcondition_server_url is set correctly in your /etc/heat/heat.conf: heat_waitcondition_server_url = http://192.168.0.6:8000/v1/waitcondition Here 192.168.0.6 needs to be the IP address of the box running heat-api-cfn, and it must be accessible to the instance. Relatedly, the heat-api-cfn service must be installed and running, which means setting the -os-heat-cfn-install/OS_HEAT_CFN_INSTALL option if you installed via packstack. 2. Ensure no firewalls are blocking access - SSH to the instance - Install nmap inside the instance - nmap 192.168.0.6 (using the above URL as an example) - Port tcp/8000 should be open 3. Ensure the instances can connect to the internet - Should be covered by installing nmap above, but if your network configuration is broken and they can't connect to the internet, the install of packages will hang up and the WaitCondition will time out. If all of the above is OK, log on to the instance during the install via SSH and tail /var/log/cloud-init*, looking for errors or a point in the install where it is getting stuck. Also, I assume the image you're using has been prepared as per the instructions in the README.rst? Hope that helps. -- Steve Hardy Red Hat Engineering, Cloud From kfiresmith at gmail.com Thu Jul 17 12:26:50 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Thu, 17 Jul 2014 08:26:50 -0400 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? Message-ID: Ihar, Apologies! I looked at this with fresh eyes this morning and realized that while neutron-server was listening on 9696 I hadn't yet put a rule into our enterprise iptables management module in Puppet for neutron-server yet, thus Neutron stuff was timing out when a user attempts to log into Horizon. Everything works well now - I'll make sure to pay the help forward by filing an Openstack documentation bug request that distils the missing steps that the RDO team helped me get through yesterday. Thanks so much! - Kodiak Date: Thu, 17 Jul 2014 10:55:08 +0200 From: Ihar Hrachyshka To: rdo-list at redhat.com Subject: Re: [Rdo-list] Icehouse Neutron DB code bug still persists? 
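On an iptables-managed host, the missing firewall rule described above would look roughly like this (a hypothetical hand-run example; the real rule in this case lives in a Puppet module that is not shown here):

    # allow the neutron API port used by Horizon and the CLI
    iptables -I INPUT -p tcp -m tcp --dport 9696 -j ACCEPT
    service iptables save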
Message-ID: <53C78F6C.3070106 at redhat.com> Content-Type: text/plain; charset=ISO-8859-1 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 16/07/14 18:58, Kodiak Firesmith wrote: > Of course setting up Neutron has taken Horizon offline: > > http://paste.openstack.org/show/86778/ > Any interesting log messages for neutron service? Do basic neutron requests like 'neutron net-list' work? > > - Kodiak > > On Wed, Jul 16, 2014 at 12:34 PM, Kodiak Firesmith > wrote: >> Further modifying /etc/neutron/neutron.conf as follows allowed >> the neutron-db-manage goodness to happen: >> >> -service_plugins = router +service_plugins = >> neutron.services.l3_router.l3_ router_plugin.L3RouterPlugin >> >> # neutron-db-manage --config-file /etc/neutron/neutron.conf >> --config-file /etc/neutron/plugin.ini upgrade head No handlers >> could be found for logger "neutron.common.legacy" INFO >> [alembic.migration] Context impl MySQLImpl. INFO >> [alembic.migration] Will assume non-transactional DDL. INFO >> [alembic.migration] Running upgrade None -> folsom INFO >> [alembic.migration] Running upgrade folsom -> 2c4af419145b ... >> INFO [alembic.migration] Running upgrade 1341ed32cc1e -> >> grizzly INFO [alembic.migration] Running upgrade grizzly -> >> f489cf14a79c INFO [alembic.migration] Running upgrade >> f489cf14a79c -> 176a85fc7d79 ... INFO [alembic.migration] >> Running upgrade 49f5e553f61f -> 40b0aff0302e INFO >> [alembic.migration] Running upgrade 40b0aff0302e -> havana INFO >> [alembic.migration] Running upgrade havana -> e197124d4b9 ... >> INFO [alembic.migration] Running upgrade 538732fa21e1 -> >> 5ac1c354a051 INFO [alembic.migration] Running upgrade >> 5ac1c354a051 -> icehouse >> >> I am now cautiously optimistic that I'm back on track - will >> report back with success fail. If success I'll submit a >> documentation bug to the docs.openstack people. >> >> Here's my tables now: http://paste.openstack.org/show/86776/ >> >> Thanks a million! >> >> - Kodiak >> >> On Wed, Jul 16, 2014 at 11:15 AM, Kodiak Firesmith >> wrote: >>> Thanks again Kuba! >>> >>> So I think it's gotten farther. I replaced the line on >>> /etc/neutron/neutron.conf: >>> >>> -core_plugin = ml2 +core_plugin = neutron.plugins.ml2.plugin. >>> Ml2Plugin >>> >>> Then I re-ran the neutron-db-manage as seen in the paste below. >>> It's gotten past ml2 and now is erroring out on 'router': >>> >>> http://paste.openstack.org/show/86759/ >>> >>> >>> - Kodiak >>> >>> On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar >>> wrote: >>>> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: >>>>> Hello Kuba, Thanks for the reply. I used the ml2 ini file >>>>> as my core plugin per the docs and did what you mentioned. >>>>> It resulted in a traceback unfortunately. >>>>> >>>>> Here is a specific accounting of what I did: >>>>> http://paste.openstack.org/show/86756/ >>>> >>>> Ah, this is because we don't load full path from entry_points >>>> for plugins in neutron-db-manage (we didn't fix this because >>>> this dependency is going to be removed soon). >>>> >>>> Can you please try to change core_plugin in neutron.conf to >>>> >>>> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin >>>> >>>> and re-run neutron-db-manage. >>>> >>>> Thanks, Kuba >>>>> >>>>> So it looks like maybe there is an issue with the ml2 >>>>> plugin as the openstack docs cover it so far as how it >>>>> works with the RDO packages. 
>>>>> >>>>> Another admin reports that stuff "just works" in RDO >>>>> packstack - maybe there is some workaround in Packstack or >>>>> maybe it uses another driver and not ML2? >>>>> >>>>> Thanks again, - Kodiak >>>>> >>>>> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar >>>>> wrote: >>>>>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >>>>>>> Hello, First go-round with Openstack and first post on >>>>>>> the list so bear with me... >>>>>>> >>>>>>> I've been working through the manual installation of >>>>>>> RDO using the docs.openstack installation guide. >>>>>>> Everything went smoothly for the most part until >>>>>>> Neutron. It appears I've been hit by the same bug(?) >>>>>>> discussed here: >>>>>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, >>>>>>> and here: >>>>>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >>>>>>> >>>>>>> ...among other places. >>>>>>> >>>>>>> Upon first launch of the neutron-server daemon, this >>>>>>> appears in the neutron-server log file: >>>>>>> http://paste.openstack.org/show/86614/ >>>>>>> >>>>>>> And once you go into the db you can see that a bunch of >>>>>>> tables are not created that should be. >>>>>>> >>>>>>> As the first link alludes to, it looks like a MyISAM / >>>>>>> InnoDB formatting mix-up but I'm no MySQL guy so I >>>>>>> can't prove that. >>>>>>> >>>>>>> I would really like if someone on the list who is a bit >>>>>>> more experienced with this stuff could please see if >>>>>>> the suspicions raised in the links above are correct, >>>>>>> and if so, could the RDO people please provide a >>>>>>> workaround to get me back up and running with our test >>>>>>> deployment? >>>>>>> >>>>>>> Thanks! - Kodiak >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Rdo-list mailing list Rdo-list at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>> >>>>>> Hi Kodiak, >>>>>> >>>>>> I think there is a bug in documentation, I'm missing >>>>>> running neutron-db-manage command to create scheme for >>>>>> neutron. Can you please try to 1. stop neutron-server 2. >>>>>> create a new database 3. set connection string in >>>>>> neutron.conf 4. run neutron-db-manage --config-file >>>>>> /etc/neutron/neutron.conf --config-file >>>>>> upgrade head 5. start >>>>>> neutron-server >>>>>> >>>>>> Kuba >>>> > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTx49sAAoJEC5aWaUY1u577IoH/A8xk9HwDgJYJ6M2T11D3Yt+ VkRyZvR8drsVl2GaX51uQF4F9HWNgT6zhZqk4Y9n8MvTtG66XIUI7K0KW51uU05/ Ki4NaD9RrkVMIvNGExCSOzcpuUaCYmTOjDVoHkKT+jp+vdRcjNrFZHtI7IE1qGpI BSbhNzV8htJJiFI40dsjJgZgutmqORvU79oFZDADcUMQnb/tIH9hw5xSAWe2+dzi IzUq88Brd90t8tteAAauNaHYcx4yG9dGZ7xaXi0FNqOhw/WzaVm8U/UkKmvEatoV NBlnbliuPBBtttGr/EtOtUcyo9eiNN1P+IvmoJgz8dSvbX95vAXZBGA+2Nq/ecQ= =dPVj -----END PGP SIGNATURE----- From kfiresmith at gmail.com Thu Jul 17 13:02:15 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Thu, 17 Jul 2014 09:02:15 -0400 Subject: [Rdo-list] Icehouse Neutron DB code bug still persists? In-Reply-To: References: Message-ID: Documentation issue is now tracking at: https://bugs.launchpad.net/openstack-manuals/+bug/1343277 On Thu, Jul 17, 2014 at 8:26 AM, Kodiak Firesmith wrote: > Ihar, > Apologies! 
I looked at this with fresh eyes this morning and realized > that while neutron-server was listening on 9696 I hadn't yet put a > rule into our enterprise iptables management module in Puppet for > neutron-server yet, thus Neutron stuff was timing out when a user > attempts to log into Horizon. > > Everything works well now - I'll make sure to pay the help forward by > filing an Openstack documentation bug request that distils the missing > steps that the RDO team helped me get through yesterday. > > Thanks so much! > - Kodiak > > > Date: Thu, 17 Jul 2014 10:55:08 +0200 > From: Ihar Hrachyshka > To: rdo-list at redhat.com > Subject: Re: [Rdo-list] Icehouse Neutron DB code bug still persists? > Message-ID: <53C78F6C.3070106 at redhat.com> > Content-Type: text/plain; charset=ISO-8859-1 > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 16/07/14 18:58, Kodiak Firesmith wrote: >> Of course setting up Neutron has taken Horizon offline: >> >> http://paste.openstack.org/show/86778/ >> > > Any interesting log messages for neutron service? Do basic neutron > requests like 'neutron net-list' work? > >> >> - Kodiak >> >> On Wed, Jul 16, 2014 at 12:34 PM, Kodiak Firesmith >> wrote: >>> Further modifying /etc/neutron/neutron.conf as follows allowed >>> the neutron-db-manage goodness to happen: >>> >>> -service_plugins = router +service_plugins = >>> neutron.services.l3_router.l3_ > router_plugin.L3RouterPlugin >>> >>> # neutron-db-manage --config-file /etc/neutron/neutron.conf >>> --config-file /etc/neutron/plugin.ini upgrade head No handlers >>> could be found for logger "neutron.common.legacy" INFO >>> [alembic.migration] Context impl MySQLImpl. INFO >>> [alembic.migration] Will assume non-transactional DDL. INFO >>> [alembic.migration] Running upgrade None -> folsom INFO >>> [alembic.migration] Running upgrade folsom -> 2c4af419145b ... >>> INFO [alembic.migration] Running upgrade 1341ed32cc1e -> >>> grizzly INFO [alembic.migration] Running upgrade grizzly -> >>> f489cf14a79c INFO [alembic.migration] Running upgrade >>> f489cf14a79c -> 176a85fc7d79 ... INFO [alembic.migration] >>> Running upgrade 49f5e553f61f -> 40b0aff0302e INFO >>> [alembic.migration] Running upgrade 40b0aff0302e -> havana INFO >>> [alembic.migration] Running upgrade havana -> e197124d4b9 ... >>> INFO [alembic.migration] Running upgrade 538732fa21e1 -> >>> 5ac1c354a051 INFO [alembic.migration] Running upgrade >>> 5ac1c354a051 -> icehouse >>> >>> I am now cautiously optimistic that I'm back on track - will >>> report back with success fail. If success I'll submit a >>> documentation bug to the docs.openstack people. >>> >>> Here's my tables now: http://paste.openstack.org/show/86776/ >>> >>> Thanks a million! >>> >>> - Kodiak >>> >>> On Wed, Jul 16, 2014 at 11:15 AM, Kodiak Firesmith >>> wrote: >>>> Thanks again Kuba! >>>> >>>> So I think it's gotten farther. I replaced the line on >>>> /etc/neutron/neutron.conf: >>>> >>>> -core_plugin = ml2 +core_plugin = neutron.plugins.ml2.plugin. >>>> Ml2Plugin >>>> >>>> Then I re-ran the neutron-db-manage as seen in the paste below. >>>> It's gotten past ml2 and now is erroring out on 'router': >>>> >>>> http://paste.openstack.org/show/86759/ >>>> >>>> >>>> - Kodiak >>>> >>>> On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar >>>> wrote: >>>>> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote: >>>>>> Hello Kuba, Thanks for the reply. I used the ml2 ini file >>>>>> as my core plugin per the docs and did what you mentioned. >>>>>> It resulted in a traceback unfortunately. 
>>>>>> >>>>>> Here is a specific accounting of what I did: >>>>>> http://paste.openstack.org/show/86756/ >>>>> >>>>> Ah, this is because we don't load full path from entry_points >>>>> for plugins in neutron-db-manage (we didn't fix this because >>>>> this dependency is going to be removed soon). >>>>> >>>>> Can you please try to change core_plugin in neutron.conf to >>>>> >>>>> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin >>>>> >>>>> and re-run neutron-db-manage. >>>>> >>>>> Thanks, Kuba >>>>>> >>>>>> So it looks like maybe there is an issue with the ml2 >>>>>> plugin as the openstack docs cover it so far as how it >>>>>> works with the RDO packages. >>>>>> >>>>>> Another admin reports that stuff "just works" in RDO >>>>>> packstack - maybe there is some workaround in Packstack or >>>>>> maybe it uses another driver and not ML2? >>>>>> >>>>>> Thanks again, - Kodiak >>>>>> >>>>>> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar >>>>>> wrote: >>>>>>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote: >>>>>>>> Hello, First go-round with Openstack and first post on >>>>>>>> the list so bear with me... >>>>>>>> >>>>>>>> I've been working through the manual installation of >>>>>>>> RDO using the docs.openstack installation guide. >>>>>>>> Everything went smoothly for the most part until >>>>>>>> Neutron. It appears I've been hit by the same bug(?) >>>>>>>> discussed here: >>>>>>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts, >>>>>>>> and here: >>>>>>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html >>>>>>>> >>>>>>>> > ...among other places. >>>>>>>> >>>>>>>> Upon first launch of the neutron-server daemon, this >>>>>>>> appears in the neutron-server log file: >>>>>>>> http://paste.openstack.org/show/86614/ >>>>>>>> >>>>>>>> And once you go into the db you can see that a bunch of >>>>>>>> tables are not created that should be. >>>>>>>> >>>>>>>> As the first link alludes to, it looks like a MyISAM / >>>>>>>> InnoDB formatting mix-up but I'm no MySQL guy so I >>>>>>>> can't prove that. >>>>>>>> >>>>>>>> I would really like if someone on the list who is a bit >>>>>>>> more experienced with this stuff could please see if >>>>>>>> the suspicions raised in the links above are correct, >>>>>>>> and if so, could the RDO people please provide a >>>>>>>> workaround to get me back up and running with our test >>>>>>>> deployment? >>>>>>>> >>>>>>>> Thanks! - Kodiak >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Rdo-list mailing list Rdo-list at redhat.com >>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>>> >>>>>>> Hi Kodiak, >>>>>>> >>>>>>> I think there is a bug in documentation, I'm missing >>>>>>> running neutron-db-manage command to create scheme for >>>>>>> neutron. Can you please try to 1. stop neutron-server 2. >>>>>>> create a new database 3. set connection string in >>>>>>> neutron.conf 4. run neutron-db-manage --config-file >>>>>>> /etc/neutron/neutron.conf --config-file >>>>>>> upgrade head 5. 
start >>>>>>> neutron-server >>>>>>> >>>>>>> Kuba >>>>> >> >> _______________________________________________ Rdo-list mailing >> list Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ > > iQEcBAEBCgAGBQJTx49sAAoJEC5aWaUY1u577IoH/A8xk9HwDgJYJ6M2T11D3Yt+ > VkRyZvR8drsVl2GaX51uQF4F9HWNgT6zhZqk4Y9n8MvTtG66XIUI7K0KW51uU05/ > Ki4NaD9RrkVMIvNGExCSOzcpuUaCYmTOjDVoHkKT+jp+vdRcjNrFZHtI7IE1qGpI > BSbhNzV8htJJiFI40dsjJgZgutmqORvU79oFZDADcUMQnb/tIH9hw5xSAWe2+dzi > IzUq88Brd90t8tteAAauNaHYcx4yG9dGZ7xaXi0FNqOhw/WzaVm8U/UkKmvEatoV > NBlnbliuPBBtttGr/EtOtUcyo9eiNN1P+IvmoJgz8dSvbX95vAXZBGA+2Nq/ecQ= > =dPVj > -----END PGP SIGNATURE----- From ben42ml at gmail.com Thu Jul 17 14:53:52 2014 From: ben42ml at gmail.com (Benoit ML) Date: Thu, 17 Jul 2014 16:53:52 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Hello, Evrything now works. I replace ml2 by openvswitch for the core plugin. ##core_plugin =neutron.plugins.ml2.plugin.Ml2Plugin core_plugin =openvswitch Regards, 2014-07-16 17:28 GMT+02:00 Benoit ML : > Hello, > > Another mail about the problem.... Well i have enable debug = True in > keystone.conf > > And after a nova migrate , when i nova show : > > ============================================================================== > | fault | {"message": "Remote error: > Unauthorized {\"error\": {\"message\": \"User > 0b45ccc267e04b59911e88381bb450c0 is unauthorized for tenant services\", > \"code\": 401, \"title\": \"Unauthorized\"}} | > > ============================================================================== > > So well User with id 0b45ccc267e04b59911e88381bb450c0 is neutron : > > ============================================================================== > keystone user-list > | 0b45ccc267e04b59911e88381bb450c0 | neutron | True | | > > ============================================================================== > > And the role seems good : > > ============================================================================== > keystone user-role-add --user=neutron --tenant=services --role=admin > Conflict occurred attempting to store role grant. 
User > 0b45ccc267e04b59911e88381bb450c0 already has role > 734c2fb6fb444792b5ede1fa1e17fb7e in tenant dea82f7937064b6da1c370280d8bfdad > (HTTP 409) > > > keystone user-role-list --user neutron --tenant services > > +----------------------------------+-------+----------------------------------+----------------------------------+ > | id | name | > user_id | tenant_id | > > +----------------------------------+-------+----------------------------------+----------------------------------+ > | 734c2fb6fb444792b5ede1fa1e17fb7e | admin | > 0b45ccc267e04b59911e88381bb450c0 | dea82f7937064b6da1c370280d8bfdad | > > +----------------------------------+-------+----------------------------------+----------------------------------+ > > keystone tenant-list > +----------------------------------+----------+---------+ > | id | name | enabled | > +----------------------------------+----------+---------+ > | e250f7573010415da6f191e0b53faae5 | admin | True | > | fa30c6bdd56e45dea48dfbe9c3ee8782 | exploit | True | > | dea82f7937064b6da1c370280d8bfdad | services | True | > +----------------------------------+----------+---------+ > > > ============================================================================== > > > Really i didn't see where is my mistake ... can you help me plz ? > > > Thank you in advance ! > > Regards, > > > > > > > 2014-07-15 15:13 GMT+02:00 Benoit ML : > > Hello again, >> >> Ok on controller node I modify the neutron server configuration with >> nova_admin_tenant_id = f23ed5be5f534fdba31d23f60621347d >> >> where id is "services" in keystone and now it's working with "vif_plugging_is_fatal >> = True". Good thing. >> >> Well by the way the migrate doesnt working ... >> >> >> >> >> 2014-07-15 14:20 GMT+02:00 Benoit ML : >> >> Hello, >>> >>> Thank you for taking time ! 
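The tenant id used for nova_admin_tenant_id above can be looked up rather than pasted by hand — a sketch using the old keystone CLI and openstack-config from openstack-utils (the 'services' tenant name matches this thread; adjust to your deployment, or edit neutron.conf directly if openstack-config is not installed):

    keystone tenant-list | awk '/ services / {print $2}'
    openstack-config --set /etc/neutron/neutron.conf DEFAULT \
        nova_admin_tenant_id <id_printed_above>
    systemctl restart neutron-server    # 'service neutron-server restart' on EL6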
>>> >>> Well on the compute node, when i activate "vif_plugging_is_fatal = >>> True", the vm creation stuck in spawning state, and in neutron server log i >>> have : >>> >>> ======================================= >>> 2014-07-15 14:12:52.351 18448 DEBUG neutron.notifiers.nova [-] Sending >>> events: [{'status': 'completed', 'tag': >>> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', >>> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] send_events >>> /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:218 >>> 2014-07-15 14:12:52.354 18448 INFO urllib3.connectionpool [-] Starting >>> new HTTP connection (1): localhost >>> 2014-07-15 14:12:52.360 18448 DEBUG urllib3.connectionpool [-] "POST >>> /v2/5c9c186a909e499e9da0dd5cf2c403e0/os-server-external-events HTTP/1.1" >>> 401 23 _make_request >>> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 >>> 2014-07-15 14:12:52.362 18448 INFO urllib3.connectionpool [-] Starting >>> new HTTP connection (1): localhost >>> 2014-07-15 14:12:52.452 18448 DEBUG urllib3.connectionpool [-] "POST >>> /v2.0/tokens HTTP/1.1" 401 114 _make_request >>> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 >>> 2014-07-15 14:12:52.453 18448 ERROR neutron.notifiers.nova [-] Failed to >>> notify nova on events: [{'status': 'completed', 'tag': >>> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', >>> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Traceback >>> (most recent call last): >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>> "/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py", line 221, in >>> send_events >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >>> batched_events) >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>> "/usr/lib/python2.7/site-packages/novaclient/v1_1/contrib/server_external_events.py", >>> line 39, in create >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >>> return_raw=True) >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>> "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova _resp, >>> body = self.api.client.post(url, body=body) >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova return >>> self._cs_request(url, 'POST', **kwargs) >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 301, in >>> _cs_request >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova raise e >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Unauthorized: >>> Unauthorized (HTTP 401) >>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >>> 2014-07-15 14:12:58.321 18448 DEBUG neutron.openstack.common.rpc.amqp >>> [-] received {u'_context_roles': [u'admin'], u'_context_request_id': >>> u'req-9bf35c42-3477-4ed3-8092-af729c21198c', u'_context_read_deleted': >>> u'no', u'_context_user_name': None, u'_context_project_name': None, >>> u'namespace': None, u'_context_tenant_id': None, u'args': {u'agent_state': >>> {u'agent_state': {u'topic': u'N/A', u'binary': >>> u'neutron-openvswitch-agent', u'host': u'pvidgsh006.pvi', u'agent_type': >>> u'Open vSwitch agent', 
u'configurations': {u'tunnel_types': [u'vxlan'], >>> u'tunneling_ip': u'192.168.40.5', u'bridge_mappings': {}, u'l2_population': >>> False, u'devices': 1}}}, u'time': u'2014-07-15T12:12:58.313995'}, >>> u'_context_tenant': None, u'_unique_id': >>> u'7c9a4dfcd256494caf6e1327c8051e29', u'_context_is_admin': True, >>> u'version': u'1.0', u'_context_timestamp': u'2014-07-15 12:01:28.190772', >>> u'_context_tenant_name': None, u'_context_user': None, u'_context_user_id': >>> None, u'method': u'report_state', u'_context_project_id': None} _safe_log >>> /usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/common.py:280 >>> ======================================= >>> >>> Well I'm supposed it's related ... Perhaps with those options in >>> neutron.conf : >>> ====================================== >>> notify_nova_on_port_status_changes = True >>> notify_nova_on_port_data_changes = True >>> nova_url = http://localhost:8774/v2 >>> nova_admin_tenant_name = services >>> nova_admin_username = nova >>> nova_admin_password = nova >>> nova_admin_auth_url = http://localhost:35357/v2.0 >>> ====================================== >>> >>> But well didnt see anything wrong ... >>> >>> Thank you in advance ! >>> >>> Regards, >>> >>> >>> >>> 2014-07-11 16:08 GMT+02:00 Vimal Kumar : >>> >>> ----- >>>> File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line >>>> 239, in authenticate\\n content_type="application/json")\\n\', u\' File >>>> "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in >>>> _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\', >>>> u\'Unauthorized: {"error": {"message": "The request you have made requires >>>> authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n'] >>>> ----- >>>> >>>> Looks like HTTP connection to neutron server is resulting in 401 error. >>>> >>>> Try enabling debug mode for neutron server and then tail >>>> /var/log/neutron/server.log , hopefully you should get more info. >>>> >>>> >>>> On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML wrote: >>>> >>>>> Hello, >>>>> >>>>> Ok I see. Nova telles neutron/openvswitch to create the bridge qbr >>>>> prior to the migration itself. >>>>> I ve already activate debug and verbose ... But well i'm really stuck, >>>>> dont know how and where to search/look ... >>>>> >>>>> >>>>> >>>>> Regards, >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> 2014-07-11 15:09 GMT+02:00 Miguel Angel : >>>>> >>>>> Hi Benoit, >>>>>> >>>>>> A manual virsh migration should fail, because the >>>>>> network ports are not migrated to the destination host. >>>>>> >>>>>> You must investigate on the authentication problem itself, >>>>>> and let nova handle all the underlying API calls which should >>>>>> happen... >>>>>> >>>>>> May be it's worth setting nova.conf to debug=True >>>>>> >>>>>> >>>>>> >>>>>> --- >>>>>> irc: ajo / mangelajo >>>>>> Miguel Angel Ajo Pelayo >>>>>> +34 636 52 25 69 >>>>>> skype: ajoajoajo >>>>>> >>>>>> >>>>>> 2014-07-11 14:41 GMT+02:00 Benoit ML : >>>>>> >>>>>> Hello, >>>>>>> >>>>>>> cat /etc/redhat-release >>>>>>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0) >>>>>>> >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> >>>>>>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets >>>>>> >: >>>>>>> >>>>>>> Could you please post /etc/redhat-release. >>>>>>>> >>>>>>>> Boris. 
>>>>>>>> >>>>>>>> ------------------------------ >>>>>>>> Date: Fri, 11 Jul 2014 11:57:12 +0200 >>>>>>>> From: ben42ml at gmail.com >>>>>>>> To: rdo-list at redhat.com >>>>>>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration >>>>>>>> failed because of "network qbr no such device" >>>>>>>> >>>>>>>> >>>>>>>> Hello, >>>>>>>> >>>>>>>> I'm working on a multi-node setup of openstack Icehouse using >>>>>>>> centos7. >>>>>>>> Well i have : >>>>>>>> - one controllor node with all server services thing stuff >>>>>>>> - one network node with openvswitch agent, l3-agent, dhcp-agent >>>>>>>> - two compute node with nova-compute and neutron-openvswitch >>>>>>>> - one storage nfs node >>>>>>>> >>>>>>>> NetworkManager is deleted on compute nodes and network node. >>>>>>>> >>>>>>>> My network use is configured to use vxlan. I can create VM, >>>>>>>> tenant-network, external-network, routeur, assign floating-ip to VM, push >>>>>>>> ssh-key into VM, create volume from glance image, etc... Evrything is >>>>>>>> conected and reacheable. Pretty cool :) >>>>>>>> >>>>>>>> But when i try to migrate VM things go wrong ... I have configured >>>>>>>> nova, libvirtd and qemu to use migration through libvirt-tcp. >>>>>>>> I have create and exchanged ssh-key for nova user on all node. I >>>>>>>> have verified userid and groupid of nova. >>>>>>>> >>>>>>>> Well nova-compute log, on the target compute node, : >>>>>>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: >>>>>>>> a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >>>>>>>> Unauthorized {"error": {"m >>>>>>>> essage": "The request you have made requires authentication.", >>>>>>>> "code": 401, "title": "Unauthorized"}} >>>>>>>> >>>>>>>> >>>>>>>> So well after searching a lots in all logs, i have fount that i >>>>>>>> cant simply migration VM between compute node with a simple virsh : >>>>>>>> virsh migrate instance-00000084 qemu+tcp:///system >>>>>>>> >>>>>>>> The error is : >>>>>>>> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such device >>>>>>>> >>>>>>>> Well when i look on the source hyperviseur the bridge "qbr3ca65809" >>>>>>>> exists and have a network tap device. And moreover i manually create >>>>>>>> qbr3ca65809 on the target hypervisor, virsh migrate succed ! >>>>>>>> >>>>>>>> Can you help me plz ? >>>>>>>> What can i do wrong ? Perhpas neutron must create the bridge before >>>>>>>> migration but didnt for a mis configuration ? >>>>>>>> >>>>>>>> Plz ask anything you need ! >>>>>>>> >>>>>>>> Thank you in advance. >>>>>>>> >>>>>>>> >>>>>>>> The full nova-compute log attached. 
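For reference, the libvirt-over-TCP migration setup being described usually looks something like the sketch below; this is an assumed typical configuration, not the poster's actual files:

    # /etc/libvirt/libvirtd.conf
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"

    # /etc/sysconfig/libvirtd
    LIBVIRTD_ARGS="--listen"

    # /etc/nova/nova.conf
    [libvirt]
    live_migration_uri = qemu+tcp://%s/system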
>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Regards, >>>>>>>> >>>>>>>> -- >>>>>>>> -- >>>>>>>> Benoit >>>>>>>> >>>>>>>> _______________________________________________ Rdo-list mailing >>>>>>>> list Rdo-list at redhat.com >>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> -- >>>>>>> Benoit >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Rdo-list mailing list >>>>>>> Rdo-list at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> -- >>>>> Benoit >>>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> >>>> >>> >>> >>> -- >>> -- >>> Benoit >>> >> >> >> >> -- >> -- >> Benoit >> > > > > -- > -- > Benoit > -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Thu Jul 17 17:12:32 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 17 Jul 2014 13:12:32 -0400 Subject: [Rdo-list] Quickstart should mention architecture requirements Message-ID: <20140717171232.GA32299@redhat.com> I just spent some time debugging an issue on #rdo in which someone appeared to have done everything correctly but was unable to install RDO because of several missing packages. It turns out this was because they were working with an i686 CentOS image. I think we need to update the "Prerequisites" section of the Quickstart document (http://openstack.redhat.com/Quickstart) to indicate that we only support x86_64, because otherwise this is a tricky failure mode to detect: there are no particular errors, and all the .noarch packages still show up, so the problem is not immediately obvious. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From James.Radtke at siriusxm.com Thu Jul 17 17:18:15 2014 From: James.Radtke at siriusxm.com (Radtke, James) Date: Thu, 17 Jul 2014 17:18:15 +0000 Subject: [Rdo-list] Quickstart should mention architecture requirements In-Reply-To: <20140717171232.GA32299@redhat.com> References: <20140717171232.GA32299@redhat.com> Message-ID: <0D9F522988C72B48AD7045FCC7C2F3FE26352DB2@PDGLMPEXCMBX01.corp.siriusxm.com> And/Or make the installer fail on a non-supported platform with a more informative message? ________________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of Lars Kellogg-Stedman [lars at redhat.com] Sent: Thursday, July 17, 2014 1:12 PM To: rdo-list at redhat.com Subject: [Rdo-list] Quickstart should mention architecture requirements I just spent some time debugging an issue on #rdo in which someone appeared to have done everything correctly but was unable to install RDO because of several missing packages. It turns out this was because they were working with an i686 CentOS image. I think we need to update the "Prerequisites" section of the Quickstart document (http://openstack.redhat.com/Quickstart) to indicate that we only support x86_64, because otherwise this is a tricky failure mode to detect: there are no particular errors, and all the .noarch packages still show up, so the problem is not immediately obvious. 
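A minimal sketch of the kind of early architecture check that would avoid this failure mode — purely illustrative, not actual packstack code:

    # abort before installing anything on a non-x86_64 host
    if [ "$(uname -m)" != "x86_64" ]; then
        echo "RDO packages are only built for x86_64 (found $(uname -m))" >&2
        exit 1
    fi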
-- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter From lars at redhat.com Thu Jul 17 17:23:06 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 17 Jul 2014 13:23:06 -0400 Subject: [Rdo-list] Quickstart should mention architecture requirements In-Reply-To: <0D9F522988C72B48AD7045FCC7C2F3FE26352DB2@PDGLMPEXCMBX01.corp.siriusxm.com> References: <20140717171232.GA32299@redhat.com> <0D9F522988C72B48AD7045FCC7C2F3FE26352DB2@PDGLMPEXCMBX01.corp.siriusxm.com> Message-ID: <20140717172306.GB32299@redhat.com> On Thu, Jul 17, 2014 at 05:18:15PM +0000, Radtke, James wrote: > And/Or make the installer fail on a non-supported platform with a more informative message? I think "And", because we do need to update the documentation. But yeah, a patch for packstack that would fail early on unsupported architectures sounds like a good idea. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From acvelez at vidalinux.com Fri Jul 18 01:23:13 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Thu, 17 Jul 2014 21:23:13 -0400 (AST) Subject: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED In-Reply-To: <20140717110553.GC10151@t430slt.redhat.com> References: <1403903244.57130.1405589573110.JavaMail.zimbra@vidalinux.net> <788446832.57133.1405589819914.JavaMail.zimbra@vidalinux.net> <20140717110553.GC10151@t430slt.redhat.com> Message-ID: <349164277.57584.1405646593373.JavaMail.zimbra@vidalinux.net> Hi Steven, Thanks for answering, I saw lot of errors in /var/log/cloud-init.log: http://paste.fedoraproject.org/118898/56463661/ I think this fedora19 is using an old outdated version of openshift that doesn't work anymore, did you know where to find an updated heat template or a good howto using the centos6.5 templates? Thanks in advanced, ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com ----- Original Message ----- From: "Steven Hardy" To: "Antonio C. Velez" Cc: rdo-list at redhat.com Sent: Thursday, July 17, 2014 7:05:54 AM Subject: Re: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED On Thu, Jul 17, 2014 at 05:36:59AM -0400, Antonio C. Velez wrote: > Hi everyone, > > I'm trying to build openshift-origin using heat template: https://github.com/openstack/heat-templates/tree/master/openshift-origin/F19 the BrokerFlavor complete without issues, but stops giving the following error: > > 2014-07-17 04:17:50.266 6759 INFO heat.engine.resource [-] creating WaitCondition "BrokerWaitCondition" Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] > 2014-07-17 05:16:28.484 6759 INFO heat.engine.scheduler [-] Task stack_task from Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] timed out > 2014-07-17 05:16:28.704 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED > > I tried increasing the BrokerWaitCondition timeout but doesn't help. Please check the following: 1. 
heat_waitcondition_server_url is set correctly in your /etc/heat/heat.conf: heat_waitcondition_server_url = http://192.168.0.6:8000/v1/waitcondition Here 192.168.0.6 needs to be the IP address of the box running heat-api-cfn, and it must be accessible to the instance. Relatedly, the heat-api-cfn service must be installed and running, which means setting the -os-heat-cfn-install/OS_HEAT_CFN_INSTALL option if you installed via packstack. 2. Ensure no firewalls are blocking access - SSH to the instance - Install nmap inside the instance - nmap 192.168.0.6 (using the above URL as an example) - Port tcp/8000 should be open 3. Ensure the instances can connect to the internet - Should be covered by installing nmap above, but if your network configuration is broken and they can't connect to the internet, the install of packages will hang up and the WaitCondition will time out. If all of the above is OK, log on to the instance during the install via SSH and tail /var/log/cloud-init*, looking for errors or a point in the install where it is getting stuck. Also, I assume the image you're using has been prepared as per the instructions in the README.rst? Hope that helps. -- Steve Hardy Red Hat Engineering, Cloud -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From acvelez at vidalinux.com Fri Jul 18 09:22:01 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Fri, 18 Jul 2014 05:22:01 -0400 (AST) Subject: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED In-Reply-To: <20140717110553.GC10151@t430slt.redhat.com> References: <1403903244.57130.1405589573110.JavaMail.zimbra@vidalinux.net> <788446832.57133.1405589819914.JavaMail.zimbra@vidalinux.net> <20140717110553.GC10151@t430slt.redhat.com> Message-ID: <1007626789.57671.1405675321782.JavaMail.zimbra@vidalinux.net> Steven, I manage to understand centos6.5 templates and get it to work, then now I got another error, 2014-07-18 04:47:26.117 6759 ERROR heat.engine.resource [-] CREATE : Server "OpenShiftNode" Stack "openshift" [2d5caea3-ca3e-47f1-92e0-898109d671dd] 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource Traceback (most recent call last): 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 417, in _do_action 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource handle()) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 535, in handle_create 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource admin_pass=admin_pass) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 871, in create 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource **boot_kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 534, in _boot 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource return_raw=return_raw, **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource _resp, body = self.api.client.post(url, body=body) 2014-07-18 04:47:26.117 6759 
TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource return self._cs_request(url, 'POST', **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 286, in _cs_request 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 268, in _time_request 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource resp, body = self.request(url, method, **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 262, in request 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource raise exceptions.from_response(resp, body, url, method) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource Conflict: Port 9bceb816-93f4-4d72-8ae8-de8e5758bd6d is still in use. (HTTP 409) (Request-ID: req-685b5295-4918-417d-894d-d416bf9e7b1c) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource 2014-07-18 04:47:26.848 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED Any advice? Thanks! ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com ----- Original Message ----- From: "Steven Hardy" To: "Antonio C. Velez" Cc: rdo-list at redhat.com Sent: Thursday, July 17, 2014 7:05:54 AM Subject: Re: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED On Thu, Jul 17, 2014 at 05:36:59AM -0400, Antonio C. Velez wrote: > Hi everyone, > > I'm trying to build openshift-origin using heat template: https://github.com/openstack/heat-templates/tree/master/openshift-origin/F19 the BrokerFlavor complete without issues, but stops giving the following error: > > 2014-07-17 04:17:50.266 6759 INFO heat.engine.resource [-] creating WaitCondition "BrokerWaitCondition" Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] > 2014-07-17 05:16:28.484 6759 INFO heat.engine.scheduler [-] Task stack_task from Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] timed out > 2014-07-17 05:16:28.704 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED > > I tried increasing the BrokerWaitCondition timeout but doesn't help. Please check the following: 1. heat_waitcondition_server_url is set correctly in your /etc/heat/heat.conf: heat_waitcondition_server_url = http://192.168.0.6:8000/v1/waitcondition Here 192.168.0.6 needs to be the IP address of the box running heat-api-cfn, and it must be accessible to the instance. Relatedly, the heat-api-cfn service must be installed and running, which means setting the -os-heat-cfn-install/OS_HEAT_CFN_INSTALL option if you installed via packstack. 2. Ensure no firewalls are blocking access - SSH to the instance - Install nmap inside the instance - nmap 192.168.0.6 (using the above URL as an example) - Port tcp/8000 should be open 3. Ensure the instances can connect to the internet - Should be covered by installing nmap above, but if your network configuration is broken and they can't connect to the internet, the install of packages will hang up and the WaitCondition will time out. 
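For the "Port ... is still in use" (HTTP 409) failure above, a leftover neutron port from the earlier failed stack usually has to be cleaned up before retrying — a sketch using the port id from the traceback; check what the port is attached to before deleting anything:

    heat stack-delete openshift
    neutron port-show 9bceb816-93f4-4d72-8ae8-de8e5758bd6d
    neutron port-delete 9bceb816-93f4-4d72-8ae8-de8e5758bd6d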
If all of the above is OK, log on to the instance during the install via SSH and tail /var/log/cloud-init*, looking for errors or a point in the install where it is getting stuck. Also, I assume the image you're using has been prepared as per the instructions in the README.rst? Hope that helps. -- Steve Hardy Red Hat Engineering, Cloud -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From pbrady at redhat.com Fri Jul 18 10:32:15 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Fri, 18 Jul 2014 11:32:15 +0100 Subject: [Rdo-list] Quickstart should mention architecture requirements In-Reply-To: <20140717172306.GB32299@redhat.com> References: <20140717171232.GA32299@redhat.com> <0D9F522988C72B48AD7045FCC7C2F3FE26352DB2@PDGLMPEXCMBX01.corp.siriusxm.com> <20140717172306.GB32299@redhat.com> Message-ID: <53C8F7AF.30309@redhat.com> On 07/17/2014 06:23 PM, Lars Kellogg-Stedman wrote: > On Thu, Jul 17, 2014 at 05:18:15PM +0000, Radtke, James wrote: >> And/Or make the installer fail on a non-supported platform with a more informative message? > > I think "And", because we do need to update the documentation. Done > But yeah, a patch for packstack that would fail early on unsupported > architectures sounds like a good idea. +1 Please add bugs for packstack and foreman. thanks, Pádraig From ben42ml at gmail.com Fri Jul 18 13:10:08 2014 From: ben42ml at gmail.com (Benoit ML) Date: Fri, 18 Jul 2014 15:10:08 +0200 Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device" In-Reply-To: References: Message-ID: Hello, I know that ... I have started with ml2 plugins for this reason. Perhaps it could be a bug in the ml2 plugin? Next week i will try with ml2 and admin credential on neutron, will see. Moreover, for information, the puppet rdo modules from foreman configure everything with the openvswitch plugin by default. Thank you. Regards, 2014-07-17 21:59 GMT+02:00 Miguel Angel : > Be aware that ovs plug in deprecated, and it's going to be removed now > from Juno. This could only harm if you wanted to upgrade to Juno at a > later time. Otherwise it may be ok. > > Could you try using the admin credentials in the settings? > On Jul 17, 2014 4:54 PM, "Benoit ML" wrote: > >> Hello, >> >> Evrything now works. I replace ml2 by openvswitch for the core plugin. >> >> >> ##core_plugin =neutron.plugins.ml2.plugin.Ml2Plugin >> core_plugin =openvswitch >> >> >> Regards, >> >> >> >> 2014-07-16 17:28 GMT+02:00 Benoit ML : >> >>> Hello, >>> >>> Another mail about the problem....
Well i have enable debug = True in >>> keystone.conf >>> >>> And after a nova migrate , when i nova show : >>> >>> ============================================================================== >>> | fault | {"message": "Remote error: >>> Unauthorized {\"error\": {\"message\": \"User >>> 0b45ccc267e04b59911e88381bb450c0 is unauthorized for tenant services\", >>> \"code\": 401, \"title\": \"Unauthorized\"}} | >>> >>> ============================================================================== >>> >>> So well User with id 0b45ccc267e04b59911e88381bb450c0 is neutron : >>> >>> ============================================================================== >>> keystone user-list >>> | 0b45ccc267e04b59911e88381bb450c0 | neutron | True | | >>> >>> ============================================================================== >>> >>> And the role seems good : >>> >>> ============================================================================== >>> keystone user-role-add --user=neutron --tenant=services --role=admin >>> Conflict occurred attempting to store role grant. User >>> 0b45ccc267e04b59911e88381bb450c0 already has role >>> 734c2fb6fb444792b5ede1fa1e17fb7e in tenant dea82f7937064b6da1c370280d8bfdad >>> (HTTP 409) >>> >>> >>> keystone user-role-list --user neutron --tenant services >>> >>> +----------------------------------+-------+----------------------------------+----------------------------------+ >>> | id | name | >>> user_id | tenant_id | >>> >>> +----------------------------------+-------+----------------------------------+----------------------------------+ >>> | 734c2fb6fb444792b5ede1fa1e17fb7e | admin | >>> 0b45ccc267e04b59911e88381bb450c0 | dea82f7937064b6da1c370280d8bfdad | >>> >>> +----------------------------------+-------+----------------------------------+----------------------------------+ >>> >>> keystone tenant-list >>> +----------------------------------+----------+---------+ >>> | id | name | enabled | >>> +----------------------------------+----------+---------+ >>> | e250f7573010415da6f191e0b53faae5 | admin | True | >>> | fa30c6bdd56e45dea48dfbe9c3ee8782 | exploit | True | >>> | dea82f7937064b6da1c370280d8bfdad | services | True | >>> +----------------------------------+----------+---------+ >>> >>> >>> ============================================================================== >>> >>> >>> Really i didn't see where is my mistake ... can you help me plz ? >>> >>> >>> Thank you in advance ! >>> >>> Regards, >>> >>> >>> >>> >>> >>> >>> 2014-07-15 15:13 GMT+02:00 Benoit ML : >>> >>> Hello again, >>>> >>>> Ok on controller node I modify the neutron server configuration with >>>> nova_admin_tenant_id = f23ed5be5f534fdba31d23f60621347d >>>> >>>> where id is "services" in keystone and now it's working with "vif_plugging_is_fatal >>>> = True". Good thing. >>>> >>>> Well by the way the migrate doesnt working ... >>>> >>>> >>>> >>>> >>>> 2014-07-15 14:20 GMT+02:00 Benoit ML : >>>> >>>> Hello, >>>>> >>>>> Thank you for taking time ! 
>>>>> >>>>> Well on the compute node, when i activate "vif_plugging_is_fatal = >>>>> True", the vm creation stuck in spawning state, and in neutron server log i >>>>> have : >>>>> >>>>> ======================================= >>>>> 2014-07-15 14:12:52.351 18448 DEBUG neutron.notifiers.nova [-] Sending >>>>> events: [{'status': 'completed', 'tag': >>>>> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', >>>>> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] send_events >>>>> /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:218 >>>>> 2014-07-15 14:12:52.354 18448 INFO urllib3.connectionpool [-] Starting >>>>> new HTTP connection (1): localhost >>>>> 2014-07-15 14:12:52.360 18448 DEBUG urllib3.connectionpool [-] "POST >>>>> /v2/5c9c186a909e499e9da0dd5cf2c403e0/os-server-external-events HTTP/1.1" >>>>> 401 23 _make_request >>>>> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 >>>>> 2014-07-15 14:12:52.362 18448 INFO urllib3.connectionpool [-] Starting >>>>> new HTTP connection (1): localhost >>>>> 2014-07-15 14:12:52.452 18448 DEBUG urllib3.connectionpool [-] "POST >>>>> /v2.0/tokens HTTP/1.1" 401 114 _make_request >>>>> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295 >>>>> 2014-07-15 14:12:52.453 18448 ERROR neutron.notifiers.nova [-] Failed >>>>> to notify nova on events: [{'status': 'completed', 'tag': >>>>> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', >>>>> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Traceback >>>>> (most recent call last): >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>>>> "/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py", line 221, in >>>>> send_events >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >>>>> batched_events) >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>>>> "/usr/lib/python2.7/site-packages/novaclient/v1_1/contrib/server_external_events.py", >>>>> line 39, in create >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >>>>> return_raw=True) >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>>>> "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova _resp, >>>>> body = self.api.client.post(url, body=body) >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>>>> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova return >>>>> self._cs_request(url, 'POST', **kwargs) >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File >>>>> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 301, in >>>>> _cs_request >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova raise e >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >>>>> Unauthorized: Unauthorized (HTTP 401) >>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova >>>>> 2014-07-15 14:12:58.321 18448 DEBUG neutron.openstack.common.rpc.amqp >>>>> [-] received {u'_context_roles': [u'admin'], u'_context_request_id': >>>>> u'req-9bf35c42-3477-4ed3-8092-af729c21198c', u'_context_read_deleted': >>>>> u'no', u'_context_user_name': None, u'_context_project_name': None, >>>>> u'namespace': None, u'_context_tenant_id': None, u'args': {u'agent_state': >>>>> {u'agent_state': {u'topic': u'N/A', 
u'binary': >>>>> u'neutron-openvswitch-agent', u'host': u'pvidgsh006.pvi', u'agent_type': >>>>> u'Open vSwitch agent', u'configurations': {u'tunnel_types': [u'vxlan'], >>>>> u'tunneling_ip': u'192.168.40.5', u'bridge_mappings': {}, u'l2_population': >>>>> False, u'devices': 1}}}, u'time': u'2014-07-15T12:12:58.313995'}, >>>>> u'_context_tenant': None, u'_unique_id': >>>>> u'7c9a4dfcd256494caf6e1327c8051e29', u'_context_is_admin': True, >>>>> u'version': u'1.0', u'_context_timestamp': u'2014-07-15 12:01:28.190772', >>>>> u'_context_tenant_name': None, u'_context_user': None, u'_context_user_id': >>>>> None, u'method': u'report_state', u'_context_project_id': None} _safe_log >>>>> /usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/common.py:280 >>>>> ======================================= >>>>> >>>>> Well I'm supposed it's related ... Perhaps with those options in >>>>> neutron.conf : >>>>> ====================================== >>>>> notify_nova_on_port_status_changes = True >>>>> notify_nova_on_port_data_changes = True >>>>> nova_url = http://localhost:8774/v2 >>>>> nova_admin_tenant_name = services >>>>> nova_admin_username = nova >>>>> nova_admin_password = nova >>>>> nova_admin_auth_url = http://localhost:35357/v2.0 >>>>> ====================================== >>>>> >>>>> But well didnt see anything wrong ... >>>>> >>>>> Thank you in advance ! >>>>> >>>>> Regards, >>>>> >>>>> >>>>> >>>>> 2014-07-11 16:08 GMT+02:00 Vimal Kumar : >>>>> >>>>> ----- >>>>>> File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line >>>>>> 239, in authenticate\\n content_type="application/json")\\n\', u\' File >>>>>> "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in >>>>>> _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\', >>>>>> u\'Unauthorized: {"error": {"message": "The request you have made requires >>>>>> authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n'] >>>>>> ----- >>>>>> >>>>>> Looks like HTTP connection to neutron server is resulting in 401 >>>>>> error. >>>>>> >>>>>> Try enabling debug mode for neutron server and then tail >>>>>> /var/log/neutron/server.log , hopefully you should get more info. >>>>>> >>>>>> >>>>>> On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML wrote: >>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> Ok I see. Nova telles neutron/openvswitch to create the bridge qbr >>>>>>> prior to the migration itself. >>>>>>> I ve already activate debug and verbose ... But well i'm really >>>>>>> stuck, dont know how and where to search/look ... >>>>>>> >>>>>>> >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> 2014-07-11 15:09 GMT+02:00 Miguel Angel : >>>>>>> >>>>>>> Hi Benoit, >>>>>>>> >>>>>>>> A manual virsh migration should fail, because the >>>>>>>> network ports are not migrated to the destination host. >>>>>>>> >>>>>>>> You must investigate on the authentication problem itself, >>>>>>>> and let nova handle all the underlying API calls which should >>>>>>>> happen... 
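One way to narrow an authentication failure like this down is to take the credentials straight out of the config file and try them against keystone by hand, outside of nova and neutron. A sketch using the nova_admin_* values quoted from neutron.conf above (username nova, password nova, tenant services, auth URL http://localhost:35357/v2.0); the same test applies to the neutron_admin_* options in nova.conf on the compute nodes for the migration case. A 401 here points at the credentials or tenant rather than at the notification code:

    # authenticate exactly as neutron-server would when notifying nova
    keystone --os-username nova --os-password nova \
             --os-tenant-name services \
             --os-auth-url http://localhost:35357/v2.0 token-get

    # the same check with plain curl against the keystone v2.0 tokens API
    curl -s -X POST http://localhost:35357/v2.0/tokens \
         -H 'Content-Type: application/json' \
         -d '{"auth": {"tenantName": "services", "passwordCredentials": {"username": "nova", "password": "nova"}}}'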
>>>>>>>> >>>>>>>> May be it's worth setting nova.conf to debug=True >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> --- >>>>>>>> irc: ajo / mangelajo >>>>>>>> Miguel Angel Ajo Pelayo >>>>>>>> +34 636 52 25 69 >>>>>>>> skype: ajoajoajo >>>>>>>> >>>>>>>> >>>>>>>> 2014-07-11 14:41 GMT+02:00 Benoit ML : >>>>>>>> >>>>>>>> Hello, >>>>>>>>> >>>>>>>>> cat /etc/redhat-release >>>>>>>>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0) >>>>>>>>> >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> >>>>>>>>> >>>>>>>>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets < >>>>>>>>> bderzhavets at hotmail.com>: >>>>>>>>> >>>>>>>>> Could you please post /etc/redhat-release. >>>>>>>>>> >>>>>>>>>> Boris. >>>>>>>>>> >>>>>>>>>> ------------------------------ >>>>>>>>>> Date: Fri, 11 Jul 2014 11:57:12 +0200 >>>>>>>>>> From: ben42ml at gmail.com >>>>>>>>>> To: rdo-list at redhat.com >>>>>>>>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live >>>>>>>>>> migration failed because of "network qbr no such device" >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Hello, >>>>>>>>>> >>>>>>>>>> I'm working on a multi-node setup of openstack Icehouse using >>>>>>>>>> centos7. >>>>>>>>>> Well i have : >>>>>>>>>> - one controllor node with all server services thing stuff >>>>>>>>>> - one network node with openvswitch agent, l3-agent, dhcp-agent >>>>>>>>>> - two compute node with nova-compute and neutron-openvswitch >>>>>>>>>> - one storage nfs node >>>>>>>>>> >>>>>>>>>> NetworkManager is deleted on compute nodes and network node. >>>>>>>>>> >>>>>>>>>> My network use is configured to use vxlan. I can create VM, >>>>>>>>>> tenant-network, external-network, routeur, assign floating-ip to VM, push >>>>>>>>>> ssh-key into VM, create volume from glance image, etc... Evrything is >>>>>>>>>> conected and reacheable. Pretty cool :) >>>>>>>>>> >>>>>>>>>> But when i try to migrate VM things go wrong ... I have >>>>>>>>>> configured nova, libvirtd and qemu to use migration through libvirt-tcp. >>>>>>>>>> I have create and exchanged ssh-key for nova user on all node. I >>>>>>>>>> have verified userid and groupid of nova. >>>>>>>>>> >>>>>>>>>> Well nova-compute log, on the target compute node, : >>>>>>>>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager >>>>>>>>>> [instance: a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: >>>>>>>>>> Unauthorized {"error": {"m >>>>>>>>>> essage": "The request you have made requires authentication.", >>>>>>>>>> "code": 401, "title": "Unauthorized"}} >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> So well after searching a lots in all logs, i have fount that i >>>>>>>>>> cant simply migration VM between compute node with a simple virsh : >>>>>>>>>> virsh migrate instance-00000084 qemu+tcp:///system >>>>>>>>>> >>>>>>>>>> The error is : >>>>>>>>>> erreur :Cannot get interface MTU on 'qbr3ca65809-05': No such >>>>>>>>>> device >>>>>>>>>> >>>>>>>>>> Well when i look on the source hyperviseur the bridge >>>>>>>>>> "qbr3ca65809" exists and have a network tap device. And moreover i >>>>>>>>>> manually create qbr3ca65809 on the target hypervisor, virsh migrate succed ! >>>>>>>>>> >>>>>>>>>> Can you help me plz ? >>>>>>>>>> What can i do wrong ? Perhpas neutron must create the bridge >>>>>>>>>> before migration but didnt for a mis configuration ? >>>>>>>>>> >>>>>>>>>> Plz ask anything you need ! >>>>>>>>>> >>>>>>>>>> Thank you in advance. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The full nova-compute log attached. 
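As a diagnostic only (not a fix), the missing-qbr symptom described above can be reproduced by hand: create the Linux bridge on the destination hypervisor and retry the migration, as the original poster did. Nova's hybrid VIF plugging is what normally creates these qbrXXXXXXXX bridges when the port is plugged, so if a hand-made bridge lets virsh migrate succeed, the real problem is that nova/neutron never got far enough to plug the VIF on the target, which is consistent with the 401 above. A sketch, assuming bridge-utils is installed and reusing the bridge and instance names from the message above, with <destination-host> standing in for the target compute node:

    # on the destination hypervisor: pre-create the bridge nova would normally plug
    brctl addbr qbr3ca65809-05
    ip link set qbr3ca65809-05 up

    # retry the migration from the source hypervisor (add --live for a live migration)
    virsh migrate instance-00000084 qemu+tcp://<destination-host>/system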
>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> -- >>>>>>>>>> Benoit >>>>>>>>>> >>>>>>>>>> _______________________________________________ Rdo-list mailing >>>>>>>>>> list Rdo-list at redhat.com >>>>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> -- >>>>>>>>> Benoit >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Rdo-list mailing list >>>>>>>>> Rdo-list at redhat.com >>>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> -- >>>>>>> Benoit >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Rdo-list mailing list >>>>>>> Rdo-list at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> -- >>>>> Benoit >>>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> Benoit >>>> >>> >>> >>> >>> -- >>> -- >>> Benoit >>> >> >> >> >> -- >> -- >> Benoit >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From juber at mozilla.com Fri Jul 18 17:39:43 2014 From: juber at mozilla.com (uberj) Date: Fri, 18 Jul 2014 10:39:43 -0700 Subject: [Rdo-list] Issues with rdo-release-4 and external networking / br-tun Message-ID: <53C95BDF.5060608@mozilla.com> Hello, I'm attempting to get rdo working on Centos6.5 with external networking. I am following the steps outlined on http://openstack.redhat.com/Neutron_with_existing_external_network To install openstack, I'm running the following commands: sudo yum -y update sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm sudo yum install -y openstack-packstack packstack --allinone --provision-all-in-one-ovs-bridge=n --os-client-install=y --os-heat-install=y After it completes I do 'tail -f /var/log/neutron/*.log' and then 'service neutron-openvswitch-agent restart'. In the neutron log I see the following errors: 2014-07-18 16:06:03.255 28794 ERROR neutron.agent.linux.ovs_lib [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute ['ovs-ofctl', 'del-flows', 'br-tun']. Exception: Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'del-flows', 'br-tun'] Exit code: 1 Stdout: '' Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' 2014-07-18 16:06:03.325 28794 ERROR neutron.agent.linux.ovs_lib [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute ['ovs-ofctl', 'add-flow', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=resubmit(,1)']. Exception: Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'add-flow', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=resubmit(,1)'] Exit code: 1 Stdout: '' Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' 2014-07-18 16:06:03.380 28794 ERROR neutron.agent.linux.ovs_lib [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute ['ovs-ofctl', 'add-flow', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=0,actions=drop']. 
Exception: Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'add-flow', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=0,actions=drop'] Exit code: 1 Stdout: '' Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' When I go to look for br-tun I do 'ovs-vsctl show' and I'll *sometimes* see: ... Bridge br-tun Port br-tun Interface br-tun type: internal ... Now, this is kind of weird, but if I do "watch -n 0.5 ovs-vsctl show" I don't always see br-tun! In fact the br-tun seems to jump around a lot (sometimes its there, sometimes its listed above br-ex, sometimes below.) More info: [root at localhost ~]# ovsdb-server --version ovsdb-server (Open vSwitch) 1.11.0 Compiled Jul 30 2013 18:14:53 [root at localhost ~]# ovs-vswitchd --version ovs-vswitchd (Open vSwitch) 1.11.0 Compiled Jul 30 2013 18:14:54 OpenFlow versions 0x1:0x1 Any help would be appreciated. -- (uberj) Jacques Uber Mozilla IT -------------- next part -------------- An HTML attachment was scrubbed... URL: From acvelez at vidalinux.com Fri Jul 18 18:39:10 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Fri, 18 Jul 2014 14:39:10 -0400 (AST) Subject: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED In-Reply-To: <1007626789.57671.1405675321782.JavaMail.zimbra@vidalinux.net> References: <1403903244.57130.1405589573110.JavaMail.zimbra@vidalinux.net> <788446832.57133.1405589819914.JavaMail.zimbra@vidalinux.net> <20140717110553.GC10151@t430slt.redhat.com> <1007626789.57671.1405675321782.JavaMail.zimbra@vidalinux.net> Message-ID: <1891440403.57944.1405708750248.JavaMail.zimbra@vidalinux.net> Steve, Nevermind, I fix the issue, I was calling the same port for OpenShiftNode, changing this got it working. Thanks! ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com ----- Original Message ----- From: "Antonio C. 
Velez" To: "Steven Hardy" Cc: rdo-list at redhat.com Sent: Friday, July 18, 2014 5:22:01 AM Subject: Re: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED Steven, I manage to understand centos6.5 templates and get it to work, then now I got another error, 2014-07-18 04:47:26.117 6759 ERROR heat.engine.resource [-] CREATE : Server "OpenShiftNode" Stack "openshift" [2d5caea3-ca3e-47f1-92e0-898109d671dd] 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource Traceback (most recent call last): 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 417, in _do_action 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource handle()) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 535, in handle_create 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource admin_pass=admin_pass) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 871, in create 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource **boot_kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 534, in _boot 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource return_raw=return_raw, **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource _resp, body = self.api.client.post(url, body=body) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource return self._cs_request(url, 'POST', **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 286, in _cs_request 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 268, in _time_request 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource resp, body = self.request(url, method, **kwargs) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 262, in request 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource raise exceptions.from_response(resp, body, url, method) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource Conflict: Port 9bceb816-93f4-4d72-8ae8-de8e5758bd6d is still in use. (HTTP 409) (Request-ID: req-685b5295-4918-417d-894d-d416bf9e7b1c) 2014-07-18 04:47:26.117 6759 TRACE heat.engine.resource 2014-07-18 04:47:26.848 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED Any advice? Thanks! ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com ----- Original Message ----- From: "Steven Hardy" To: "Antonio C. 
Velez" Cc: rdo-list at redhat.com Sent: Thursday, July 17, 2014 7:05:54 AM Subject: Re: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED On Thu, Jul 17, 2014 at 05:36:59AM -0400, Antonio C. Velez wrote: > Hi everyone, > > I'm trying to build openshift-origin using heat template: https://github.com/openstack/heat-templates/tree/master/openshift-origin/F19 the BrokerFlavor complete without issues, but stops giving the following error: > > 2014-07-17 04:17:50.266 6759 INFO heat.engine.resource [-] creating WaitCondition "BrokerWaitCondition" Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] > 2014-07-17 05:16:28.484 6759 INFO heat.engine.scheduler [-] Task stack_task from Stack "openshift" [d9c72c56-8d90-47fe-9036-084146eeb175] timed out > 2014-07-17 05:16:28.704 6759 WARNING heat.engine.service [-] Stack create failed, status FAILED > > I tried increasing the BrokerWaitCondition timeout but doesn't help. Please check the following: 1. heat_waitcondition_server_url is set correctly in your /etc/heat/heat.conf: heat_waitcondition_server_url = http://192.168.0.6:8000/v1/waitcondition Here 192.168.0.6 needs to be the IP address of the box running heat-api-cfn, and it must be accessible to the instance. Relatedly, the heat-api-cfn service must be installed and running, which means setting the -os-heat-cfn-install/OS_HEAT_CFN_INSTALL option if you installed via packstack. 2. Ensure no firewalls are blocking access - SSH to the instance - Install nmap inside the instance - nmap 192.168.0.6 (using the above URL as an example) - Port tcp/8000 should be open 3. Ensure the instances can connect to the internet - Should be covered by installing nmap above, but if your network configuration is broken and they can't connect to the internet, the install of packages will hang up and the WaitCondition will time out. If all of the above is OK, log on to the instance during the install via SSH and tail /var/log/cloud-init*, looking for errors or a point in the install where it is getting stuck. Also, I assume the image you're using has been prepared as per the instructions in the README.rst? Hope that helps. -- Steve Hardy Red Hat Engineering, Cloud -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From rbowen at redhat.com Fri Jul 18 20:22:13 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 18 Jul 2014 16:22:13 -0400 Subject: [Rdo-list] Quickstart should mention architecture requirements In-Reply-To: <20140717171232.GA32299@redhat.com> References: <20140717171232.GA32299@redhat.com> Message-ID: <53C981F5.2040303@redhat.com> Looks like Padraig went ahead and made this edit this morning. On 07/17/2014 01:12 PM, Lars Kellogg-Stedman wrote: > I just spent some time debugging an issue on #rdo in which someone > appeared to have done everything correctly but was unable to install > RDO because of several missing packages. > > It turns out this was because they were working with an i686 CentOS > image. > > I think we need to update the "Prerequisites" section of the > Quickstart document (http://openstack.redhat.com/Quickstart) to > indicate that we only support x86_64, because otherwise this is a > tricky failure mode to detect: there are no particular errors, and all > the .noarch packages still show up, so the problem is not immediately > obvious. 
> -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From acvelez at vidalinux.com Sun Jul 20 16:57:58 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Sun, 20 Jul 2014 12:57:58 -0400 (AST) Subject: [Rdo-list] Openshift inside Openstack ? how to access internal network from external ? In-Reply-To: <1775788621.58571.1405875291856.JavaMail.zimbra@vidalinux.net> Message-ID: <524756999.58575.1405875478166.JavaMail.zimbra@vidalinux.net> I successfully install openshift origin inside openstack, but I cannot access my openshift apps from my external network! broker and node already have floating ips but the dns on broker assign internal ips for my apps! what the correct procedure to fix this issue? Thanks in advance!!! ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From johndecot at gmail.com Mon Jul 21 09:06:53 2014 From: johndecot at gmail.com (john decot) Date: Mon, 21 Jul 2014 14:51:53 +0545 Subject: [Rdo-list] Icehouse MariaDB-Galera -Server problem. Message-ID: Hi, I am new to openstack. I am on the way to RDO for the installation of openstack. packstack --allinone command generates output error : cannot find mariadb-galera-server in repo. any help will be appreciated. Thank You, John. -------------- next part -------------- An HTML attachment was scrubbed... URL: From psuriset at linux.vnet.ibm.com Mon Jul 21 12:29:25 2014 From: psuriset at linux.vnet.ibm.com (Pradeep Kumar Surisetty) Date: Mon, 21 Jul 2014 17:59:25 +0530 Subject: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack Message-ID: <53CD07A5.1040206@linux.vnet.ibm.com> Hi All I have been trying to set instack with RDO. I have successfully installed undercloud and moving on to overcloud. Now, when I run "instack-deploy-overcloud", I get the following error: |+ OVERCLOUD_YAML_PATH=overcloud.yaml + heat stack-create-f overcloud.yaml-PAdminToken=b003d63242f5db3e1ad4864ae66911e02ba19bcb-PAdminPassword=7bfe4d4a18280752ad07f259a69a3ed00db2ab44 -PCinderPassword=df0893b4355f3511a6d67538dd592d02d1bc11d3-PGlancePassword=066f65f878157b438a916ccbd44e0b7037ee118f -PHeatPassword=58fda0e4d6708e0164167b11fe6fca6ab6b35ec6 -PNeutronPassword=80853ad029feb77bb7c60d035542f21aa5c24177 -PNovaPassword=331474580be53b78e40c91dfdfc2323578a035e7 -PNeutronPublicInterface=eth0-PSwiftPassword=b0eca57b45ebf3dd5cae071dc3880888fb1d4840-PSwiftHashSuffix=a8d87f3952d6f91da589fbef801bb92141fd1461-PNovaComputeLibvirtType=qemu-P'GlanceLogFile='\'''\''' -PNeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud +--------------------------------------+------------+--------------------+----------------------+ | id| stack_name| stack_status| creation_time| +--------------------------------------+------------+--------------------+----------------------+ | 0ca028e7-682b-41ef-8af0-b2eb67bee272| overcloud| CREATE_IN_PROGRESS| 2014-07-18T10:50:48Z | +--------------------------------------+------------+--------------------+----------------------+ + tripleo wait_for_stack_ready220 10 overcloud Command output matched'CREATE_FAILED'. Exiting... |Now, i understand that the stack isn't being created. 
So, I tried to check out the state of the stack: |[stack at localhost~]$ heat stack-list +--------------------------------------+------------+---------------+----------------------+ | id| stack_name| stack_status| creation_time| +--------------------------------------+------------+---------------+----------------------+ | 0ca028e7-682b-41ef-8af0-b2eb67bee272| overcloud| CREATE_FAILED| 2014-07-18T10:50:48Z | +--------------------------------------+------------+---------------+----------------------+ | i even tried to create stack manually, but ended up getting the same error. Update: Here is the heat log: |2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back tousing default 2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back tousing default 2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource[-] CREATE: Server "SwiftStorage0" [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack "overcloud" [0ca028e7-682b-41ef-8af0-b2eb67bee272] 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourceTraceback (most recent calllast): 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourceFile "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line420, in _do_action 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourcewhile not check(handle_data): 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourceFile "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line545, in check_create_complete 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourcereturn self._check_active(server) 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourceFile "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line561, in _check_active 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourceraise exc 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resourceError: Creation of server overcloud-SwiftStorage0-qdjqbif6peva failed. 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource 2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back tousing default 2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back tousing default 2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back tousing default 2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back tousing default 2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back tousing default 2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient[-] stack_user_domain IDnot set in heat.conf falling back... Earlier posted in wrong/different forum. Ref: https://ask.openstack.org/en/question/43017/heat-is-not-able-to-create-stack-with-instack/ | --Pradeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From xzhao at bnl.gov Mon Jul 21 14:59:08 2014 From: xzhao at bnl.gov (Zhao, Xin) Date: Mon, 21 Jul 2014 10:59:08 -0400 Subject: [Rdo-list] packstack error messages Message-ID: <53CD2ABC.602@bnl.gov> Hello, I am installing icehouse from RDO, on a RHEL7 VM, using packstack. Get the following error: ...... 
Applying 130.199.185.76_glance.pp Applying 130.199.185.76_cinder.pp 130.199.185.76_keystone.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 130.199.185.76_keystone.pp Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object has no attribute '__getitem__' You will find full trace in log /var/tmp/packstack/20140721-102751-Mviqzb/manifests/130.199.185.76_keystone.pp.log Please check log file /var/tmp/packstack/20140721-102751-Mviqzb/openstack-setup.log for more information Any idea what caused this ? I am using the rdo repo from rdo-release-icehouse-4 rpm. Thanks in advance, Xin From jslagle at redhat.com Mon Jul 21 16:23:22 2014 From: jslagle at redhat.com (James Slagle) Date: Mon, 21 Jul 2014 12:23:22 -0400 Subject: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack In-Reply-To: <53CD07A5.1040206@linux.vnet.ibm.com> References: <53CD07A5.1040206@linux.vnet.ibm.com> Message-ID: <20140721162322.GD10147@teletran-1> On Mon, Jul 21, 2014 at 05:59:25PM +0530, Pradeep Kumar Surisetty wrote: > Hi All > > I have been trying to set instack with RDO. I have successfully installed > undercloud and moving on to overcloud. Now, when I run > "instack-deploy-overcloud", I get the following error: > > + OVERCLOUD_YAML_PATH=overcloud.yaml > + heat stack-create -f overcloud.yaml -P AdminToken=b003d63242f5db3e1ad4864ae66911e02ba19bcb -P AdminPassword=7bfe4d4a18280752ad07f259a69a3ed00db2ab44 -P CinderPassword=df0893b4355f3511a6d67538dd592d02d1bc11d3 -P GlancePassword=066f65f878157b438a916ccbd44e0b7037ee! > 118f -P HeatPassword=58fda0e4d6708e0164167b11fe6fca6ab6b35ec6 -P NeutronPassword=80853ad029feb77bb7c60d035542f21aa5c24177 -P NovaPassword=331474580be53b78e40c91dfdfc2323578a035e7 -P NeutronPublicInterface=eth0 -P SwiftPassword=b0eca57b45ebf3dd5cae071dc3880888fb1d4840 -P SwiftHashSuffix=a8d87f3952d6f91da589fbef801bb92141fd1461 -P NovaComputeLibvirtType=qemu -P 'GlanceLogFile='\'''\''' -P NeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud > +--------------------------------------+------------+--------------------+----------------------+ > | id | stack_name | stack_status | creation_time | > +--------------------------------------+------------+--------------------+----------------------+ > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_IN_PROGRESS | 2014-07-18T10:50:48Z | > +--------------------------------------+------------+--------------------+----------------------+ > + tripleo wait_for_stack_ready 220 10 overcloud > Command output matched 'CREATE_FAILED'. Exiting... > > Now, i understand that the stack isn't being created. So, I tried to check out the state of the stack: > > [stack at localhost ~]$ heat stack-list > +--------------------------------------+------------+---------------+----------------------+ > | id | stack_name | stack_status | creation_time | > +--------------------------------------+------------+---------------+----------------------+ > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_FAILED | 2014-07-18T10:50:48Z | > +--------------------------------------+------------+---------------+----------------------+ > > > i even tried to create stack manually, but ended up getting the same > error. 
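When a stack ends up in CREATE_FAILED like this, heat and nova can usually point at the failing resource before one digs through the logs. A rough sketch, assuming the stack is named overcloud as above, the undercloud credentials are sourced, and <instance-id> stands for the failed server's id from nova list:

    # which resource failed, and the event that failed it
    heat resource-list overcloud
    heat event-list overcloud

    # for a failed OS::Nova::Server resource, nova show carries the fault detail
    nova list
    nova show <instance-id>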
> > Update: Here is the heat log: > > 2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource [-] CREATE : Server "SwiftStorage0" [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack "overcloud" [0ca028e7-682b-41ef-8af0-b2eb67bee272] > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Traceback (most recent call last): > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 420, in _do_action > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource while not check(handle_data): > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 545, in check_create_complete > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource return self._check_active(server) > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 561, in _check_active > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource raise exc > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Error: Creation of server overcloud-SwiftStorage0-qdjqbif6peva failed. > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource > 2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back ... Hi Pradeep, Can you run a "nova show " on the failed instance? And also provide any tracebacks or errors from the nova compute log under /var/log/nova? -- -- James Slagle -- From frizop at gmail.com Mon Jul 21 23:25:08 2014 From: frizop at gmail.com (Nathan M.) Date: Mon, 21 Jul 2014 16:25:08 -0700 Subject: [Rdo-list] Icehouse MariaDB-Galera -Server problem. In-Reply-To: References: Message-ID: I believe you'll need the EPEL repo installed prior to this on that system: epel/pkgtags | 879 kB 00:00 =============================================== N/S Matched: mariadb-galera-server =============================================== mariadb-galera-server.x86_64 : The MariaDB server and related files On Mon, Jul 21, 2014 at 2:06 AM, john decot wrote: > Hi, > > I am new to openstack. I am on the way to RDO for the installation of > openstack. > > > packstack --allinone command generates output error : cannot find > mariadb-galera-server in repo. > > any help will be appreciated. > > > Thank You, > > John. 
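If the repository set is the suspect, a quick check is to add EPEL next to the RDO repo and ask yum whether the package resolves at all. A sketch for a CentOS 6.x x86_64 host (the epel-release URL and version were the ones current at the time and may have moved since); note that later replies in this thread point at the i686 image rather than the repositories:

    # add EPEL on CentOS 6.x; skip if it is already configured
    rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

    # confirm the package is resolvable from the enabled repos
    yum clean all
    yum info mariadb-galera-server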
> > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yguenane at gmail.com Mon Jul 21 23:25:46 2014 From: yguenane at gmail.com (Yanis Guenane) Date: Mon, 21 Jul 2014 19:25:46 -0400 Subject: [Rdo-list] packstack error messages In-Reply-To: <53CD2ABC.602@bnl.gov> References: <53CD2ABC.602@bnl.gov> Message-ID: <53CDA17A.2000709@gmail.com> > Hello, > > I am installing icehouse from RDO, on a RHEL7 VM, using packstack. Get > the following error: > > ...... > > Applying 130.199.185.76_glance.pp > Applying 130.199.185.76_cinder.pp > 130.199.185.76_keystone.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 130.199.185.76_keystone.pp > Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: > Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint > http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object > has no attribute '__getitem__' > You will find full trace in log > /var/tmp/packstack/20140721-102751-Mviqzb/manifests/130.199.185.76_keystone.pp.log > > Please check log file > /var/tmp/packstack/20140721-102751-Mviqzb/openstack-setup.log for more > information > > > Any idea what caused this ? I am using the rdo repo from > rdo-release-icehouse-4 rpm. > > Thanks in advance, > Xin Hi Xin, Not sure why you have the error, but since I don't know the setup one remark, the IP you seem to install packstack on is 130.199.185.76, but yet os-endpoint is specified to be 127.0.0.1, are you sure it is listening on this IP? What does `netstat -tlnp` gives you? Also could you please paste the logs, we might get a better idea from them. -- Yanis Guenane From kchamart at redhat.com Tue Jul 22 05:16:19 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 22 Jul 2014 10:46:19 +0530 Subject: [Rdo-list] Issues with rdo-release-4 and external networking / br-tun In-Reply-To: <53C95BDF.5060608@mozilla.com> References: <53C95BDF.5060608@mozilla.com> Message-ID: <20140722051619.GB30129@tesla.redhat.com> On Fri, Jul 18, 2014 at 10:39:43AM -0700, uberj wrote: [. . .] > 2014-07-18 16:06:03.255 28794 ERROR neutron.agent.linux.ovs_lib > [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute > ['ovs-ofctl', 'del-flows', 'br-tun']. Exception: > Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', > 'ovs-ofctl', 'del-flows', 'br-tun'] > Exit code: 1 > Stdout: '' > Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' > 2014-07-18 16:06:03.325 28794 ERROR neutron.agent.linux.ovs_lib > [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute > ['ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=resubmit(,1)']. > Exception: > Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', > 'ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=resubmit(,1)'] > Exit code: 1 > Stdout: '' > Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' > 2014-07-18 16:06:03.380 28794 ERROR neutron.agent.linux.ovs_lib > [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute > ['ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=0,actions=drop']. 
Exception: > Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', > 'ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=0,actions=drop'] > Exit code: 1 > Stdout: '' > Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' >From here[1], the above seems (at-least on Ubuntu machines) like an incompatibility between OVS (1.10.2) and Kernel (3.11.0-20). But I see from below that you're using OVS 1.11.0. I don't have a 6.5 machine handy to check. You might want to ensure you have whatever newest Kernel/OVS/openstack-neutron packages available for CentOS 6.5 RDO. Also, here's some configs that worked for me w/ IceHouse+Neutron+GRE on Fedora-20 RDO. [1] https://ask.openstack.org/en/question/30014/openvswitch-module-verification-failed-signature-andor-required-key-missing-tainting-kernel/ [2] http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt > > When I go to look for br-tun I do 'ovs-vsctl show' and I'll *sometimes* see: > > ... > Bridge br-tun > Port br-tun > Interface br-tun > type: internal > ... > > Now, this is kind of weird, but if I do "watch -n 0.5 ovs-vsctl show" I > don't always see br-tun! In fact the br-tun seems to jump around a lot > (sometimes its there, sometimes its listed above br-ex, sometimes below.) > > More info: > > [root at localhost ~]# ovsdb-server --version > ovsdb-server (Open vSwitch) 1.11.0 > Compiled Jul 30 2013 18:14:53 > [root at localhost ~]# ovs-vswitchd --version > ovs-vswitchd (Open vSwitch) 1.11.0 > Compiled Jul 30 2013 18:14:54 > OpenFlow versions 0x1:0x1 > > > Any help would be appreciated. > -- /kashyap From johndecot at gmail.com Tue Jul 22 05:32:24 2014 From: johndecot at gmail.com (john decot) Date: Tue, 22 Jul 2014 11:17:24 +0545 Subject: [Rdo-list] Icehouse MariaDB-Galera -Server problem. In-Reply-To: References: <53CD2B6E.9040607@mozilla.com> Message-ID: Hi, the output of uname -a is Linux virtualbox.localdomain 2.6.32-431.20.3.el6.i686 #1 SMP Thu Jun 19 19:51:30 UTC 2014 i686 i686 i386 GNU/Linux John. On Tue, Jul 22, 2014 at 6:19 AM, john decot wrote: > Hi, > the output of uname -a is > > Linux virtualbox.localdomain 2.6.32-431.20.3.el6.i686 #1 SMP Thu Jun 19 > 19:51:30 UTC 2014 i686 i686 i386 GNU/Linux > > John. > > > > On Mon, Jul 21, 2014 at 8:47 PM, uberj wrote: > >> Hi John, >> >> I have seen similar symptoms (specifically the absense of >> mariadb-galera-server) when I trying to install packstack on a non x86 >> version of centos. What does 'uname -a' say? Some packages needed by >> packstack are pinned to the x86 architecture so that when you install rdo >> on something other than x86 you only will be installing the noarch packages >> and miss any architecture specific packages. >> >> >> On 07/21/2014 02:06 AM, john decot wrote: >> >> Hi, >> >> I am new to openstack. I am on the way to RDO for the installation >> of openstack. >> >> >> packstack --allinone command generates output error : cannot find >> mariadb-galera-server in repo. >> >> any help will be appreciated. >> >> >> Thank You, >> >> John. >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing listRdo-list at redhat.comhttps://www.redhat.com/mailman/listinfo/rdo-list >> >> >> -- >> (uberj) Jacques Uber >> Mozilla IT >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
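Returning to the br-tun errors reported earlier in this digest: besides updating the kernel/OVS packages, a couple of low-level checks can show whether the bridge really exists in ovsdb and whether ovs-vswitchd is repeatedly dropping and re-adding it. A sketch, assuming a stock RDO Open vSwitch install with its logs under /var/log/openvswitch:

    # exit status 0 means ovsdb knows about br-tun, 2 means it does not
    ovs-vsctl br-exists br-tun; echo $?
    ovs-vsctl list-br

    # look for the bridge being created and deleted in a loop
    grep br-tun /var/log/openvswitch/ovs-vswitchd.log | tail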
URL: From peeygupt at in.ibm.com Tue Jul 22 06:26:28 2014 From: peeygupt at in.ibm.com (Peeyush Gupta) Date: Tue, 22 Jul 2014 11:56:28 +0530 Subject: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack In-Reply-To: <20140721162322.GD10147@teletran-1> References: <53CD07A5.1040206@linux.vnet.ibm.com> <20140721162322.GD10147@teletran-1> Message-ID: Hi James, Here are the details of the failed instance. Interestingly enough, when I re-ran the overcloud deployment, this time, swift didn't fail, it was actually a notcompute instance. Here are the details: [stack at localhost ~]$ nova list +--------------------------------------+--------------------------------------+--------+------------+-------------+---------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+--------------------------------------+--------+------------+-------------+---------------------+ | b815d299-4817-4491-b352-d09ab618bd77 | overcloud-BlockStorage0-zwfiycn67hpc | ACTIVE | - | Running | ctlplane=192.0.2.15 | | 9b646a2a-d4b3-438b-94de-4e14bdbe1432 | overcloud-NovaCompute0-ubp6vlfjjepu | ACTIVE | - | Running | ctlplane=192.0.2.14 | | c0ad2f8f-ef91-4a6d-b229-c52b4a89bedd | overcloud-SwiftStorage0-xns634un3z7k | ACTIVE | - | Running | ctlplane=192.0.2.16 | | 11d91a42-4244-48d7-8c4b-92db3a9c43b6 | overcloud-notCompute0-bw4mq7v2sh5y | ERROR | - | NOSTATE | | +--------------------------------------+--------------------------------------+--------+------------+-------------+---------------------+ [stack at localhost ~]$ nova show 11d91a42-4244-48d7-8c4b-92db3a9c43b6 +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Property | Value | +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000016 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | error | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-07-21T11:18:32Z | | fault | {"message": "No valid host was found. 
", "code": 500, "details": " File \"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py\", line 108, in schedule_run_instance | | | raise exception.NoValidHost (reason=\"\") | | | ", "created": "2014-07-21T11:18:32Z"} | | flavor | baremetal (b564fd03-bc8d-42f4-8c5b-264cfa62a655) | | hostId | | | id | 11d91a42-4244-48d7-8c4b-92db3a9c43b6 | | image | overcloud-control (cb938db4-2f1a-44d6-96f9-016c8cc7b406) | | key_name | default | | metadata | {} | | name | overcloud-notCompute0-bw4mq7v2sh5y | | os-extended-volumes:volumes_attached | [] | | status | ERROR | | tenant_id | ae8b85d781ad443792f2a3516f38ed88 | | updated | 2014-07-21T11:18:32Z | | user_id | 921abf3732ce40d0b1502e9aa13c6c2a | +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ The only relevant log I could found is with nova scheduler: 2014-07-21 04:25:41.473 31078 WARNING nova.scheduler.driver [req-a00295d2-7308-4bb9-a40d-0740fe852bf5 921abf3732ce40d0b1502e9aa13c6c2a ae8b85d781ad443792f2a3516f38ed88] [instance: b3b3f6f9-25fc-45db-b14e-f7daae1f3216] Setting instance to ERROR state. Though I would like to point out here that I am seeing 5 VMs in my setup instead of 4. [stack at devstack ~]$ virsh list --all Id Name State ---------------------------------------------------- 2 instack running 9 baremetal_1 running 10 baremetal_2 running 11 baremetal_3 running - baremetal_0 shut off Regards, Peeyush Gupta From: James Slagle To: Pradeep Kumar Surisetty Cc: rdo-list at redhat.com, deepthi at linux.vnet.ibm.com, Peeyush Gupta/India/IBM at IBMIN, Pradeep K Surisetty/India/IBM at IBMIN, anantyog at linux.vnet.ibm.com Date: 07/21/2014 09:50 PM Subject: Re: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack On Mon, Jul 21, 2014 at 05:59:25PM +0530, Pradeep Kumar Surisetty wrote: > Hi All > > I have been trying to set instack with RDO. I have successfully installed > undercloud and moving on to overcloud. Now, when I run > "instack-deploy-overcloud", I get the following error: > > + OVERCLOUD_YAML_PATH=overcloud.yaml > + heat stack-create -f overcloud.yaml -P AdminToken=b003d63242f5db3e1ad4864ae66911e02ba19bcb -P AdminPassword=7bfe4d4a18280752ad07f259a69a3ed00db2ab44 -P CinderPassword=df0893b4355f3511a6d67538dd592d02d1bc11d3 -P GlancePassword=066f65f878157b438a916ccbd44e0b7037ee! > 118f -P HeatPassword=58fda0e4d6708e0164167b11fe6fca6ab6b35ec6 -P NeutronPassword=80853ad029feb77bb7c60d035542f21aa5c24177 -P NovaPassword=331474580be53b78e40c91dfdfc2323578a035e7 -P NeutronPublicInterface=eth0 -P SwiftPassword=b0eca57b45ebf3dd5cae071dc3880888fb1d4840 -P SwiftHashSuffix=a8d87f3952d6f91da589fbef801bb92141fd1461 -P NovaComputeLibvirtType=qemu -P 'GlanceLogFile='\'''\''' -P NeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud > +--------------------------------------+------------+--------------------+----------------------+ > | id | stack_name | stack_status | creation_time | > +--------------------------------------+------------+--------------------+----------------------+ > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_IN_PROGRESS | 2014-07-18T10:50:48Z | > +--------------------------------------+------------+--------------------+----------------------+ > + tripleo wait_for_stack_ready 220 10 overcloud > Command output matched 'CREATE_FAILED'. Exiting... > > Now, i understand that the stack isn't being created. 
So, I tried to check out the state of the stack: > > [stack at localhost ~]$ heat stack-list > +--------------------------------------+------------+---------------+----------------------+ > | id | stack_name | stack_status | creation_time | > +--------------------------------------+------------+---------------+----------------------+ > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_FAILED | 2014-07-18T10:50:48Z | > +--------------------------------------+------------+---------------+----------------------+ > > > i even tried to create stack manually, but ended up getting the same > error. > > Update: Here is the heat log: > > 2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource [-] CREATE : Server "SwiftStorage0" [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack "overcloud" [0ca028e7-682b-41ef-8af0-b2eb67bee272] > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Traceback (most recent call last): > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 420, in _do_action > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource while not check(handle_data): > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 545, in check_create_complete > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource return self._check_active(server) > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 561, in _check_active > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource raise exc > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Error: Creation of server overcloud-SwiftStorage0-qdjqbif6peva failed. > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource > 2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default > 2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back ... Hi Pradeep, Can you run a "nova show " on the failed instance? And also provide any tracebacks or errors from the nova compute log under /var/log/nova? -- -- James Slagle -- From acvelez at vidalinux.com Tue Jul 22 07:04:36 2014 From: acvelez at vidalinux.com (Antonio C. Velez) Date: Tue, 22 Jul 2014 03:04:36 -0400 (AST) Subject: [Rdo-list] Openshift inside Openstack ? how to access internal network from external ? 
In-Reply-To: <524756999.58575.1405875478166.JavaMail.zimbra@vidalinux.net> References: <524756999.58575.1405875478166.JavaMail.zimbra@vidalinux.net> Message-ID: <740843063.59660.1406012676532.JavaMail.zimbra@vidalinux.net> Nevermind, I already answer my own question, for those who have the same problem you need to especify the following settings for broker and node in your heat template: nameserver_ip_addr => 'broker_floating_ip', broker_ip_addr => 'broker_floating_ip', node_ip_addr => 'node_floating_ip', Hope it helps! ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com ----- Original Message ----- From: "Antonio C. Velez" To: rdo-list at redhat.com Sent: Sunday, July 20, 2014 12:57:58 PM Subject: Openshift inside Openstack ? how to access internal network from external ? I successfully install openshift origin inside openstack, but I cannot access my openshift apps from my external network! broker and node already have floating ips but the dns on broker assign internal ips for my apps! what the correct procedure to fix this issue? Thanks in advance!!! ------------------ Antonio C. Velez Baez Linux Consultant Vidalinux.com RHCE, RHCI, RHCX, RHCOE Red Hat Certified Training Center Email: acvelez at vidalinux.com Tel: 1-787-439-2983 Skype: vidalinuxpr Twitter: @vidalinux.com Website: www.vidalinux.com -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From patrick at laimbock.com Tue Jul 22 10:11:58 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Tue, 22 Jul 2014 12:11:58 +0200 Subject: [Rdo-list] Icehouse MariaDB-Galera -Server problem. In-Reply-To: References: <53CD2B6E.9040607@mozilla.com> Message-ID: <53CE38EE.7040000@laimbock.com> Hi John, On 22-07-14 07:32, john decot wrote: > Hi, > the output of uname -a is > > Linux virtualbox.localdomain 2.6.32-431.20.3.el6.i686 #1 SMP Thu Jun 19 > 19:51:30 UTC 2014 i686 i686 i386 GNU/Linux Under 'Step 0: Prerequisites' at http://openstack.redhat.com/Quickstart it says 'x86_64 is currently the only supported architecture' which you don't seem to be using (i686 versus x86_64). Try running packstack on a x86_64 (virtual) box with CentOS 6.5 x86_64 (just follow the steps again). 
HTH, Patrick From xzhao at bnl.gov Tue Jul 22 15:15:32 2014 From: xzhao at bnl.gov (Zhao, Xin) Date: Tue, 22 Jul 2014 11:15:32 -0400 Subject: [Rdo-list] packstack error messages In-Reply-To: <53CDA17A.2000709@gmail.com> References: <53CD2ABC.602@bnl.gov> <53CDA17A.2000709@gmail.com> Message-ID: <53CE8014.6090308@bnl.gov> Hi Yanis, Here is the output of netstat: # netstat -lpnt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1376/sendmail: acce tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 12717/sshd: root at pt tcp 0 0 0.0.0.0:35357 0.0.0.0:* LISTEN 7209/python tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 7800/python tcp 0 0 0.0.0.0:8776 0.0.0.0:* LISTEN 13785/python tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 7209/python tcp 0 0 0.0.0.0:41193 0.0.0.0:* LISTEN 3774/beam.smp tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 6051/mysqld tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 7825/python tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 980/sshd tcp6 0 0 ::1:6010 :::* LISTEN 12717/sshd: root at pt tcp6 0 0 :::5672 :::* LISTEN 3774/beam.smp tcp6 0 0 :::22 :::* LISTEN 980/sshd I also attach the two log files mentioned in the stdout messages. Thanks, Xin On 7/21/2014 7:25 PM, Yanis Guenane wrote: >> Hello, >> >> I am installing icehouse from RDO, on a RHEL7 VM, using packstack. Get >> the following error: >> >> ...... >> >> Applying 130.199.185.76_glance.pp >> Applying 130.199.185.76_cinder.pp >> 130.199.185.76_keystone.pp: [ ERROR ] >> Applying Puppet manifests [ ERROR ] >> >> ERROR : Error appeared during Puppet run: 130.199.185.76_keystone.pp >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: >> Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint >> http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object >> has no attribute '__getitem__' >> You will find full trace in log >> /var/tmp/packstack/20140721-102751-Mviqzb/manifests/130.199.185.76_keystone.pp.log >> >> Please check log file >> /var/tmp/packstack/20140721-102751-Mviqzb/openstack-setup.log for more >> information >> >> >> Any idea what caused this ? I am using the rdo repo from >> rdo-release-icehouse-4 rpm. >> >> Thanks in advance, >> Xin > Hi Xin, > > Not sure why you have the error, but since I don't know the setup one > remark, the IP you seem to install packstack on is 130.199.185.76, but > yet os-endpoint is specified to be 127.0.0.1, are you sure it is > listening on this IP? What does `netstat -tlnp` gives you? > > Also could you please paste the logs, we might get a better idea from them. > > -- > Yanis Guenane > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults Warning: Scope(Class[Keystone]): token_format parameter is deprecated. Use token_provider instead. Warning: Scope(Class[Keystone::Endpoint]): The public_address parameter is deprecated, use public_url instead. Warning: Scope(Class[Keystone::Endpoint]): The internal_address parameter is deprecated, use internal_url instead. Warning: Scope(Class[Keystone::Endpoint]): The admin_address parameter is deprecated, use admin_url instead. Warning: Scope(Class[Nova::Keystone::Auth]): The cinder parameter is deprecated and has no effect. 
Notice: Compiled catalog for grid19.racf.bnl.gov in environment production in 1.82 seconds Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false. (at /usr/share/ruby/vendor_ruby/puppet/type.rb:816:in `set_default') Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: /Stage[main]/Neutron::Keystone::Auth/Keystone_service[neutron]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_service[ceilometer]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ tenant-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: /Stage[main]/Nova::Keystone::Auth/Keystone_service[nova_ec2]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: /Stage[main]/Cinder::Keystone::Auth/Keystone_service[cinderv2]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[SwiftOperator]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object has no attribute '__getitem__' Error: Could not prefetch keystone_endpoint provider 'keystone': Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ endpoint-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_endpoint[RegionOne/ceilometer]: Dependency Keystone_service[ceilometer] has failures: true Warning: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_endpoint[RegionOne/ceilometer]: Skipping because of failed dependencies Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[services]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ tenant-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_user[neutron]: Dependency Keystone_tenant[services] has failures: true Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone_user[neutron]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_user[nova]: Dependency Keystone_tenant[services] has failures: true Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_user[nova]: Skipping because of failed dependencies Notice: 
/Stage[main]/Glance::Keystone::Auth/Keystone_user[glance]: Dependency Keystone_tenant[services] has failures: true Warning: /Stage[main]/Glance::Keystone::Auth/Keystone_user[glance]: Skipping because of failed dependencies Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Glance::Keystone::Auth/Keystone_user_role[glance at services]: Dependency Keystone_tenant[services] has failures: true Notice: /Stage[main]/Glance::Keystone::Auth/Keystone_user_role[glance at services]: Dependency Keystone_role[admin] has failures: true Warning: /Stage[main]/Glance::Keystone::Auth/Keystone_user_role[glance at services]: Skipping because of failed dependencies Error: /Stage[main]/Keystone::Endpoint/Keystone_service[keystone]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Keystone::Endpoint/Keystone_endpoint[RegionOne/keystone]: Dependency Keystone_service[keystone] has failures: true Warning: /Stage[main]/Keystone::Endpoint/Keystone_endpoint[RegionOne/keystone]: Skipping because of failed dependencies Error: /Stage[main]/Main/Keystone_service[cinder_v2]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Main/Keystone_endpoint[RegionOne/cinder_v2]: Dependency Keystone_service[cinder_v2] has failures: true Warning: /Stage[main]/Main/Keystone_endpoint[RegionOne/cinder_v2]: Skipping because of failed dependencies Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user[ceilometer]: Dependency Keystone_tenant[services] has failures: true Warning: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user[ceilometer]: Skipping because of failed dependencies Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer at services]: Dependency Keystone_role[ResellerAdmin] has failures: true Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer at services]: Dependency Keystone_tenant[services] has failures: true Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer at services]: Dependency Keystone_role[admin] has failures: true Warning: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_user_role[ceilometer at services]: Skipping because of failed dependencies Error: /Stage[main]/Cinder::Keystone::Auth/Keystone_service[cinder]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[RegionOne/cinder]: Dependency Keystone_service[cinder] has failures: true Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[RegionOne/cinder]: Skipping because of failed dependencies Error: /Stage[main]/Glance::Keystone::Auth/Keystone_service[glance]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Glance::Keystone::Auth/Keystone_endpoint[RegionOne/glance]: Dependency Keystone_service[glance] has failures: true Warning: 
/Stage[main]/Glance::Keystone::Auth/Keystone_endpoint[RegionOne/glance]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_user_role[nova at services]: Dependency Keystone_tenant[services] has failures: true Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_user_role[nova at services]: Dependency Keystone_role[admin] has failures: true Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_user_role[nova at services]: Skipping because of failed dependencies Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Dependency Keystone_tenant[admin] has failures: true Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Skipping because of failed dependencies Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin at admin]: Dependency Keystone_tenant[admin] has failures: true Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin at admin]: Dependency Keystone_role[admin] has failures: true Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin at admin]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user[cinder]: Dependency Keystone_tenant[services] has failures: true Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user[cinder]: Skipping because of failed dependencies Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[RegionOne/nova_ec2]: Dependency Keystone_service[nova_ec2] has failures: true Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[RegionOne/nova_ec2]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[RegionOne/cinderv2]: Dependency Keystone_service[cinderv2] has failures: true Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_endpoint[RegionOne/cinderv2]: Skipping because of failed dependencies Error: /Stage[main]/Swift::Keystone::Auth/Keystone_service[swift_s3]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[RegionOne/swift_s3]: Dependency Keystone_service[swift_s3] has failures: true Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[RegionOne/swift_s3]: Skipping because of failed dependencies Error: /Stage[main]/Nova::Keystone::Auth/Keystone_service[nova]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[RegionOne/nova]: Dependency Keystone_service[nova] has failures: true Warning: /Stage[main]/Nova::Keystone::Auth/Keystone_endpoint[RegionOne/nova]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_user_role[neutron at services]: Dependency Keystone_tenant[services] has failures: true Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_user_role[neutron at services]: Dependency Keystone_role[admin] has failures: true Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone_user_role[neutron at services]: Skipping because of failed dependencies Error: /Stage[main]/Swift::Keystone::Auth/Keystone_service[swift]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ service-list' returned 1: 'NoneType' object has no attribute '__getitem__' Notice: 
/Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[RegionOne/swift]: Dependency Keystone_service[swift] has failures: true Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_endpoint[RegionOne/swift]: Skipping because of failed dependencies Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone_endpoint[RegionOne/neutron]: Dependency Keystone_service[neutron] has failures: true Warning: /Stage[main]/Neutron::Keystone::Auth/Keystone_endpoint[RegionOne/neutron]: Skipping because of failed dependencies Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_user[swift]: Dependency Keystone_tenant[services] has failures: true Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_user[swift]: Skipping because of failed dependencies Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_user_role[swift at services]: Dependency Keystone_tenant[services] has failures: true Notice: /Stage[main]/Swift::Keystone::Auth/Keystone_user_role[swift at services]: Dependency Keystone_role[admin] has failures: true Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_user_role[swift at services]: Skipping because of failed dependencies Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder at services]: Dependency Keystone_tenant[services] has failures: true Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder at services]: Dependency Keystone_role[admin] has failures: true Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder at services]: Skipping because of failed dependencies Notice: Finished catalog run in 5.71 seconds -------------- next part -------------- 2014-07-21 10:27:52::INFO::shell::81::root:: [localhost] Executing script: rm -rf /var/tmp/packstack/20140721-102751-Mviqzb/manifests/*pp 2014-07-21 10:27:52::INFO::shell::81::root:: [localhost] Executing script: mkdir -p ~/.ssh chmod 500 ~/.ssh grep 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCW4sAFGCnWIJhxzsEFNUN90X55acOPLvVK/ljKitu53QDUUycZD09shYi92NEv/AudIRydoH65wkR1XvHHi7g+pxv46jrZvql8G+5NAtfGXyonQg87ohT48TCFzNfxMw+9qhticKpGhgfIuSCfWoJDX0yu3QJ7rnq5cYg/Pl7VCQ3J35lHLNqY6n3eeTqPHeVzyit6QV1Q/xofJJqIh86ndRZRsrDYXsWi81wLF+Dfg+0n32kLeO6xo1PZLv5MwQq0I6/VNMOfWcgmjZoj1xf6pCo9M2IzBIGFix0daBVng5+Q6YwoBHoYrhHnX9YVMuYUPC2VXkr0ke6UFIvqpm4D root at grid19.racf.bnl.gov' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCW4sAFGCnWIJhxzsEFNUN90X55acOPLvVK/ljKitu53QDUUycZD09shYi92NEv/AudIRydoH65wkR1XvHHi7g+pxv46jrZvql8G+5NAtfGXyonQg87ohT48TCFzNfxMw+9qhticKpGhgfIuSCfWoJDX0yu3QJ7rnq5cYg/Pl7VCQ3J35lHLNqY6n3eeTqPHeVzyit6QV1Q/xofJJqIh86ndRZRsrDYXsWi81wLF+Dfg+0n32kLeO6xo1PZLv5MwQq0I6/VNMOfWcgmjZoj1xf6pCo9M2IzBIGFix0daBVng5+Q6YwoBHoYrhHnX9YVMuYUPC2VXkr0ke6UFIvqpm4D root at grid19.racf.bnl.gov >> ~/.ssh/authorized_keys chmod 400 ~/.ssh/authorized_keys restorecon -r ~/.ssh 2014-07-21 10:27:52::INFO::shell::81::root:: [130.199.185.76] Executing script: cat /etc/redhat-release 2014-07-21 10:27:52::INFO::shell::81::root:: [130.199.185.76] Executing script: mkdir -p /var/tmp/packstack mkdir --mode 0700 /var/tmp/packstack/bd0e45a11bea462ea26756c391054733 mkdir --mode 0700 /var/tmp/packstack/bd0e45a11bea462ea26756c391054733/modules mkdir --mode 0700 /var/tmp/packstack/bd0e45a11bea462ea26756c391054733/resources 2014-07-21 10:27:52::INFO::shell::81::root:: [130.199.185.76] Executing script: rpm -q --whatprovides yum-utils || yum install -y yum-utils 2014-07-21 10:27:52::INFO::shell::81::root:: [130.199.185.76] Executing script: REPOFILE=$(mktemp) cat /etc/yum.conf > $REPOFILE echo -e '[packstack-epel] 
name=packstack-epel enabled=1 mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch' >> $REPOFILE ( rpm -q --whatprovides epel-release || yum install -y --nogpg -c $REPOFILE epel-release ) || true rm -rf $REPOFILE 2014-07-21 10:27:52::INFO::shell::81::root:: [130.199.185.76] Executing script: yum-config-manager --enable epel 2014-07-21 10:27:53::INFO::shell::35::root:: Executing command: rpm -q rdo-release --qf='%{version}-%{release}.%{arch} ' 2014-07-21 10:27:53::INFO::shell::81::root:: [130.199.185.76] Executing script: (rpm -q 'rdo-release-icehouse' || yum install -y --nogpg http://rdo.fedorapeople.org/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm) || true 2014-07-21 10:27:53::INFO::shell::81::root:: [130.199.185.76] Executing script: yum-config-manager --enable openstack-icehouse 2014-07-21 10:27:53::INFO::shell::81::root:: [130.199.185.76] Executing script: yum install -y yum-plugin-priorities || true rpm -q epel-release && yum-config-manager --setopt="rhel-server-ost-6-4-rpms.priority=1" --save rhel-server-ost-6-4-rpms yum clean metadata 2014-07-21 10:27:57::INFO::shell::81::root:: [130.199.185.76] Executing script: vgdisplay cinder-volumes 2014-07-21 10:27:57::INFO::shell::81::root:: [130.199.185.76] Executing script: sed -i -r "s/^ *snapshot_autoextend_threshold +=.*/ snapshot_autoextend_threshold = 80/" /etc/lvm/lvm.conf sed -i -r "s/^ *snapshot_autoextend_percent +=.*/ snapshot_autoextend_percent = 20/" /etc/lvm/lvm.conf 2014-07-21 10:27:57::INFO::shell::81::root:: [localhost] Executing script: ssh-keygen -t rsa -b 2048 -f "/var/tmp/packstack/20140721-102751-Mviqzb/nova_migration_key" -N "" 2014-07-21 10:27:57::INFO::shell::81::root:: [localhost] Executing script: ssh-keyscan 130.199.185.76 2014-07-21 10:27:57::INFO::shell::81::root:: [130.199.185.76] Executing script: echo $HOME 2014-07-21 10:27:57::INFO::shell::81::root:: [localhost] Executing script: rpm -q --requires openstack-puppet-modules | egrep -v "^(rpmlib|\/|perl)" 2014-07-21 10:27:57::INFO::shell::81::root:: [130.199.185.76] Executing script: rpm -q --whatprovides puppet || yum install -y puppet rpm -q --whatprovides openssh-clients || yum install -y openssh-clients rpm -q --whatprovides tar || yum install -y tar rpm -q --whatprovides nc || yum install -y nc rpm -q --whatprovides rubygem-json || yum install -y rubygem-json 2014-07-21 10:27:57::INFO::shell::81::root:: [localhost] Executing script: cd /usr/lib/python2.7/site-packages/packstack/puppet cd /var/tmp/packstack/20140721-102751-Mviqzb/manifests tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root at 130.199.185.76 tar -C /var/tmp/packstack/bd0e45a11bea462ea26756c391054733 -xpzf - cd /usr/share/openstack-puppet/modules tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack packstack qpid rabbitmq rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root at 130.199.185.76 tar -C /var/tmp/packstack/bd0e45a11bea462ea26756c391054733/modules -xpzf - 2014-07-21 10:28:36::ERROR::run_setup::920::root:: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 915, in main _main(confFile) File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 605, in _main runSequences() File 
"/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 584, in runSequences controller.runAllSequences() File "/usr/lib/python2.7/site-packages/packstack/installer/setup_controller.py", line 68, in runAllSequences sequence.run(config=self.CONF, messages=self.MESSAGES) File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 98, in run step.run(config=config, messages=messages) File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 44, in run raise SequenceError(str(ex)) SequenceError: Error appeared during Puppet run: 130.199.185.76_keystone.pp Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Execution of '/usr/bin/keystone --os-endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: 'NoneType' object has no attribute '__getitem__' You will find full trace in log /var/tmp/packstack/20140721-102751-Mviqzb/manifests/130.199.185.76_keystone.pp.log 2014-07-21 10:28:36::INFO::shell::81::root:: [130.199.185.76] Executing script: rm -rf /var/tmp/packstack/bd0e45a11bea462ea26756c391054733 From johndecot at gmail.com Tue Jul 22 16:41:47 2014 From: johndecot at gmail.com (john decot) Date: Tue, 22 Jul 2014 22:26:47 +0545 Subject: [Rdo-list] Icehouse MariaDB-Galera -Server problem. In-Reply-To: <53CE38EE.7040000@laimbock.com> References: <53CD2B6E.9040607@mozilla.com> <53CE38EE.7040000@laimbock.com> Message-ID: Hi Patrick, Thanks for pointing.I shall try with 6.5 x86_64 architecture. John. On Tue, Jul 22, 2014 at 3:56 PM, Patrick Laimbock wrote: > Hi John, > > > On 22-07-14 07:32, john decot wrote: > >> Hi, >> the output of uname -a is >> >> Linux virtualbox.localdomain 2.6.32-431.20.3.el6.i686 #1 SMP Thu Jun 19 >> 19:51:30 UTC 2014 i686 i686 i386 GNU/Linux >> > > Under 'Step 0: Prerequisites' at http://openstack.redhat.com/Quickstart > it says 'x86_64 is currently the only supported architecture' which you > don't seem to be using (i686 versus x86_64). Try running packstack on a > x86_64 (virtual) box with CentOS 6.5 x86_64 (just follow the steps again). > > HTH, > Patrick > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Tue Jul 22 20:40:31 2014 From: jslagle at redhat.com (James Slagle) Date: Tue, 22 Jul 2014 16:40:31 -0400 Subject: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack In-Reply-To: References: <53CD07A5.1040206@linux.vnet.ibm.com> <20140721162322.GD10147@teletran-1> Message-ID: <20140722204031.GB22930@teletran-1.redhat.com> On Tue, Jul 22, 2014 at 11:56:28AM +0530, Peeyush Gupta wrote: > | fault | {"message": "No valid host was > found. ", "code": 500, "details": " File > \"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py\", > line 108, in schedule_run_instance | > | | raise exception.NoValidHost > (reason=\"\") > | > 2014-07-21 04:25:41.473 31078 WARNING nova.scheduler.driver > [req-a00295d2-7308-4bb9-a40d-0740fe852bf5 921abf3732ce40d0b1502e9aa13c6c2a > ae8b85d781ad443792f2a3516f38ed88] [instance: > b3b3f6f9-25fc-45db-b14e-f7daae1f3216] Setting instance to ERROR state. Unfortunately these error messages really aren't enough to debug the issue. 
Can you set debug=True in /etc/nova/nova.conf aynd restart the following services: openstack-nova-api openstack-nova-compute openstack-nova-scheduler openstack-nova-conductor Delete the overcloud with instack-delete-overcloud, and then try a deployment again. If you get another failure, hopefully there will be more useful data in the nova logs. > > Though I would like to point out here that I am seeing 5 VMs in my setup > instead of 4. You should have 5 vm's. > > [stack at devstack ~]$ virsh list --all > Id Name State > ---------------------------------------------------- > 2 instack running > 9 baremetal_1 running > 10 baremetal_2 running > 11 baremetal_3 running > - baremetal_0 shut off > > Regards, > Peeyush Gupta > > > > From: James Slagle > To: Pradeep Kumar Surisetty > Cc: rdo-list at redhat.com, deepthi at linux.vnet.ibm.com, Peeyush > Gupta/India/IBM at IBMIN, Pradeep K Surisetty/India/IBM at IBMIN, > anantyog at linux.vnet.ibm.com > Date: 07/21/2014 09:50 PM > Subject: Re: [Rdo-list] [RDO][Instack] heat is not able to create stack > with instack > > > > On Mon, Jul 21, 2014 at 05:59:25PM +0530, Pradeep Kumar Surisetty wrote: > > Hi All > > > > I have been trying to set instack with RDO. I have successfully > installed > > undercloud and moving on to overcloud. Now, when I run > > "instack-deploy-overcloud", I get the following error: > > > > + OVERCLOUD_YAML_PATH=overcloud.yaml > > + heat stack-create -f overcloud.yaml -P > AdminToken=b003d63242f5db3e1ad4864ae66911e02ba19bcb -P > AdminPassword=7bfe4d4a18280752ad07f259a69a3ed00db2ab44 -P > CinderPassword=df0893b4355f3511a6d67538dd592d02d1bc11d3 -P > GlancePassword=066f65f878157b438a916ccbd44e0b7037ee! > > 118f -P HeatPassword=58fda0e4d6708e0164167b11fe6fca6ab6b35ec6 -P > NeutronPassword=80853ad029feb77bb7c60d035542f21aa5c24177 -P > NovaPassword=331474580be53b78e40c91dfdfc2323578a035e7 -P > NeutronPublicInterface=eth0 -P > SwiftPassword=b0eca57b45ebf3dd5cae071dc3880888fb1d4840 -P > SwiftHashSuffix=a8d87f3952d6f91da589fbef801bb92141fd1461 -P > NovaComputeLibvirtType=qemu -P 'GlanceLogFile='\'''\''' -P > NeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud > > > +--------------------------------------+------------+--------------------+----------------------+ > > > | id | stack_name | stack_status > | creation_time | > > > +--------------------------------------+------------+--------------------+----------------------+ > > > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_IN_PROGRESS > | 2014-07-18T10:50:48Z | > > > +--------------------------------------+------------+--------------------+----------------------+ > > > + tripleo wait_for_stack_ready 220 10 overcloud > > Command output matched 'CREATE_FAILED'. Exiting... > > > > Now, i understand that the stack isn't being created. So, I tried to > check out the state of the stack: > > > > [stack at localhost ~]$ heat stack-list > > > +--------------------------------------+------------+---------------+----------------------+ > > > | id | stack_name | stack_status | > creation_time | > > > +--------------------------------------+------------+---------------+----------------------+ > > > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_FAILED | > 2014-07-18T10:50:48Z | > > > +--------------------------------------+------------+---------------+----------------------+ > > > > > > > i even tried to create stack manually, but ended up getting the same > > error. 
> > > > Update: Here is the heat log: > > > > 2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource [-] CREATE : > Server "SwiftStorage0" [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack > "overcloud" [0ca028e7-682b-41ef-8af0-b2eb67bee272] > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Traceback (most > recent call last): > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File > "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 420, in > _do_action > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource while not > check(handle_data): > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File > "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line > 545, in check_create_complete > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource return > self._check_active(server) > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File > "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line > 561, in _check_active > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource raise exc > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Error: Creation > of server overcloud-SwiftStorage0-qdjqbif6peva failed. > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource > > 2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back ... > > Hi Pradeep, > > Can you run a "nova show " on the failed instance? And also > provide any tracebacks or errors from the nova compute log > under /var/log/nova? > > -- > -- James Slagle > -- > > > -- -- James Slagle -- From ak at cloudssky.com Wed Jul 23 09:00:20 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Wed, 23 Jul 2014 11:00:20 +0200 Subject: [Rdo-list] Openshift-Origin via Heat Icehouse Fedora20 | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED In-Reply-To: <1891440403.57944.1405708750248.JavaMail.zimbra@vidalinux.net> References: <1403903244.57130.1405589573110.JavaMail.zimbra@vidalinux.net> <788446832.57133.1405589819914.JavaMail.zimbra@vidalinux.net> <20140717110553.GC10151@t430slt.redhat.com> <1007626789.57671.1405675321782.JavaMail.zimbra@vidalinux.net> <1891440403.57944.1405708750248.JavaMail.zimbra@vidalinux.net> Message-ID: > > Hi Antonio, > are you using Icehouse or Havana? Does your OpenShift installation work now? Is there any good guides to install OpenShift Origin on top of OpenStack? (I searched a bit, but could'nt find any). 
Thanks! Arash -------------- next part -------------- An HTML attachment was scrubbed... URL: From peeygupt at in.ibm.com Wed Jul 23 10:09:38 2014 From: peeygupt at in.ibm.com (Peeyush Gupta) Date: Wed, 23 Jul 2014 15:39:38 +0530 Subject: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack In-Reply-To: <20140722204031.GB22930@teletran-1.redhat.com> References: <53CD07A5.1040206@linux.vnet.ibm.com> <20140721162322.GD10147@teletran-1> <20140722204031.GB22930@teletran-1.redhat.com> Message-ID: Hi James, I reinstalled the undercloud and retraced all the steps with debug=True in /etc/nova/nova/conf. Here are the logs: Nova-compute logs: 2014-07-23 05:58:03.357 2915 WARNING nova.virt.baremetal.driver [req-648fe5c2-1152-44e6-83f2-87686e7ce87a bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] Destroy called on non-existing instance dd613a79-a790-434e-a40e-ea787c6d5035 2014-07-23 05:58:03.806 2915 ERROR nova.compute.manager [req-648fe5c2-1152-44e6-83f2-87686e7ce87a bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: dd613a79-a790-434e-a40e-ea787c6d5035] Error: PXE deploy failed for instance dd613a79-a790-434e-a40e-ea787c6d5035 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] Traceback (most recent call last): 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1305, in _build_instance 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] set_access_ip=set_access_ip) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] return function(self, context, *args, **kwargs) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _spawn 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] LOG.exception(_('Instance failed to spawn'), instance=instance) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] six.reraise(self.type_, self.value, self.tb) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] block_device_info) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 245, in spawn 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] block_device_info=block_device_info) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File 
"/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 303, in _spawn 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] _update_state(context, node, None, baremetal_states.DELETED) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] six.reraise(self.type_, self.value, self.tb) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 281, in _spawn 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] self.driver.activate_node (context, node, instance) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/pxe.py", line 500, in activate_node 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] locals['error'] % instance ['uuid']) 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] InstanceDeployFailure: PXE deploy failed for instance dd613a79-a790-434e-a40e-ea787c6d5035 2014-07-23 05:58:03.806 2915 TRACE nova.compute.manager [instance: dd613a79-a790-434e-a40e-ea787c6d5035] 2014-07-23 05:59:27.981 2915 ERROR nova.virt.baremetal.driver [req-bc984c41-18c0-4d4c-8dd8-2dc6e3b7044b bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] Error deploying instance ba65d12e-b664-4ad9-93fa-992a2886ab01 on baremetal node 68bf3440-7eb6-4b95-b439-74dd8edbe1bf. 
2014-07-23 05:59:31.794 2915 ERROR nova.compute.manager [req-bc984c41-18c0-4d4c-8dd8-2dc6e3b7044b bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] Instance failed to spawn 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] Traceback (most recent call last): 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] block_device_info) 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 245, in spawn 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] block_device_info=block_device_info) 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 303, in _spawn 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] _update_state(context, node, None, baremetal_states.DELETED) 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] six.reraise(self.type_, self.value, self.tb) 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 281, in _spawn 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] self.driver.activate_node (context, node, instance) 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/pxe.py", line 500, in activate_node 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] locals['error'] % instance ['uuid']) 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] InstanceDeployFailure: PXE deploy failed for instance ba65d12e-b664-4ad9-93fa-992a2886ab01 2014-07-23 05:59:31.794 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] 2014-07-23 05:59:31.870 2915 WARNING nova.virt.baremetal.driver [req-bc984c41-18c0-4d4c-8dd8-2dc6e3b7044b bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] Destroy called on non-existing instance ba65d12e-b664-4ad9-93fa-992a2886ab01 2014-07-23 05:59:32.239 2915 ERROR nova.compute.manager [req-bc984c41-18c0-4d4c-8dd8-2dc6e3b7044b bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] Error: PXE deploy failed for instance ba65d12e-b664-4ad9-93fa-992a2886ab01 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] Traceback (most recent call last): 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1305, in _build_instance 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] set_access_ip=set_access_ip) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] return function(self, context, *args, **kwargs) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _spawn 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] LOG.exception(_('Instance failed to spawn'), instance=instance) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] six.reraise(self.type_, self.value, self.tb) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] block_device_info) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 245, in spawn 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] block_device_info=block_device_info) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 303, in _spawn 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] _update_state(context, node, None, baremetal_states.DELETED) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] six.reraise(self.type_, self.value, self.tb) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 281, in _spawn 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] self.driver.activate_node (context, node, instance) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/pxe.py", line 500, in activate_node 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] locals['error'] % instance ['uuid']) 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] InstanceDeployFailure: PXE deploy failed for instance ba65d12e-b664-4ad9-93fa-992a2886ab01 
2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] 2014-07-23 06:05:43.572 2915 WARNING nova.compute.manager [-] Found 4 in the database and 0 on the hypervisor. Nova-api logs: 2014-07-23 05:32:42.670 32685 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead. 2014-07-23 05:32:43.404 32685 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead. 2014-07-23 05:32:44.431 32685 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead. 2014-07-23 05:32:46.417 32685 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead. 2014-07-23 05:45:12.903 2947 ERROR nova.wsgi [-] Could not bind to 0.0.0.0:8773 2014-07-23 05:45:12.903 2947 CRITICAL nova [-] error: [Errno 98] Address already in use Nova-scheduler logs: 2014-07-23 05:32:45.277 32697 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on 127.0.0.1:5672 2014-07-23 05:45:17.997 2962 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on 127.0.0.1:5672 2014-07-23 05:51:44.675 2962 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on 127.0.0.1:5672 2014-07-23 05:52:06.321 2962 ERROR nova.scheduler.filter_scheduler [req-bc984c41-18c0-4d4c-8dd8-2dc6e3b7044b bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] Error from last host: undercloud (node dde604c0-d7f9-4212-bb26-c9e166b3df7e): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1272, in _build_instance\n with rt.instance_claim(context, instance, limits):\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 122, in instance_claim\n overhead=overhead, limits=limits)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/claims.py", line 95, in __init__\n self._claim_test(resources, limits)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/claims.py", line 148, in _claim_test\n "; ".join(reasons))\n', u'ComputeResourcesUnavailable: Insufficient compute resources: Free memory 0.00 MB < requested 2048 MB.\n'] 2014-07-23 05:55:53.648 2962 ERROR nova.scheduler.filter_scheduler [req-af4756b6-9630-4790-b782-629f3033a055 bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: 923d2903-d202-487f-82a9-cd7288e0cfe0] Error from last host: undercloud (node 72b92fba-fd38-40f6-8b80-7216ed8dc72c): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1305, in _build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function\n return function(self, context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn\n block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 245, in spawn\n block_device_info=block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 303, in _spawn\n _update_state(context, node, None, baremetal_states.DELETED)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 281, in _spawn\n self.driver.activate_node(context, node, instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/pxe.py", line 500, in activate_node\n locals[\'error\'] % instance[\'uuid\'])\n', u'InstanceDeployFailure: PXE deploy failed for instance 923d2903-d202-487f-82a9-cd7288e0cfe0\n'] 2014-07-23 05:55:53.673 2962 WARNING nova.scheduler.driver [req-af4756b6-9630-4790-b782-629f3033a055 bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: 923d2903-d202-487f-82a9-cd7288e0cfe0] Setting instance to ERROR state. 2014-07-23 05:57:01.605 2962 ERROR nova.scheduler.filter_scheduler [req-3122d59f-d1d2-4b8d-a1d6-59db30d59acd bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: 00626822-43b1-46c6-b896-f1b69a738ac7] Error from last host: undercloud (node 4cb58619-b383-4f97-be81-7dd0bfa51f8e): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1305, in _build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function\n return function(self, context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn\n block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 245, in spawn\n block_device_info=block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 303, in _spawn\n _update_state(context, node, None, baremetal_states.DELETED)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 281, in _spawn\n self.driver.activate_node(context, node, instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/pxe.py", line 500, in activate_node\n locals[\'error\'] % instance[\'uuid\'])\n', u'InstanceDeployFailure: PXE deploy failed for instance 00626822-43b1-46c6-b896-f1b69a738ac7\n'] 2014-07-23 05:57:01.638 2962 WARNING nova.scheduler.driver [req-3122d59f-d1d2-4b8d-a1d6-59db30d59acd bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: 00626822-43b1-46c6-b896-f1b69a738ac7] Setting instance to ERROR state. 
2014-07-23 05:58:03.819 2962 ERROR nova.scheduler.filter_scheduler [req-648fe5c2-1152-44e6-83f2-87686e7ce87a bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: dd613a79-a790-434e-a40e-ea787c6d5035] Error from last host: undercloud (node dde604c0-d7f9-4212-bb26-c9e166b3df7e): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1305, in _build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function\n return function(self, context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn\n block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 245, in spawn\n block_device_info=block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 303, in _spawn\n _update_state(context, node, None, baremetal_states.DELETED)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 281, in _spawn\n self.driver.activate_node(context, node, instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/pxe.py", line 500, in activate_node\n locals[\'error\'] % instance[\'uuid\'])\n', u'InstanceDeployFailure: PXE deploy failed for instance dd613a79-a790-434e-a40e-ea787c6d5035\n'] 2014-07-23 05:58:03.829 2962 WARNING nova.scheduler.driver [req-648fe5c2-1152-44e6-83f2-87686e7ce87a bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: dd613a79-a790-434e-a40e-ea787c6d5035] Setting instance to ERROR state. 
2014-07-23 05:59:32.248 2962 ERROR nova.scheduler.filter_scheduler [req-bc984c41-18c0-4d4c-8dd8-2dc6e3b7044b bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] Error from last host: undercloud (node 68bf3440-7eb6-4b95-b439-74dd8edbe1bf): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1305, in _build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function\n return function(self, context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn\n block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 245, in spawn\n block_device_info=block_device_info)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 303, in _spawn\n _update_state(context, node, None, baremetal_states.DELETED)\n', u' File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 281, in _spawn\n self.driver.activate_node(context, node, instance)\n', u' File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/pxe.py", line 500, in activate_node\n locals[\'error\'] % instance[\'uuid\'])\n', u'InstanceDeployFailure: PXE deploy failed for instance ba65d12e-b664-4ad9-93fa-992a2886ab01\n'] 2014-07-23 05:59:32.267 2962 WARNING nova.scheduler.driver [req-bc984c41-18c0-4d4c-8dd8-2dc6e3b7044b bd2463eed41a47dbaad173888e8e4e81 5db26e088245493b9d07dcd52be97734] [instance: ba65d12e-b664-4ad9-93fa-992a2886ab01] Setting instance to ERROR state. I am not getting the "No valid host" error anymore. It's now PXE deploy failure. Regards, Peeyush Gupta From: James Slagle To: Peeyush Gupta/India/IBM at IBMIN Cc: anantyog at linux.vnet.ibm.com, deepthi at linux.vnet.ibm.com, Pradeep K Surisetty/India/IBM at IBMIN, Pradeep Kumar Surisetty , rdo-list at redhat.com Date: 07/23/2014 02:08 AM Subject: Re: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack On Tue, Jul 22, 2014 at 11:56:28AM +0530, Peeyush Gupta wrote: > | fault | {"message": "No valid host was > found. ", "code": 500, "details": " File > \"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py\", > line 108, in schedule_run_instance | > | | raise exception.NoValidHost > (reason=\"\") > | > 2014-07-21 04:25:41.473 31078 WARNING nova.scheduler.driver > [req-a00295d2-7308-4bb9-a40d-0740fe852bf5 921abf3732ce40d0b1502e9aa13c6c2a > ae8b85d781ad443792f2a3516f38ed88] [instance: > b3b3f6f9-25fc-45db-b14e-f7daae1f3216] Setting instance to ERROR state. Unfortunately these error messages really aren't enough to debug the issue. Can you set debug=True in /etc/nova/nova.conf aynd restart the following services: openstack-nova-api openstack-nova-compute openstack-nova-scheduler openstack-nova-conductor Delete the overcloud with instack-delete-overcloud, and then try a deployment again. 
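(For reference, a minimal sketch of those steps, assuming openstack-config from openstack-utils is available and the standard RDO nova service names:)

  # Turn on debug logging for nova
  openstack-config --set /etc/nova/nova.conf DEFAULT debug True

  # Restart the nova services so the setting takes effect
  for svc in openstack-nova-api openstack-nova-compute \
             openstack-nova-scheduler openstack-nova-conductor; do
      service "$svc" restart
  done

  # Tear down the failed overcloud and retry the deployment
  instack-delete-overcloud
  instack-deploy-overcloud
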
If you get another failure, hopefully there will be more useful data in the nova logs. > > Though I would like to point out here that I am seeing 5 VMs in my setup > instead of 4. You should have 5 vm's. > > [stack at devstack ~]$ virsh list --all > Id Name State > ---------------------------------------------------- > 2 instack running > 9 baremetal_1 running > 10 baremetal_2 running > 11 baremetal_3 running > - baremetal_0 shut off > > Regards, > Peeyush Gupta > > > > From: James Slagle > To: Pradeep Kumar Surisetty > Cc: rdo-list at redhat.com, deepthi at linux.vnet.ibm.com, Peeyush > Gupta/India/IBM at IBMIN, Pradeep K Surisetty/India/IBM at IBMIN, > anantyog at linux.vnet.ibm.com > Date: 07/21/2014 09:50 PM > Subject: Re: [Rdo-list] [RDO][Instack] heat is not able to create stack > with instack > > > > On Mon, Jul 21, 2014 at 05:59:25PM +0530, Pradeep Kumar Surisetty wrote: > > Hi All > > > > I have been trying to set instack with RDO. I have successfully > installed > > undercloud and moving on to overcloud. Now, when I run > > "instack-deploy-overcloud", I get the following error: > > > > + OVERCLOUD_YAML_PATH=overcloud.yaml > > + heat stack-create -f overcloud.yaml -P > AdminToken=b003d63242f5db3e1ad4864ae66911e02ba19bcb -P > AdminPassword=7bfe4d4a18280752ad07f259a69a3ed00db2ab44 -P > CinderPassword=df0893b4355f3511a6d67538dd592d02d1bc11d3 -P > GlancePassword=066f65f878157b438a916ccbd44e0b7037ee! > > 118f -P HeatPassword=58fda0e4d6708e0164167b11fe6fca6ab6b35ec6 -P > NeutronPassword=80853ad029feb77bb7c60d035542f21aa5c24177 -P > NovaPassword=331474580be53b78e40c91dfdfc2323578a035e7 -P > NeutronPublicInterface=eth0 -P > SwiftPassword=b0eca57b45ebf3dd5cae071dc3880888fb1d4840 -P > SwiftHashSuffix=a8d87f3952d6f91da589fbef801bb92141fd1461 -P > NovaComputeLibvirtType=qemu -P 'GlanceLogFile='\'''\''' -P > NeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud > > > +--------------------------------------+------------+--------------------+----------------------+ > > > | id | stack_name | stack_status > | creation_time | > > > +--------------------------------------+------------+--------------------+----------------------+ > > > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_IN_PROGRESS > | 2014-07-18T10:50:48Z | > > > +--------------------------------------+------------+--------------------+----------------------+ > > > + tripleo wait_for_stack_ready 220 10 overcloud > > Command output matched 'CREATE_FAILED'. Exiting... > > > > Now, i understand that the stack isn't being created. So, I tried to > check out the state of the stack: > > > > [stack at localhost ~]$ heat stack-list > > > +--------------------------------------+------------+---------------+----------------------+ > > > | id | stack_name | stack_status | > creation_time | > > > +--------------------------------------+------------+---------------+----------------------+ > > > | 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud | CREATE_FAILED | > 2014-07-18T10:50:48Z | > > > +--------------------------------------+------------+---------------+----------------------+ > > > > > > > i even tried to create stack manually, but ended up getting the same > > error. 
> > > > Update: Here is the heat log: > > > > 2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource [-] CREATE : > Server "SwiftStorage0" [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack > "overcloud" [0ca028e7-682b-41ef-8af0-b2eb67bee272] > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Traceback (most > recent call last): > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File > "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 420, in > _do_action > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource while not > check(handle_data): > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File > "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line > 545, in check_create_complete > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource return > self._check_active(server) > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File > "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line > 561, in _check_active > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource raise exc > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Error: Creation > of server overcloud-SwiftStorage0-qdjqbif6peva failed. > > 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource > > 2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back to using default > > 2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient [-] > stack_user_domain ID not set in heat.conf falling back ... > > Hi Pradeep, > > Can you run a "nova show " on the failed instance? And also > provide any tracebacks or errors from the nova compute log > under /var/log/nova? > > -- > -- James Slagle > -- > > > -- -- James Slagle -- From lakmal at wso2.com Thu Jul 24 05:36:52 2014 From: lakmal at wso2.com (Lakmal Warusawithana) Date: Thu, 24 Jul 2014 11:06:52 +0530 Subject: [Rdo-list] RDO Icehouse with CentOS 6.5 Openvswitch Issue Message-ID: Hi, I have tried out RDO with CentOS 6.5 following [1]. I am getting an openvswitch issue, it keep adding and deleting br-tun. my /var/log/message hitting continues message of following. Is any one com across this issue? And note, I had working setup with multi node (10 days ago), but suddenly with an yum update and re-run packstack lead to this issue. Now even allinone setup also getting this issue. 
Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-br br-tun Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-tun Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous mode Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-br br-tun Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-tun Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous mode Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-br br-tun Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-tun Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous mode Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-br br-tun Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-tun Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous mode Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-br br-tun Jul 24 10:22:22 openstack01 kernel: device br-tun left promiscuous mode Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-tun Jul 24 10:22:22 openstack01 kernel: device br-tun entered promiscuous mode [1]http://openstack.redhat.com/Neutron_with_existing_external_network -- Lakmal Warusawithana -------------- next part -------------- An HTML attachment was scrubbed... URL: From amuller at redhat.com Thu Jul 24 10:16:12 2014 From: amuller at redhat.com (Assaf Muller) Date: Thu, 24 Jul 2014 06:16:12 -0400 (EDT) Subject: [Rdo-list] RDO Icehouse with CentOS 6.5 Openvswitch Issue In-Reply-To: References: Message-ID: <108178589.18419311.1406196972339.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Hi, > > I have tried out RDO with CentOS 6.5 following [1]. I am getting an > openvswitch issue, it keep adding and deleting br-tun. my /var/log/message > hitting continues message of following. Is any one com across this issue? > Do you by chance have an ifcfg file defined for br-tun? > And note, I had working setup with multi node (10 days ago), but suddenly > with an yum update and re-run packstack lead to this issue. Now even > allinone setup also getting this issue. 
> > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous mode > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous mode > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous mode > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous mode > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > Jul 24 10:22:22 openstack01 kernel: device br-tun left promiscuous mode > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > Jul 24 10:22:22 openstack01 kernel: device br-tun entered promiscuous mode > > > [1] http://openstack.redhat.com/Neutron_with_existing_external_network > > -- > Lakmal Warusawithana > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From lakmal at wso2.com Thu Jul 24 10:46:03 2014 From: lakmal at wso2.com (Lakmal Warusawithana) Date: Thu, 24 Jul 2014 16:16:03 +0530 Subject: [Rdo-list] RDO Icehouse with CentOS 6.5 Openvswitch Issue In-Reply-To: <108178589.18419311.1406196972339.JavaMail.zimbra@redhat.com> References: <108178589.18419311.1406196972339.JavaMail.zimbra@redhat.com> Message-ID: Hi, Below is the content of the br-tun file DEVICE=br-tun DEVICETYPE=ovs TYPE=OVSBridge ONBOOT=yes OVSBOOTPROTO=none On Thu, Jul 24, 2014 at 3:46 PM, Assaf Muller wrote: > > > ----- Original Message ----- > > Hi, > > > > I have tried out RDO with CentOS 6.5 following [1]. I am getting an > > openvswitch issue, it keep adding and deleting br-tun. my > /var/log/message > > hitting continues message of following. Is any one com across this issue? > > > > Do you by chance have an ifcfg file defined for br-tun? > > > And note, I had working setup with multi node (10 days ago), but suddenly > > with an yum update and re-run packstack lead to this issue. Now even > > allinone setup also getting this issue. 
> > > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous > mode > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous > mode > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous > mode > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous > mode > > > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > Jul 24 10:22:22 openstack01 kernel: device br-tun left promiscuous mode > > > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > Jul 24 10:22:22 openstack01 kernel: device br-tun entered promiscuous > mode > > > > > > [1] http://openstack.redhat.com/Neutron_with_existing_external_network > > > > -- > > Lakmal Warusawithana > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- Lakmal Warusawithana -------------- next part -------------- An HTML attachment was scrubbed... URL: From amuller at redhat.com Thu Jul 24 11:11:10 2014 From: amuller at redhat.com (Assaf Muller) Date: Thu, 24 Jul 2014 07:11:10 -0400 (EDT) Subject: [Rdo-list] RDO Icehouse with CentOS 6.5 Openvswitch Issue In-Reply-To: References: <108178589.18419311.1406196972339.JavaMail.zimbra@redhat.com> Message-ID: <262435575.18431543.1406200270896.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Hi, > > Below is the content of the br-tun file > > DEVICE=br-tun > > DEVICETYPE=ovs > > TYPE=OVSBridge > > ONBOOT=yes > > OVSBOOTPROTO=none > Great, just delete the file and you'll be on your way. Can you tell me what deployment tool are you using, or if you know what created the file? > > On Thu, Jul 24, 2014 at 3:46 PM, Assaf Muller wrote: > > > > > > > ----- Original Message ----- > > > Hi, > > > > > > I have tried out RDO with CentOS 6.5 following [1]. I am getting an > > > openvswitch issue, it keep adding and deleting br-tun. my > > /var/log/message > > > hitting continues message of following. Is any one com across this issue? 
> > > > > > > Do you by chance have an ifcfg file defined for br-tun? > > > > > And note, I had working setup with multi node (10 days ago), but suddenly > > > with an yum update and re-run packstack lead to this issue. Now even > > > allinone setup also getting this issue. > > > > > > > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous > > mode > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous mode > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous > > mode > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous > > mode > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous mode > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous > > mode > > > > > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > Jul 24 10:22:22 openstack01 kernel: device br-tun left promiscuous mode > > > > > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > Jul 24 10:22:22 openstack01 kernel: device br-tun entered promiscuous > > mode > > > > > > > > > [1] http://openstack.redhat.com/Neutron_with_existing_external_network > > > > > > -- > > > Lakmal Warusawithana > > > > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > -- > Lakmal Warusawithana > From lakmal at wso2.com Thu Jul 24 11:26:07 2014 From: lakmal at wso2.com (Lakmal Warusawithana) Date: Thu, 24 Jul 2014 16:56:07 +0530 Subject: [Rdo-list] RDO Icehouse with CentOS 6.5 Openvswitch Issue In-Reply-To: <262435575.18431543.1406200270896.JavaMail.zimbra@redhat.com> References: <108178589.18419311.1406196972339.JavaMail.zimbra@redhat.com> <262435575.18431543.1406200270896.JavaMail.zimbra@redhat.com> Message-ID: Seems like its working :), Will update after fully tested. I just ran packstack. did not create any files. Thanks a lot. 
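For anyone else who hits the same br-tun add/delete loop: the fix really is just removing the stray ifcfg file, since br-tun is meant to be managed by the OVS agent rather than the network init scripts, and the two end up fighting over the bridge. A minimal sketch, assuming the stock RDO Icehouse service names on CentOS 6.5:

rm /etc/sysconfig/network-scripts/ifcfg-br-tun   # the tunnel bridge should not have an ifcfg file
service neutron-openvswitch-agent restart        # let the agent recreate br-tun cleanly
service openstack-nova-compute restart           # on compute nodes, as also suggested for this symptom
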
On Thu, Jul 24, 2014 at 4:41 PM, Assaf Muller wrote: > > > ----- Original Message ----- > > Hi, > > > > Below is the content of the br-tun file > > > > DEVICE=br-tun > > > > DEVICETYPE=ovs > > > > TYPE=OVSBridge > > > > ONBOOT=yes > > > > OVSBOOTPROTO=none > > > > Great, just delete the file and you'll be on your way. > > Can you tell me what deployment tool are you using, or if you know what > created the file? > > > > > On Thu, Jul 24, 2014 at 3:46 PM, Assaf Muller > wrote: > > > > > > > > > > > ----- Original Message ----- > > > > Hi, > > > > > > > > I have tried out RDO with CentOS 6.5 following [1]. I am getting an > > > > openvswitch issue, it keep adding and deleting br-tun. my > > > /var/log/message > > > > hitting continues message of following. Is any one com across this > issue? > > > > > > > > > > Do you by chance have an ifcfg file defined for br-tun? > > > > > > > And note, I had working setup with multi node (10 days ago), but > suddenly > > > > with an yum update and re-run packstack lead to this issue. Now even > > > > allinone setup also getting this issue. > > > > > > > > > > > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous > mode > > > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous > > > mode > > > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun left promiscuous > mode > > > > > > > > Jul 24 10:22:20 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > > > Jul 24 10:22:20 openstack01 kernel: device br-tun entered promiscuous > > > mode > > > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous > mode > > > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous > > > mode > > > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun left promiscuous > mode > > > > > > > > Jul 24 10:22:21 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > > > Jul 24 10:22:21 openstack01 kernel: device br-tun entered promiscuous > > > mode > > > > > > > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --if-exists del-br br-tun > > > > > > > > Jul 24 10:22:22 openstack01 kernel: device br-tun left promiscuous > mode > > > > > > > > Jul 24 10:22:22 openstack01 ovs-vsctl: ovs|00001|vsctl|INFO|Called as > > > > ovs-vsctl -t 10 -- --may-exist add-br br-tun > > > > > > > > Jul 24 10:22:22 openstack01 kernel: device br-tun entered promiscuous > > > mode > > > > > > > > > > > > [1] > http://openstack.redhat.com/Neutron_with_existing_external_network > > > > > > > > -- > > > 
> Lakmal Warusawithana > > > > > > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > -- > > Lakmal Warusawithana > > > -- Lakmal Warusawithana Vice President, Apache Stratos Director - Cloud Architecture; WSO2 Inc. Mobile : +94714289692 Blog : http://lakmalsview.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From juber at mozilla.com Thu Jul 24 17:11:47 2014 From: juber at mozilla.com (uberj) Date: Thu, 24 Jul 2014 10:11:47 -0700 Subject: [Rdo-list] Issues with rdo-release-4 and external networking / br-tun In-Reply-To: <53C95BDF.5060608@mozilla.com> References: <53C95BDF.5060608@mozilla.com> Message-ID: <53D13E53.5050506@mozilla.com> Found a fix for this: rm /etc/sysconfig/network-scripts/ifcfg-br-tun (restart neutron networking services and nova-compute) Apparently that file shound't exist. The flapping may have been due to openvswitch and the underlying system fighting for control of the interface. On 07/18/2014 10:39 AM, uberj wrote: > Hello, > > I'm attempting to get rdo working on Centos6.5 with external > networking. I am following the steps outlined on > http://openstack.redhat.com/Neutron_with_existing_external_network > > To install openstack, I'm running the following commands: > > sudo yum -y update > sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm > sudo yum install -y openstack-packstack > packstack --allinone --provision-all-in-one-ovs-bridge=n > --os-client-install=y --os-heat-install=y > > > After it completes I do 'tail -f /var/log/neutron/*.log' and then > 'service neutron-openvswitch-agent restart'. In the neutron log I see > the following errors: > > 2014-07-18 16:06:03.255 28794 ERROR neutron.agent.linux.ovs_lib > [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute > ['ovs-ofctl', 'del-flows', 'br-tun']. Exception: > Command: ['sudo', 'neutron-rootwrap', > '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'del-flows', 'br-tun'] > Exit code: 1 > Stdout: '' > Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' > 2014-07-18 16:06:03.325 28794 ERROR neutron.agent.linux.ovs_lib > [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute > ['ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=resubmit(,1)']. > Exception: > Command: ['sudo', 'neutron-rootwrap', > '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=1,in_port=1,actions=resubmit(,1)'] > Exit code: 1 > Stdout: '' > Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' > 2014-07-18 16:06:03.380 28794 ERROR neutron.agent.linux.ovs_lib > [req-2fc4abd3-483a-4476-b14d-78c09b04368c None] Unable to execute > ['ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=0,actions=drop']. Exception: > Command: ['sudo', 'neutron-rootwrap', > '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'add-flow', 'br-tun', > 'hard_timeout=0,idle_timeout=0,priority=0,actions=drop'] > Exit code: 1 > Stdout: '' > Stderr: 'ovs-ofctl: br-tun is not a bridge or a socket\n' > > > When I go to look for br-tun I do 'ovs-vsctl show' and I'll > *sometimes* see: > > ... > Bridge br-tun > Port br-tun > Interface br-tun > type: internal > ... > > Now, this is kind of weird, but if I do "watch -n 0.5 ovs-vsctl show" > I don't always see br-tun! 
In fact the br-tun seems to jump around a > lot (sometimes its there, sometimes its listed above br-ex, sometimes > below.) > > More info: > > [root at localhost ~]# ovsdb-server --version > ovsdb-server (Open vSwitch) 1.11.0 > Compiled Jul 30 2013 18:14:53 > [root at localhost ~]# ovs-vswitchd --version > ovs-vswitchd (Open vSwitch) 1.11.0 > Compiled Jul 30 2013 18:14:54 > OpenFlow versions 0x1:0x1 > > > Any help would be appreciated. > -- > (uberj) Jacques Uber > Mozilla IT -- (uberj) Jacques Uber Mozilla IT -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Fri Jul 25 17:15:49 2014 From: jslagle at redhat.com (James Slagle) Date: Fri, 25 Jul 2014 13:15:49 -0400 Subject: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack In-Reply-To: References: <53CD07A5.1040206@linux.vnet.ibm.com> <20140721162322.GD10147@teletran-1> <20140722204031.GB22930@teletran-1.redhat.com> Message-ID: <20140725171549.GC22930@teletran-1.redhat.com> On Wed, Jul 23, 2014 at 03:39:38PM +0530, Peeyush Gupta wrote: > Hi James, > > I reinstalled the undercloud and retraced all the steps with debug=True > in /etc/nova/nova/conf. > Here are the logs: > > Nova-compute logs: > > 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: > ba65d12e-b664-4ad9-93fa-992a2886ab01] InstanceDeployFailure: PXE deploy > failed for instance ba65d12e-b664-4ad9-93fa-992a2886ab01 > 2014-07-23 05:59:32.239 2915 TRACE nova.compute.manager [instance: > ba65d12e-b664-4ad9-93fa-992a2886ab01] > 2014-07-23 06:05:43.572 2915 WARNING nova.compute.manager [-] Found 4 in > the database and 0 on the hypervisor. The WARNING here looks suspicous. Did you use instack-delete-overcloud to delete a previous overcloud? Before trying another deployment, can you run instack-delete-overcloud? That also deletes baremetal nodes, which will be re-created the next time you run instack-deploy-overcloud*. > Nova-api logs: > > 'identity_uri' instead. > 2014-07-23 05:45:12.903 2947 ERROR nova.wsgi [-] Could not bind to > 0.0.0.0:8773 > 2014-07-23 05:45:12.903 2947 CRITICAL nova [-] error: [Errno 98] Address > already in use For this one, I'd suggest stopping the openstack-nova-api service and verifying that there are indeed no nova-api processes running. If there are, kill those off. Then start openstack-nova-api. > Nova-scheduler logs: > _claim_test\n "; ".join(reasons))\n', u'ComputeResourcesUnavailable: > Insufficient compute resources: Free memory 0.00 MB < requested 2048 > MB.\n'] Deleting the previous baremetal nodes by using instack-delete-overcloud should hopefully clear this error up as well. -- -- James Slagle -- From lakmal at wso2.com Sat Jul 26 08:05:57 2014 From: lakmal at wso2.com (Lakmal Warusawithana) Date: Sat, 26 Jul 2014 13:35:57 +0530 Subject: [Rdo-list] Metadata block until restart iptables Message-ID: Hi, Configured multi node RDO setup on centos 6.5 with gre tunneling. Instances are spin up correctly but metadata not received to instances. When restart the iptables on compute node, metadata received only instances that spin up in that compute node. Seems like iptables issue, wonder how to figure it out. Is anyone notice this? highly appreciate your help. thanks -- Lakmal Warusawithana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kchamart at redhat.com Mon Jul 28 08:59:20 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 28 Jul 2014 14:29:20 +0530 Subject: [Rdo-list] Metadata block until restart iptables In-Reply-To: References: Message-ID: <20140728085920.GA20763@tesla.redhat.com> On Sat, Jul 26, 2014 at 01:35:57PM +0530, Lakmal Warusawithana wrote: > Hi, > > Configured multi node RDO setup on centos 6.5 with gre tunneling. Instances > are spin up correctly but metadata not received to instances. When restart > the iptables on compute node, metadata received only instances that spin up > in that compute node. Seems like iptables issue, wonder how to figure it > out. > > Is anyone notice this? highly appreciate your help. I was answering someone on #rdo IRC, and the below info helped them (assuming you're using Neutron): http://kashyapc.fedorapeople.org/virt/openstack/enabling-metadata-service.txt And, my iptables rules w/ IceHouse, scroll down to the bottom: https://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt Hope that helps a bit. -- /kashyap From jonas.hagberg at scilifelab.se Tue Jul 29 19:18:32 2014 From: jonas.hagberg at scilifelab.se (Jonas Hagberg) Date: Tue, 29 Jul 2014 21:18:32 +0200 Subject: [Rdo-list] foreman-installer puppet problems: " Error 400 on SERVER: Must pass admin_password to Class[Quickstack::Nova]" In-Reply-To: References: <20140617151058.GA518@redhat.com> Message-ID: Hej No one have any clue on this bug/issue? What could be the problem? cheers -- Jonas Hagberg BILS - Bioinformatics Infrastructure for Life Sciences - http://bils.se e-mail: jonas.hagberg at bils.se, jonas.hagberg at scilifelab.se phone: +46-(0)70 6683869 address: SciLifeLab, Box 1031, 171 21 Solna, Sweden On 19 June 2014 08:23, Jonas Hagberg wrote: > Hej > > Now I have reported the bug > > https://bugzilla.redhat.com/show_bug.cgi?id=1110661 > > > -- > Jonas Hagberg > BILS - Bioinformatics Infrastructure for Life Sciences - http://bils.se > e-mail: jonas.hagberg at bils.se, jonas.hagberg at scilifelab.se > phone: +46-(0)70 6683869 > address: SciLifeLab, Box 1031, 171 21 Solna, Sweden > > > On 17 June 2014 17:10, Lars Kellogg-Stedman wrote: > >> On Tue, Jun 17, 2014 at 12:38:03PM +0200, Jonas Hagberg wrote: >> > But when assigning a node to a hostgroup (Neutron controller or neutron >> > compute) and running puppet I get the following error. >> > >> > err: Could not retrieve catalog from remote server: Error 400 on SERVER: >> > Must pass admin_password to Class[Quickstack::Nova] at >> > >> /usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/nova.pp:65 >> > on "fqdn" >> > >> > admin_password is set in hostgroup. >> >> If you haven't already you probably want to open a bug report on this >> (https://bugzilla.redhat.com/enter_bug.cgi?product=RDO). >> >> -- >> Lars Kellogg-Stedman | larsks @ irc >> Cloud Engineering / OpenStack | " " @ twitter >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL:
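
On the "Metadata block until restart iptables" question above: before blaming iptables ordering, it is worth confirming each hop of the metadata path. A rough checklist, assuming an Icehouse Neutron setup with the metadata agent on the network/controller node (the option names below are the Icehouse ones and are meant as a starting point, not a definitive recipe):

# from a guest that booted without metadata
curl -s http://169.254.169.254/latest/meta-data/   # or: wget -qO- http://169.254.169.254/latest/meta-data/

# on the network node: agent up, router/dhcp namespaces present
service neutron-metadata-agent status
ip netns | grep -E 'qrouter|qdhcp'

# shared secret and proxy settings must match on both sides
grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini
grep -E 'service_neutron_metadata_proxy|neutron_metadata_proxy_shared_secret' /etc/nova/nova.conf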