From apevec at gmail.com Wed Apr 1 10:20:13 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 1 Apr 2015 12:20:13 +0200
Subject: [Rdo-list] Is RDO Juno (Kilo) supposed to support Xen Hypervisor on Compute Node ?
In-Reply-To:
References: <1427723241.7520.6.camel@redhat.com>
Message-ID:

> So, the question is mostly related with RDO Openstack release on F22 having
> qemu version 2.3.0-rc1.

qemu/libvirt/kernel etc. are provided by the base OS, not RDO. Fedora 22
has qemu 2.3.0 rc1
http://koji.fedoraproject.org/koji/buildinfo?buildID=623500
and for EL we would depend on the CentOS Virt SIG to provide an
appropriate qemu package, if the base EL version is not enough.

> Another concern is requirement to install package "nova-compute-xen" on
> compute node.

Xen parts were not sub-packaged; our focus was KVM, but I'd be happy to
review a proposal to add them to the openstack-nova RPM. Please note that
the required libvirt-daemon-xen is not available on all Fedora arches
https://bugzilla.redhat.com/show_bug.cgi?id=996715#c14
so this needs to be resolved somehow.

Cheers,
Alan
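A rough sketch of the checks Alan describes; the package names here are assumptions based on stock Fedora naming, not an official procedure:

    # The virt stack comes from the base OS; on Fedora 22 this should
    # report the qemu 2.3.0 rc1 build referenced above:
    rpm -q qemu-img

    # libvirt's Xen driver is not built for every Fedora arch (bz#996715),
    # so verify it is installable before pointing nova-compute at Xen:
    dnf info libvirt-daemon-xen || echo "no libvirt Xen driver on this arch"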
From rbowen at redhat.com Wed Apr 1 13:28:27 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Wed, 01 Apr 2015 09:28:27 -0400
Subject: [Rdo-list] [Rdo-newsletter] March 2015 RDO Community Newsletter
Message-ID: <551BF27B.4060600@redhat.com>

Thanks, as always, for being part of the RDO community!

Quick links:
* RDO - http://rdoproject.org/
* Quick Start - http://rdoproject.org/quickstart
* Mailing Lists - http://rdoproject.org/Mailing_lists
* RDO packages - https://repos.fedorapeople.org/repos/openstack/openstack-juno/
* RDO blog - http://rdoproject.org/blog
* Q&A - http://ask.openstack.org/

Happy Birthday RDO!

Just two years ago we started this effort to provide easy-to-install OpenStack for users of RHEL, CentOS, and Fedora. Since then, we've gone from being entirely internally developed at Red Hat to having community participation from around the world. And we've gone from zero to being at the heart of one of the largest OpenStack deployments on the planet, at CERN, where they recently crossed the 100,000 core mark on their deployment - https://twitter.com/noggin143/status/578950445835657216

We're so glad you're along for the ride, and we're really excited about the coming year, as our relationship with the CentOS community continues to strengthen, and we get broader community participation in the project.

OpenStack Summit

OpenStack Summit is a little over a month away, and now that the schedule is published - https://www.openstack.org/summit/vancouver-2015/schedule/ - you can start planning your week. Here are a few sessions that we recommend for RDO enthusiasts:

* OpenStack Horizon deep dive and customization, with Matthias Runge - http://sched.co/2qiP
* Lessons learned on upgrades: the importance of HA and automation, with Emilien Macchi and Frederic Lepied - http://sched.co/2rGf
* How Neutron builds network topology for your multi-tier application, with Sadique Puthen - http://sched.co/2qcl
* The Road to Enterprise-Ready OpenStack Storage as Service, with Flavio Percoco and Sean Cohen - http://sched.co/2qby
* OpenStack Compute 101, with Stephen Gordon - http://sched.co/2qeM
* OpenDaylight and OpenStack, with Dave Neary - http://sched.co/2qc8

But there's so much more excellent content scheduled that it's going to be really hard to decide. So be sure you spend some quality time with the schedule ahead of time, because on-site it's going to be overwhelming.

Register today, if you haven't already, at http://tm3.org/summit-register, and be sure to drop by the RDO booth for your event-exclusive RDO t-shirt, and our newly updated RDO OpenStack cheatsheet bookmarks. See you there!

Packaging Updates

The RDO CentOS packaging effort continues to move along, and we'd love to have you come join us if you're looking for a way to get more involved in RDO. We meet each Wednesday at 15:00 UTC on the #RDO channel on the Freenode IRC network to discuss the RDO packaging effort. And we meet with the larger CentOS Cloud SIG group each Thursday at 15:00 UTC on the #centos-devel channel, to discuss the work that spans multiple open source cloud platforms, including CloudStack, Eucalyptus, and OpenNebula.

Meetups

Every day, there's at least one OpenStack meetup somewhere in the world. Each week I post the upcoming meetings on the rdo-list mailing list and on the RDO website at http://rdoproject.org/Events

If you're speaking at an OpenStack meetup, please let us know so that we can help publicize it. If you attend one, please blog about it, so that the rest of us can benefit a little too. Some upcoming meetups that you might want to check out include:

* Thursday, April 02: OpenStack Seattle Meetup: Getting the Most out of Cinder, Seattle, WA, US - http://www.meetup.com/OpenStack-Seattle/events/219193094/
* Tuesday, April 07: April Sydney Meetup, Melbourne, AU - http://www.meetup.com/Australian-OpenStack-User-Group/events/220202269/
* Thursday, April 09: OpenStack Howto part 1 - Install and Run, Prague, CZ - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/221143227/
* Thursday, April 09: Are you getting the most out of Cinder block storage in OpenStack?, Atlanta, GA, US - http://www.meetup.com/openstack-atlanta/events/219694781/
* Monday, April 13: OpenStack and NFV, San Francisco, CA, US - http://www.meetup.com/San-Francisco-Silicon-Valley-OpenStack-Meetup/events/221142044/

Keep in touch

There are lots of ways to stay in touch with what's going on in the RDO community. The best ways are ...

Social Media:
* Follow us on Twitter - http://twitter.com/rdocommunity
* Google+ - http://tm3.org/rdogplus
* Facebook - http://facebook.com/rdocommunity

Mailing Lists:
* rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list
* This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter

WWW:
* OpenStack Q&A - http://ask.openstack.org/
* RDO - http://rdoproject.org/

IRC:
* IRC - #rdo on irc.freenode.net
* Puppet module development - #rdo-puppet

Finally, remember that the OpenStack User Survey is always open, so every time you deploy a new OpenStack cloud, go update your records at https://www.openstack.org/user-survey/ so that, when Vancouver rolls around, we have a clearer picture of the OpenStack usage out in the wild.

Thanks again for being part of the RDO community!

-- 
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

_______________________________________________
Rdo-newsletter mailing list
Rdo-newsletter at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-newsletter

From whayutin at redhat.com Wed Apr 1 19:22:11 2015
From: whayutin at redhat.com (whayutin)
Date: Wed, 01 Apr 2015 15:22:11 -0400
Subject: [Rdo-list] rdo-kilo delorean-trunk status
Message-ID: <1427916131.3130.44.camel@redhat.com>

Greetings,

RDO-Kilo
Fedora 21: tests passing
Centos-7: Glance is failing to start.

glance-api.log:
2015-04-01 18:19:54.946 15135 ERROR glance.common.config [-] Unable to load glance-api-keystone from configuration file /usr/share/glance/glance-api-dist-paste.ini. Got: ImportError('No module named elasticsearch',)
2015-04-01 18:34:38.863 1146 ERROR glance.common.config [-] Unable to load glance-api-keystone from configuration file /usr/share/glance/glance-api-dist-paste.ini. Got: ImportError('No module named elasticsearch',)

Packstack finished the install, although it probably should not have. The error was caught when the CI uploaded a cirros image to glance and the upload failed.
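A quick way to confirm the missing module and unblock testing; the RPM name python-elasticsearch is an assumption, and the real fix is a proper dependency (or a trimmed paste pipeline) in the glance packaging:

    # reproduce the import failure glance-api hits in its paste pipeline
    python -c 'import elasticsearch'

    # temporary workaround on the Centos-7 node
    # (package name is an assumption; alternatively: pip install elasticsearch)
    yum install -y python-elasticsearch
    systemctl restart openstack-glance-api

    # with admin credentials sourced, re-run the step that caught it in CI
    glance image-list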
From rbowen at rcbowen.com Wed Apr 1 19:27:36 2015
From: rbowen at rcbowen.com (Rich Bowen)
Date: Wed, 01 Apr 2015 15:27:36 -0400
Subject: [Rdo-list] Reminder: OpenStack Israel Call for Papers
Message-ID: <551C46A8.9040503@rcbowen.com>

Reminder: The OpenStack Israel Call for Papers is open for just two more weeks: http://www.openstack-israel.org/#!copy-of-call-for-papers/cu3y

--Rich

-- 
Rich Bowen - rbowen at rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon

From hguemar at fedoraproject.org Mon Apr 6 15:00:02 2015
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 6 Apr 2015 15:00:02 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting
Message-ID: <20150406150002.E216E60A957F@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
  RDO packaging meeting on 2015-04-08 from 15:00:00 to 16:00:00 UTC
  At rdo at irc.freenode.net

The meeting will be about:
RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging))

Every week on #rdo on freenode

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From lars at redhat.com Mon Apr 6 15:57:21 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Mon, 6 Apr 2015 11:57:21 -0400
Subject: [Rdo-list] RDO bug statistics for 2015-04-06
Message-ID: <20150406155721.GB3217@redhat.com>

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . You can find an HTML version of this report online at: . To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 141
- Fixed (MODIFIED, POST, ON_QA): 146

## Number of open bugs by component

diskimage-builder          [ 1] +
distribution               [ 12] ++++++++++++++
dnsmasq                    [ 2] ++
instack                    [ 2] ++
instack-undercloud         [ 5] ++++++
iproute                    [ 1] +
openstack-ceilometer       [ 1] +
openstack-cinder           [ 12] ++++++++++++++
openstack-foreman-inst...  [ 3] +++
openstack-glance           [ 1] +
openstack-horizon          [ 1] +
openstack-keystone         [ 2] ++
openstack-neutron          [ 8] +++++++++
openstack-nova             [ 14] ++++++++++++++++
openstack-packstack        [ 33] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules   [ 6] +++++++
openstack-selinux          [ 8] +++++++++
openstack-swift            [ 3] +++
openstack-tripleo          [ 10] ++++++++++++
openstack-tripleo-heat...  [ 1] +
openstack-tripleo-imag...  [ 2] ++
openstack-utils            [ 2] ++
openvswitch                [ 1] +
python-glanceclient        [ 1] +
python-heatclient          [ 1] +
python-keystonemiddleware  [ 1] +
python-neutronclient       [ 1] +
python-novaclient          [ 1] +
python-openstackclient     [ 1] +
rdopkg                     [ 1] +
RFEs                       [ 2] ++
tempest                    [ 1] +

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed.
(141 bugs) ### diskimage-builder (1 bug) [1176269 ] http://bugzilla.redhat.com/1176269 (NEW) Component: diskimage-builder Last change: 2015-01-08 Summary: rhel-common element attempts to install rhel-7-server on RHEL 6 image ### distribution (12 bugs) [999587 ] http://bugzilla.redhat.com/999587 (ASSIGNED) Component: distribution Last change: 2015-01-07 Summary: sos report tracker bug [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-03-27 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO: Packages needed to support AMQP1.0 [1116972 ] http://bugzilla.redhat.com/1116972 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO website: libffi-devel is required to run Tempest (at least on CentOS 6.5) [1116974 ] http://bugzilla.redhat.com/1116974 (NEW) Component: distribution Last change: 2015-03-20 Summary: Running Tempest according to the instructions @ RDO website fails with missing tox.ini error [1116975 ] http://bugzilla.redhat.com/1116975 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO website: configuring TestR according to website, breaks Tox completely [1117007 ] http://bugzilla.redhat.com/1117007 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO website: newer python-nose is required to run Tempest (at least on CentOS 6.5) [update to http://open stack.redhat.com/Testing_IceHouse_using_Tempest] [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2014-12-22 Summary: [TripleO] Provisioning Images filter doesn't work [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2014-12-22 Summary: [TripleO] text of uninitialized deployment needs rewording [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-03-19 Summary: SSL supports only broken crypto [1187309 ] http://bugzilla.redhat.com/1187309 (NEW) Component: distribution Last change: 2015-03-20 Summary: New package - python-cliff-tablib [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-03-29 Summary: Tracking bug for bugs that Lars is interested in ### dnsmasq (2 bugs) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2014-12-18 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) [1188423 ] http://bugzilla.redhat.com/1188423 (NEW) Component: dnsmasq Last change: 2015-03-22 Summary: RHEL / Centos 7-based instances lose their default IPv4 gateway ### instack (2 bugs) [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-03-17 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-03-12 Summary: instack-update-overcloud fails because it tries to access non-existing files ### instack-undercloud (5 bugs) [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-03-29 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last 
change: 2015-01-08 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-03-19 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-03-17 Summary: missing dependency on which [1203716 ] http://bugzilla.redhat.com/1203716 (NEW) Component: instack-undercloud Last change: 2015-03-26 Summary: diskimage-builder depends on internal URL ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-02-23 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (1 bug) [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions ### openstack-cinder (12 bugs) [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-03-27 Summary: Configuration file in share forces ignore of auth_uri [1149064 ] http://bugzilla.redhat.com/1149064 (NEW) Component: openstack-cinder Last change: 2014-12-12 Summary: Fail to delete cinder volume on Centos7, using RDO juno [1157939 ] http://bugzilla.redhat.com/1157939 (NEW) Component: openstack-cinder Last change: 2014-10-28 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-cinder Last change: 2014-10-28 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-02-20 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 
(NEW) Component: openstack-foreman-installer Last change: 2015-03-18 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-03-25 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (1 bug) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-03 Summary: Split glance-api and glance-registry ### openstack-horizon (1 bug) [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-horizon Last change: 2014-10-24 Summary: Permissions issue prevents CSS from rendering ### openstack-keystone (2 bugs) [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2014-11-26 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM ### openstack-neutron (8 bugs) [986507 ] http://bugzilla.redhat.com/986507 (ASSIGNED) Component: openstack-neutron Last change: 2015-03-18 Summary: RFE: IPv6 Feature Parity [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1149504 ] http://bugzilla.redhat.com/1149504 (NEW) Component: openstack-neutron Last change: 2014-10-12 Summary: Instances won't obtain IPv6 address and gateway when using SLAAC provided by OpenStack [1149505 ] http://bugzilla.redhat.com/1149505 (NEW) Component: openstack-neutron Last change: 2014-10-12 Summary: Instances won't obtain IPv6 address and gateway when using Stateful DHCPv6 provided by OpenStack [1149897 ] http://bugzilla.redhat.com/1149897 (NEW) Component: openstack-neutron Last change: 2015-03-29 Summary: neutron-openvswitch-agent service creates high polkitd usage [1159733 ] http://bugzilla.redhat.com/1159733 (NEW) Component: openstack-neutron Last change: 2015-03-22 Summary: no ports available when associating floating ips to new instance [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false ### openstack-nova (14 bugs) [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-03-26 Summary: Ensure translations are installed correctly and picked up at runtime [1123298 ] http://bugzilla.redhat.com/1123298 (NEW) Component: openstack-nova Last change: 2015-04-03 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2014-10-01 Summary: nova: fail to edit project quota with DataError from nova [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2014-10-06 Summary: nova object store allow get object after date exires [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2014-12-15 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1154201 ] 
http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2014-10-20 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2014-10-27 Summary: v4-fixed-ip= not working with juno nova networking [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-03-15 Summary: horizon console uses http when horizon is set to use ssl [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2014-11-09 Summary: novnc init script doesnt write to log [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1189347 ] http://bugzilla.redhat.com/1189347 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-03-20 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version ### openstack-packstack (33 bugs) [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-03-18 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-03-23 Summary: [RFE] Include Fedora cloud images in some nice way [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-03-18 Summary: API services has all admin permission instead of service [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-03-20 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2014-06-11 Summary: [RFE] SPICE support in packstack [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-04-01 Summary: Packstack missing ML2 Mellanox Mechanism Driver [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2014-12-18 Summary: Offset Swift ports to 6200 [1141608 ] http://bugzilla.redhat.com/1141608 (NEW) Component: openstack-packstack Last change: 2015-03-30 Summary: PackStack sets unrecognized "net.bridge.bridge-nf- call*" keys on up to date CentOS 6 [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2014-10-02 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1153128 ] http://bugzilla.redhat.com/1153128 (NEW) Component: openstack-packstack Last change: 2014-11-21 Summary: Cannot start nova-network on juno - Centos7 [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2014-11-21 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1160885 
] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-01-25 Summary: rabbitmq wont start if ssl is required [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2014-12-18 Summary: centos7 fails to install glance [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-01-03 Summary: Error: service-update is not currently supported by the keystone sql driver [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-03-19 Summary: misleading exit message on fail [1174749 ] http://bugzilla.redhat.com/1174749 (NEW) Component: openstack-packstack Last change: 2014-12-17 Summary: Failed to start httpd service on Fedora 20 (with packstack utility) [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2014-12-21 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2014-12-23 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-03-19 Summary: packstack --allinone fails when starting neutron server [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-01-25 Summary: glance provision disregards keystone region setting [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-01-30 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-02-13 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-03-17 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-03-18 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. 
[1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-03-18 Summary: "private" network created by packstack is not owned by any tenant [1205772 ] http://bugzilla.redhat.com/1205772 (NEW) Component: openstack-packstack Last change: 2015-03-25 Summary: support the ldap user_enabled_invert parameter [1205912 ] http://bugzilla.redhat.com/1205912 (NEW) Component: openstack-packstack Last change: 2015-03-26 Summary: allow to specify admin name and email [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-03-26 Summary: provision_glance does not honour proxy setting when getting image [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-03-30 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-04-02 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-03-30 Summary: auto enablement of the extras channel [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-04-02 Summary: packstack --allinone fails during _keystone.pp [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-04-03 Summary: add DiskFilter to scheduler_default_filters ### openstack-puppet-modules (6 bugs) [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-02-01 Summary: Offset Swift ports to 6200 [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-02-15 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2014-10-22 Summary: Increase the rpc_thread_pool_size [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-03-13 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-02-13 Summary: Add puppet-openstack_extras to opm [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-02-13 Summary: Add puppet-tripleo and puppet-gnocchi to opm ### openstack-selinux (8 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: "glance image-list" fails on F21, causing packstack install to fail [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-03-30 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 ### openstack-swift (3 bugs) [1117012 ] http://bugzilla.redhat.com/1117012 (NEW) Component: openstack-swift Last change: 2015-03-30 Summary: openstack-swift-proxy depends on openstack-swift- plugin-swift3 [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (10 bugs) [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1162333 ] http://bugzilla.redhat.com/1162333 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: Instack fails to complete instack-virt-setup with syntax error near unexpected token `newline' [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) 
Component: openstack-tripleo Last change: 2015-01-08 Summary: User can not login into the overcloud horizon using the proper credentials [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-01-31 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-03-25 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-tripleo Last change: 2015-04-06 Summary: Introspection for bare metals times out after more than an hour ### openstack-tripleo-heat-templates (1 bug) [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-03-22 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-01-29 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-02-28 Summary: mariadb my.cnf socket path does not exist ### openstack-utils (2 bugs) [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2014-11-07 Summary: Can't enable OpenStack service after openstack-service disable [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-03-12 Summary: openstack-service tries to restart neutron-ovs-cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (NEW) Component: openvswitch Last change: 2015-04-05 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### python-glanceclient (1 bug) [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-heatclient (1 bug) [1205675 ] http://bugzilla.redhat.com/1205675 (NEW) Component: python-heatclient Last change: 2015-03-29 Summary: When passing --pre-create to the heat stack-create command, the command is ignored ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-03-02 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (1 bug) [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-04-03 Summary: Missing versioned dependency on python-novaclient ### python-openstackclient (1 bug) [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-03-04 Summary: Add --user to project list command to filter projects by user ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (2 bugs) [1158517 ]
http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-03-30 Summary: [RFE] Provide easy to use upgrade tool [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot ### tempest (1 bug) [1154633 ] http://bugzilla.redhat.com/1154633 (NEW) Component: tempest Last change: 2014-10-20 Summary: Tempest_config failure (RDO, Juno, CentOS 7 - heat related?) ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. (146 bugs) ### distribution (3 bugs) [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2014-09-19 Summary: update el6 icehouse kombu packages for improved performance [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2014-10-16 Summary: Tuskar Fails After Remove/Reinstall Of RDO [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr ### instack-undercloud (1 bug) [1204126 ] http://bugzilla.redhat.com/1204126 (POST) Component: instack-undercloud Last change: 2015-03-26 Summary: [RFE] Enable deployment of Ceph via instack ### openstack-ceilometer (2 bugs) [1001832 ] http://bugzilla.redhat.com/1001832 (MODIFIED) Component: openstack-ceilometer Last change: 2014-01-13 Summary: sos report tracker bug - Ceilometer [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (6 bugs) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [999651 ] http://bugzilla.redhat.com/999651 (POST) Component: openstack-cinder Last change: 2014-03-17 Summary: splitting the sos report to modules - Cinder [1007515 ] http://bugzilla.redhat.com/1007515 (MODIFIED) Component: openstack-cinder Last change: 2013-11-27 Summary: cinder [Havana]: we try to create a backup although cinder-backup service is down [1010039 ] http://bugzilla.redhat.com/1010039 (MODIFIED) Component: openstack-cinder Last change: 2013-11-27 Summary: Grizzly -> Havana upgrade fails during db_sync [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (4 bugs) [999653 ] http://bugzilla.redhat.com/999653 (POST) Component: openstack-glance Last change: 2015-01-07 Summary: splitting the sos report to modules - Glance [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue ###
openstack-heat (1 bug) [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (13 bugs) [999660 ] http://bugzilla.redhat.com/999660 (POST) Component: openstack-neutron Last change: 2015-02-01 Summary: splitting the sos report to modules - Neutron [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' ### openstack-nova (2 bugs) [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. 
[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail ### openstack-packstack (55 bugs) [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install succeeds even when puppet completely fails [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-02-01 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-01-07 Summary: please give greater control over use of EPEL [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-01-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: rdo release RPM not installed on all fedora hosts [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: Warning message for installing RDO kernel needs to be adjusted [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2014-10-28 Summary: RFE: support setting up apache to serve keystone requests [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2014-04-21 Summary: openstack-dashboard django dependency conflict stops packstack execution [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2014-04-29 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-02-01 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2014-04-29 Summary: packstack reports installation completed successfully but nothing installed [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2014-02-05 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-02-20 Summary: Packstack creates duplicate cirros images in glance [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2014-10-02 Summary: Packstack neutron plugin does not check if Nova is disabled [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: qpid should enable SSL [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2014-08-19 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2014-11-26 Summary: packstack requires 2 runs to install ceilometer [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2014-12-01 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2014-10-02 Summary: packstack fails if iptables.service is not available [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2014-06-02 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2014-10-02 Summary: Dashboard port firewall rule is not permanent [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2014-04-14 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-03-13 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2014-03-06 Summary: Change packstack to use openstack-puppet-modules [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 
2014-04-14 Summary: Fedora20: packstack gives traceback when SElinux permissive [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2014-03-25 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2014-05-13 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2014-06-23 Summary: Havana Fedora 19, packstack fails w/ mysql error [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2014-12-19 Summary: packstack package should depend on yum-utils [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-03-29 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2014-06-17 Summary: el7 Icehouse: Nagios installation fails [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-03-13 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2014-11-25 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-03-13 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [1139246 ] http://bugzilla.redhat.com/1139246 (POST) Component: openstack-packstack Last change: 2014-09-12 Summary: Refactor cinder plugin to support multiple cinder backends [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2014-10-27 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2014-10-31 Summary: packstack icehouse doesn't install anything because of repo
[1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-03-11 Summary: packstack fails on centos6 with missing systemctl
[1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-02-24 Summary: packstack doesn't configure rabbitmq to allow non-localhost connections to 'guest' user
[1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-02-24 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32
[1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2014-12-19 Summary: Disabling glance deployment does not work if you don't disable demo provisioning
[1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-03-29 Summary: RabbitMQ fails to start if configured with ssl

### openstack-puppet-modules (16 bugs)

[1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-02-05 Summary: explicit check for pymongo is incorrect
[1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-02-05 Summary: cinder modules require glance installed
[1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-02-15 Summary: horizon log errors
[1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-02-05 Summary: netns.py syntax error
[1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-06-16 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6
[1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-04-08 Summary: prescript.pp does not ensure iptables-services package installation
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-12 Summary: Horizon help url in RDO points to the RHOS documentation
[1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-04-24 Summary: prescript puppet - missing dependency package iptables-services
[1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-10-01 Summary: swift.pp: Could not find command 'restorecon'
[1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig/network-scripts/ifcfg-br-{int,tun}
[1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure"
[1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-02-17 Summary: add aviator
[1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-01-21 Summary: packstack chokes on ironic - centos7 + juno
[1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support
[1205757 ] http://bugzilla.redhat.com/1205757 (POST) Component: openstack-puppet-modules Last change: 2015-03-29 Summary: puppet-keystone support the ldap user_enabled_invert parameter
[1207701 ] http://bugzilla.redhat.com/1207701 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-04-03 Summary: Unable to attach cinder volume to instance

### openstack-sahara (1 bug)

[1184522 ] http://bugzilla.redhat.com/1184522 (MODIFIED) Component: openstack-sahara Last change: 2015-03-27 Summary: launch_command.py missing

### openstack-selinux (12 bugs)

[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent
[1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service
[1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?"
[1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances
[1093297 ] http://bugzilla.redhat.com/1093297 (POST) Component: openstack-selinux Last change: 2014-05-15 Summary: selinux AVC RHEL7 and RDO - Neutron
[1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors
[1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp
[1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications
[1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access
[1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors
[1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7)
[1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later

### openstack-swift (2 bugs)

[997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages
[999662 ] http://bugzilla.redhat.com/999662 (POST) Component: openstack-swift Last change: 2015-01-07 Summary: splitting the sos report to modules - Swift

### openstack-tuskar-ui (3 bugs)

[1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-02-09 Summary: Registering nodes with the IPMI driver always fails
[1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-03-23 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path
[1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-03-23 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py

### openstack-utils (1 bug)

[1090648 ] http://bugzilla.redhat.com/1090648 (POST) Component: openstack-utils Last change: 2014-05-21 Summary: glance-manage db_sync silently fails to prepare the database

### openvswitch (2 bugs)

[1193429 ] http://bugzilla.redhat.com/1193429 (ON_QA) Component: openvswitch Last change: 2015-03-30 Summary: failed to flow_del
[1200918 ] http://bugzilla.redhat.com/1200918 (ON_QA) Component: openvswitch Last change: 2015-03-30 Summary: core dump ov openvswitch

### python-cinderclient (1 bug)

[1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run

### python-django-openstack-auth (1 bug)

[985570 ] http://bugzilla.redhat.com/985570 (ON_QA) Component: python-django-openstack-auth Last change: 2013-07-18 Summary: Please upgrade to 1.0.9 or better

### python-glanceclient (2 bugs)

[1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch
[1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock

### python-heatclient (3 bugs)

[1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr
[1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO
[1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed

### python-keystoneclient (3 bugs)

[971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-01-07 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO]
[973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-01-07 Summary: user-get fails when using IDs which are not UUIDs
[1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2014-01-13 Summary: keystone missing tab completion

### python-neutronclient (3 bugs)

[1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient
[1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request
[1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info()

### python-novaclient (2 bugs)

[947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-01-07 Summary: nova commands fail with gnomekeyring IOError
[1001107 ] http://bugzilla.redhat.com/1001107 (MODIFIED) Component: python-novaclient Last change: 2013-09-04 Summary: Please upgrade to 2.14.1

### python-openstackclient (1 bug)

[1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0

### python-oslo-config (1 bug)

[1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2014-06-17 Summary: oslo.config >=1.2.1 is required for trove-manage

### python-quantumclient (1 bug)

[989789 ] http://bugzilla.redhat.com/989789 (MODIFIED) Component: python-quantumclient Last change: 2013-07-30 Summary: warnings about missing editors

### python-swiftclient (1 bug)

[1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation

### rdopkg (1 bug)

[1127309 ] http://bugzilla.redhat.com/1127309 (POST) Component: rdopkg Last change: 2014-09-01 Summary: rdopkg version 0.18 fails to find rdoupdate.bsources.koji_

--
Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/

From rbowen at redhat.com Mon Apr 6 17:46:08 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 06 Apr 2015 13:46:08 -0400
Subject: [Rdo-list] RDO/OpenStack meetups coming up (Monday, April 06, 2015)
Message-ID: <5522C660.3040000@redhat.com>

It's a really slow week in OpenStack meetups. The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events

If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.

--Rich

* Monday, April 06 in Durham, NC, US: April Meetup: "I Can Haz Moar Networks?" w/Midokura & Cumulus Networks - http://www.meetup.com/Triangle-OpenStack-Meetup/events/221188847/

* Tuesday, April 07 in Melbourne, AU: April Sydney Meetup - http://www.meetup.com/Australian-OpenStack-User-Group/events/220202269/

* Wednesday, April 08 in New York, NY, US: Smart OpenStack Powered Automation/Neutron Q&A with James Denton - http://www.meetup.com/OpenStack-for-Enterprises-NYC/events/221414475/

* Wednesday, April 08 in Washington, DC, US: Software Defined Networks (SDN) & Linux Based Network OS (#20) - http://www.meetup.com/OpenStackDC/events/219873503/

* Thursday, April 09 in Prague, CZ: OpenStack Howto part 1 - Install and Run - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/221143227/

* Thursday, April 09 in Atlanta, GA, US: Are you getting the most out of Cinder block storage in OpenStack? - http://www.meetup.com/openstack-atlanta/events/219694781/

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rdo-info at redhat.com Mon Apr 6 18:54:24 2015
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 6 Apr 2015 18:54:24 +0000
Subject: [Rdo-list] [RDO] RDO blog roundup, April 6 2015
Message-ID: <0000014c901568c2-bfe48927-82d8-4597-941e-4abcb27c36f1-000000@email.amazonses.com>

rbowen started a discussion.

RDO blog roundup, April 6 2015

---
Follow the link below to check it out:
https://www.rdoproject.org/forum/discussion/1011/rdo-blog-roundup-april-6-2015

Have a great day!
From abeekhof at redhat.com Wed Apr 8 02:12:56 2015
From: abeekhof at redhat.com (Andrew Beekhof)
Date: Wed, 8 Apr 2015 12:12:56 +1000
Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs
Message-ID:

Previously, in order to monitor the health of compute nodes and the services running on them, we had to create single-node clusters due to corosync's scaling limits.
We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits.

Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node.

The main difference from the previous deployment model is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane.
The compute nodes do not become full members of the cluster and they no longer require the full cluster stack; instead they run pacemaker_remoted, which acts as a conduit.

Implementation Details:

- Pacemaker monitors the connection to pacemaker_remoted to verify whether the node is reachable or not.
  Failure to talk to a node triggers recovery action.

- Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute).

- If a service fails to start, any services that depend on the FAILED service will not be started.
  This avoids the issue of adding a broken node (back) to the pool.

- If a service fails to stop, the node where the service is running will be fenced.
  This is necessary to guarantee data integrity and is a core HA concept (for the purposes of this particular discussion, please take this as a given).

- If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted.
  Remember that failure to stop will trigger a fencing action.

- A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time.

With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere.
When a compute node fails, Pacemaker will:

1. Execute 'nova service-disable'
2. fence (power off) the failed compute node
3. fence_compute off (waiting for nova to detect the compute node is gone)
4. fence_compute on (a no-op unless the host happens to be up already)
5. Execute 'nova service-enable' when the compute node returns

Technically, steps 1 and 5 are optional; they are aimed at improving the user experience by immediately excluding a failed host from nova scheduling.
The only benefit is faster scheduling of VMs during a failure (nova does not have to recognize a host is down, time out and subsequently schedule the VM on another host).

Step 2 will make sure the host is completely powered off and nothing is running on the host.
Optionally, you can have the failed host reboot, which would potentially allow it to re-enter the pool.

We have an implementation for Step 3, but the ideal solution depends on extensions to the nova API.
Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host.
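For reference, steps 1 and 5 plus the evacuation map onto plain novaclient commands, roughly as follows (a hedged sketch: the host name is hypothetical, and in this deployment it is Pacemaker/fence_compute that drives these calls, not an operator):

  # stop nova scheduling new VMs onto the failed node
  nova service-disable failed-compute-0 nova-compute
  # after fencing, restart the node's instances elsewhere
  nova host-evacuate failed-compute-0
  # once the node is back and healthy, return it to the pool
  nova service-enable failed-compute-0 nova-compute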
The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead.

To take advantage of the VM recovery features:

- VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS)
- If a VM is not running on shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter)
- RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything.
- Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc)

Detailed instructions for deploying this new model are of course available on Github:

https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation

It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field.
Please contact me if you encounter any issues.

-- Andrew

From whayutin at redhat.com Wed Apr 8 15:23:32 2015
From: whayutin at redhat.com (whayutin)
Date: Wed, 08 Apr 2015 11:23:32 -0400
Subject: [Rdo-list] [CI] rdo-kilo delorean-trunk status
In-Reply-To: <1427916131.3130.44.camel@redhat.com>
References: <1427916131.3130.44.camel@redhat.com>
Message-ID: <1428506612.11116.3.camel@redhat.com>

Latest status:

Fedora 21 kilo:
Install works, test to launch instance fails w/

line 77, in wait_for_server_status server_id=server_id)
BuildErrorException: Server 0548ae72-6fb0-4cd8-8284-dbfba351802b failed
to build and is in ERROR status
Details: {u'message': u'No valid host was found. There are not enough
hosts available.', u'code': 500, u'created': u'2015-04-08T05:24:32Z'}

Centos-7 kilo:
failing on missing dep 'docutils'

00:24:24.431 TASK: [product/packstack | generate answer file] ******************************
00:24:24.431 [[ previous task time: 0:00:32.666630 = 32.67s / 1429.14s ]]
packstack --gen-answer-file=/root/packstack_config.txt
00:24:24.431 failed: [rdo-pksk-7p2cq-rhos-ci-27-controller] => {"changed": true, "cmd": ["packstack", "--gen-answer-file=/root/packstack_config.txt"], "delta": "0:00:00.207339", "end": "2015-04-08 15:00:34.507497", "rc": 1, "start": "2015-04-08 15:00:34.300158", "warnings": []}
00:24:24.431 stderr: ERROR:root:Failed to load plugin from file prescript_000.py
00:24:24.431 ERROR:root:Traceback (most recent call last):
00:24:24.431   File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 885, in loadPlugins
00:24:24.431     moduleobj = __import__(moduleToLoad)
00:24:24.431   File "/usr/lib/python2.7/site-packages/packstack/plugins/prescript_000.py", line 30, in <module>
00:24:24.431     from packstack.modules.documentation import update_params_usage
00:24:24.431   File "/usr/lib/python2.7/site-packages/packstack/modules/documentation.py", line 20, in <module>
00:24:24.431     from docutils import core
00:24:24.431 ImportError: No module named docutils

On Wed, 2015-04-01 at 15:22 -0400, whayutin wrote:
> Greetings,
> RDO-Kilo
>
> Fedora 21: tests passing
>
> Centos-7
> Glance is failing to start.
>
> glance-api.log
> 2015-04-01 18:19:54.946 15135 ERROR glance.common.config [-] Unable to
> load glance-api-keystone from configuration
> file /usr/share/glance/glance-api-dist-paste.ini.
> Got: ImportError('No module named elasticsearch',)
> 2015-04-01 18:34:38.863 1146 ERROR glance.common.config [-] Unable to
> load glance-api-keystone from configuration
> file /usr/share/glance/glance-api-dist-paste.ini.
> Got: ImportError('No module named elasticsearch',)
>
> Packstack finishes the install, although it probably should not have.
> Error was caught when the CI uploads a cirros image to glance and
> failed.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From javier.pena at redhat.com Wed Apr 8 16:01:58 2015
From: javier.pena at redhat.com (Javier Pena)
Date: Wed, 8 Apr 2015 12:01:58 -0400 (EDT)
Subject: [Rdo-list] [CI] rdo-kilo delorean-trunk status
In-Reply-To: <1428506612.11116.3.camel@redhat.com>
References: <1427916131.3130.44.camel@redhat.com> <1428506612.11116.3.camel@redhat.com>
Message-ID: <618348329.12500939.1428508918205.JavaMail.zimbra@redhat.com>

> Latest status:
>
> Fedora 21 kilo:
> Install works, test to launch instance fails w/
>
> line 77, in wait_for_server_status server_id=server_id)
> BuildErrorException: Server 0548ae72-6fb0-4cd8-8284-dbfba351802b failed
> to build and is in ERROR status Details: {u'message': u'No valid host
> was found. There are not enough hosts available.', u'code': 500,
> u'created': u'2015-04-08T05:24:32Z'}
>

Addressed by https://review.openstack.org/171182 . As soon as packaging is fixed (please see below), it should work.

> Centos-7 kilo:
> failing on missing dep 'docutils'
>

This dependency was introduced by https://review.openstack.org/167667 .
https://review.gerrithub.io/#/c/229713/ (not yet merged) fixes it.

Regards,
Javier

> 00:24:24.431 TASK: [product/packstack | generate answer file]
> ******************************
> 00:24:24.431 [[ previous task time: 0:00:32.666630 = 32.67s / 1429.14s ]]
> packstack --gen-answer-file=/root/packstack_config.txt
> 00:24:24.431 failed: [rdo-pksk-7p2cq-rhos-ci-27-controller] =>
> {"changed": true, "cmd": ["packstack",
> "--gen-answer-file=/root/packstack_config.txt"], "delta":
> "0:00:00.207339", "end": "2015-04-08 15:00:34.507497", "rc": 1, "start":
> "2015-04-08 15:00:34.300158", "warnings": []}
> 00:24:24.431 stderr: ERROR:root:Failed to load plugin from file
> prescript_000.py
> 00:24:24.431 ERROR:root:Traceback (most recent call last):
> 00:24:24.431   File
> "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py",
> line 885, in loadPlugins
> 00:24:24.431     moduleobj = __import__(moduleToLoad)
> 00:24:24.431   File
> "/usr/lib/python2.7/site-packages/packstack/plugins/prescript_000.py",
> line 30, in <module>
> 00:24:24.431     from packstack.modules.documentation import
> update_params_usage
> 00:24:24.431   File
> "/usr/lib/python2.7/site-packages/packstack/modules/documentation.py",
> line 20, in <module>
> 00:24:24.431     from docutils import core
> 00:24:24.431 ImportError: No module named docutils
>
>
> On Wed, 2015-04-01 at 15:22 -0400, whayutin wrote:
> > Greetings,
> > RDO-Kilo
> >
> > Fedora 21: tests passing
> >
> > Centos-7
> > Glance is failing to start.
> >
> > glance-api.log
> > 2015-04-01 18:19:54.946 15135 ERROR glance.common.config [-] Unable to
> > load glance-api-keystone from configuration
> > file /usr/share/glance/glance-api-dist-paste.ini.
> > Got: ImportError('No module named elasticsearch',) > > 2015-04-01 18:34:38.863 1146 ERROR glance.common.config [-] Unable to > > load glance-api-keystone from configuration > > file /usr/share/glance/glance-api-dist-paste.ini. > > Got: ImportError('No module named elasticsearch',) > > > > Packstack finishes the install, although it probably should not have. > > Error was caught when the CI uploads a cirros image to glance and > > failed. > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From Arkady_Kanevsky at dell.com Wed Apr 8 20:52:43 2015 From: Arkady_Kanevsky at dell.com (Arkady_Kanevsky at dell.com) Date: Wed, 8 Apr 2015 15:52:43 -0500 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: References: Message-ID: <336424C1A5A44044B29030055527AA7504B4F54D36@AUSX7MCPS301.AMER.DELL.COM> Dell - Internal Use - Confidential Does that work with HA controller cluster where pacemaker non-remote runs? -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Andrew Beekhof Sent: Tuesday, April 07, 2015 9:13 PM To: rdo-list at redhat.com; rhos-pgm Cc: milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs Previously in order monitor the healthiness of compute nodes and the services running on them, we had to create single node clusters due to corosync's scaling limits. We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits. Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node. The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. Implementation Details: - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. Failure to talk to a node triggers recovery action. - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). - If a service fails to start, any services that depend on the FAILED service will not be started. This avoids the issue of adding a broken node (back) to the pool. - If a service fails to stop, the node where the service is running will be fenced. This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. Remember that failure to stop will trigger a fencing action. 
- A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. When a compute node fails, Pacemaker will: 1. Execute 'nova service-disable' 2. fence (power off) the failed compute node 3. fence_compute off (waiting for nova to detect the compute node is gone) 4. fence_compute on (a no-op unless the host happens to be up already) 5. Execute 'nova service-enable' when the compute node returns Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). Step 2 will make sure the host is completely powered off and nothing is running on the host. Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. To take advantage of the VM recovery features: - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) Detailed instructions for deploying this new model are of course available on Github: https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. Please contact me if you encounter any issues. -- Andrew _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From abeekhof at redhat.com Wed Apr 8 21:52:27 2015 From: abeekhof at redhat.com (Andrew Beekhof) Date: Thu, 9 Apr 2015 07:52:27 +1000 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: <336424C1A5A44044B29030055527AA7504B4F54D36@AUSX7MCPS301.AMER.DELL.COM> References: <336424C1A5A44044B29030055527AA7504B4F54D36@AUSX7MCPS301.AMER.DELL.COM> Message-ID: > On 9 Apr 2015, at 6:52 am, Arkady_Kanevsky at DELL.com wrote: > > Dell - Internal Use - Confidential > > Does that work with HA controller cluster where pacemaker non-remote runs? I'm not sure I understand the question. The compute and control nodes are all part of a single cluster, its just that the compute nodes are not running a full stack. Or do you mean, "could the same approach work for control nodes"? 
For example, could this be used to manage more than 16 swift ACO nodes... Short answer: yes Longer answer: yes, but there would likely need additional integration work required so don't expect it in a hurry That specific case is on my mental list of options to explore in the future. -- Andrew > > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Andrew Beekhof > Sent: Tuesday, April 07, 2015 9:13 PM > To: rdo-list at redhat.com; rhos-pgm > Cc: milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu > Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs > > Previously in order monitor the healthiness of compute nodes and the services running on them, we had to create single node clusters due to corosync's scaling limits. > We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits. > > Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node. > > The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. > The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. > > Implementation Details: > > - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. > Failure to talk to a node triggers recovery action. > > - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). > > - If a service fails to start, any services that depend on the FAILED service will not be started. > This avoids the issue of adding a broken node (back) to the pool. > > - If a service fails to stop, the node where the service is running will be fenced. > This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). > > - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. > Remember that failure to stop will trigger a fencing action. > > - A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. > > With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. > When a compute node fails, Pacemaker will: > > 1. Execute 'nova service-disable' > 2. fence (power off) the failed compute node 3. fence_compute off (waiting for nova to detect the compute node is gone) 4. fence_compute on (a no-op unless the host happens to be up already) 5. Execute 'nova service-enable' when the compute node returns > > Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. > The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). 
> > Step 2 will make sure the host is completely powered off and nothing is running on the host. > Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. > > We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. > Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. > The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. > > > To take advantage of the VM recovery features: > > - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) > - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) > - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. > - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) > > > Detailed instructions for deploying this new model are of course available on Github: > > https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation > > It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. > Please contact me if you encounter any issues. > > -- Andrew > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From Arkady_Kanevsky at dell.com Wed Apr 8 22:38:11 2015 From: Arkady_Kanevsky at dell.com (Arkady_Kanevsky at dell.com) Date: Wed, 8 Apr 2015 17:38:11 -0500 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: References: <336424C1A5A44044B29030055527AA7504B4F54D36@AUSX7MCPS301.AMER.DELL.COM> Message-ID: <336424C1A5A44044B29030055527AA7504B4F54D89@AUSX7MCPS301.AMER.DELL.COM> Dell - Internal Use - Confidential Andrew, Say you have 3 controller nodes cluster where all services but compute and swift run. HA proxy and pacemaker are configured on that controller cluster with each service configured under HA proxy and pacemaker. Now you are trying to define another pacemaker cluster that includes original controller cluster plus all compute nodes. If you can put all nodes into one cluster and then define that a service runs on a subset of its node then it would work. Integration and deployment tooling can be handled if that works. Thanks, Arkady -----Original Message----- From: Andrew Beekhof [mailto:abeekhof at redhat.com] Sent: Wednesday, April 08, 2015 4:52 PM To: Kanevsky, Arkady Cc: rdo-list at redhat.com; milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu Subject: Re: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs > On 9 Apr 2015, at 6:52 am, Arkady_Kanevsky at DELL.com wrote: > > Dell - Internal Use - Confidential > > Does that work with HA controller cluster where pacemaker non-remote runs? I'm not sure I understand the question. The compute and control nodes are all part of a single cluster, its just that the compute nodes are not running a full stack. 
Or do you mean, "could the same approach work for control nodes"? For example, could this be used to manage more than 16 swift ACO nodes... Short answer: yes Longer answer: yes, but there would likely need additional integration work required so don't expect it in a hurry That specific case is on my mental list of options to explore in the future. -- Andrew > > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Andrew Beekhof > Sent: Tuesday, April 07, 2015 9:13 PM > To: rdo-list at redhat.com; rhos-pgm > Cc: milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu > Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs > > Previously in order monitor the healthiness of compute nodes and the services running on them, we had to create single node clusters due to corosync's scaling limits. > We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits. > > Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node. > > The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. > The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. > > Implementation Details: > > - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. > Failure to talk to a node triggers recovery action. > > - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). > > - If a service fails to start, any services that depend on the FAILED service will not be started. > This avoids the issue of adding a broken node (back) to the pool. > > - If a service fails to stop, the node where the service is running will be fenced. > This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). > > - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. > Remember that failure to stop will trigger a fencing action. > > - A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. > > With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. > When a compute node fails, Pacemaker will: > > 1. Execute 'nova service-disable' > 2. fence (power off) the failed compute node 3. fence_compute off (waiting for nova to detect the compute node is gone) 4. fence_compute on (a no-op unless the host happens to be up already) 5. Execute 'nova service-enable' when the compute node returns > > Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. 
> The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). > > Step 2 will make sure the host is completely powered off and nothing is running on the host. > Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. > > We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. > Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. > The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. > > > To take advantage of the VM recovery features: > > - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) > - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) > - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. > - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) > > > Detailed instructions for deploying this new model are of course available on Github: > > https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation > > It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. > Please contact me if you encounter any issues. > > -- Andrew > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From abeekhof at redhat.com Wed Apr 8 22:53:47 2015 From: abeekhof at redhat.com (Andrew Beekhof) Date: Thu, 9 Apr 2015 08:53:47 +1000 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: <336424C1A5A44044B29030055527AA7504B4F54D89@AUSX7MCPS301.AMER.DELL.COM> References: <336424C1A5A44044B29030055527AA7504B4F54D36@AUSX7MCPS301.AMER.DELL.COM> <336424C1A5A44044B29030055527AA7504B4F54D89@AUSX7MCPS301.AMER.DELL.COM> Message-ID: <687E5262-BEC5-48FB-A295-CC6A47121DE0@redhat.com> > On 9 Apr 2015, at 8:38 am, Arkady_Kanevsky at DELL.com wrote: > > Dell - Internal Use - Confidential > > Andrew, > Say you have 3 controller nodes cluster where all services but compute and swift run. > HA proxy and pacemaker are configured on that controller cluster with each service configured under HA proxy and pacemaker. > Now you are trying to define another pacemaker cluster that includes original controller cluster plus all compute nodes. Its not another cluster, machines cannot be part of multiple clusters, the compute nodes are being added to the existing cluster. > If you can put all nodes into one cluster and then define that a service runs on a subset of its node then it would work. Correct, there are rules in place to ensure services only run in the "correct" subset. 
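For illustration, the role is expressed as a node attribute plus location constraint rules keyed on it, roughly of this shape (a sketch with hypothetical node and resource names, not the literal commands from the scenario):

  pcs property set --node compute-0 osprole=compute
  pcs constraint location nova-compute-clone rule resource-discovery=exclusive score=0 osprole eq compute

Control-plane services get the mirror-image rule with "osprole eq controller", so resource discovery and placement stay within the correct subset of nodes.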
See https://github.com/beekhof/osp-ha-deploy/blob/master/pcmk/compute-managed.scenario#L123 and the comment above it as well as all the "pcs constraint location" entries with "osprole eq compute" > Integration and deployment tooling can be handled if that works. > Thanks, > Arkady > > -----Original Message----- > From: Andrew Beekhof [mailto:abeekhof at redhat.com] > Sent: Wednesday, April 08, 2015 4:52 PM > To: Kanevsky, Arkady > Cc: rdo-list at redhat.com; milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu > Subject: Re: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs > > >> On 9 Apr 2015, at 6:52 am, Arkady_Kanevsky at DELL.com wrote: >> >> Dell - Internal Use - Confidential >> >> Does that work with HA controller cluster where pacemaker non-remote runs? > > I'm not sure I understand the question. > The compute and control nodes are all part of a single cluster, its just that the compute nodes are not running a full stack. > > Or do you mean, "could the same approach work for control nodes"? > For example, could this be used to manage more than 16 swift ACO nodes... > > Short answer: yes > Longer answer: yes, but there would likely need additional integration work required so don't expect it in a hurry > > That specific case is on my mental list of options to explore in the future. > > -- Andrew > >> >> -----Original Message----- >> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Andrew Beekhof >> Sent: Tuesday, April 07, 2015 9:13 PM >> To: rdo-list at redhat.com; rhos-pgm >> Cc: milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu >> Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs >> >> Previously in order monitor the healthiness of compute nodes and the services running on them, we had to create single node clusters due to corosync's scaling limits. >> We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits. >> >> Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node. >> >> The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. >> The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. >> >> Implementation Details: >> >> - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. >> Failure to talk to a node triggers recovery action. >> >> - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). >> >> - If a service fails to start, any services that depend on the FAILED service will not be started. >> This avoids the issue of adding a broken node (back) to the pool. >> >> - If a service fails to stop, the node where the service is running will be fenced. >> This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). 
>> >> - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. >> Remember that failure to stop will trigger a fencing action. >> >> - A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. >> >> With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. >> When a compute node fails, Pacemaker will: >> >> 1. Execute 'nova service-disable' >> 2. fence (power off) the failed compute node 3. fence_compute off (waiting for nova to detect the compute node is gone) 4. fence_compute on (a no-op unless the host happens to be up already) 5. Execute 'nova service-enable' when the compute node returns >> >> Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. >> The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). >> >> Step 2 will make sure the host is completely powered off and nothing is running on the host. >> Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. >> >> We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. >> Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. >> The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. >> >> >> To take advantage of the VM recovery features: >> >> - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) >> - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) >> - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. >> - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) >> >> >> Detailed instructions for deploying this new model are of course available on Github: >> >> https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation >> >> It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. >> Please contact me if you encounter any issues. 
>> >> -- Andrew >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From Arkady_Kanevsky at dell.com Wed Apr 8 22:57:23 2015 From: Arkady_Kanevsky at dell.com (Arkady_Kanevsky at dell.com) Date: Wed, 8 Apr 2015 17:57:23 -0500 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: <687E5262-BEC5-48FB-A295-CC6A47121DE0@redhat.com> References: <336424C1A5A44044B29030055527AA7504B4F54D36@AUSX7MCPS301.AMER.DELL.COM> <336424C1A5A44044B29030055527AA7504B4F54D89@AUSX7MCPS301.AMER.DELL.COM> <687E5262-BEC5-48FB-A295-CC6A47121DE0@redhat.com> Message-ID: <336424C1A5A44044B29030055527AA7504B4F54D92@AUSX7MCPS301.AMER.DELL.COM> Dell - Internal Use - Confidential Thanks Andrew. This is nice. -----Original Message----- From: Andrew Beekhof [mailto:abeekhof at redhat.com] Sent: Wednesday, April 08, 2015 5:54 PM To: Kanevsky, Arkady Cc: rdo-list at redhat.com; milind.manjrekar at redhat.com; pmyers at redhat.com; mgarciam at redhat.com; bjayavel at redhat.com Subject: Re: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs > On 9 Apr 2015, at 8:38 am, Arkady_Kanevsky at DELL.com wrote: > > Dell - Internal Use - Confidential > > Andrew, > Say you have 3 controller nodes cluster where all services but compute and swift run. > HA proxy and pacemaker are configured on that controller cluster with each service configured under HA proxy and pacemaker. > Now you are trying to define another pacemaker cluster that includes original controller cluster plus all compute nodes. Its not another cluster, machines cannot be part of multiple clusters, the compute nodes are being added to the existing cluster. > If you can put all nodes into one cluster and then define that a service runs on a subset of its node then it would work. Correct, there are rules in place to ensure services only run in the "correct" subset. See https://github.com/beekhof/osp-ha-deploy/blob/master/pcmk/compute-managed.scenario#L123 and the comment above it as well as all the "pcs constraint location" entries with "osprole eq compute" > Integration and deployment tooling can be handled if that works. > Thanks, > Arkady > > -----Original Message----- > From: Andrew Beekhof [mailto:abeekhof at redhat.com] > Sent: Wednesday, April 08, 2015 4:52 PM > To: Kanevsky, Arkady > Cc: rdo-list at redhat.com; milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu > Subject: Re: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs > > >> On 9 Apr 2015, at 6:52 am, Arkady_Kanevsky at DELL.com wrote: >> >> Dell - Internal Use - Confidential >> >> Does that work with HA controller cluster where pacemaker non-remote runs? > > I'm not sure I understand the question. > The compute and control nodes are all part of a single cluster, its just that the compute nodes are not running a full stack. > > Or do you mean, "could the same approach work for control nodes"? > For example, could this be used to manage more than 16 swift ACO nodes... > > Short answer: yes > Longer answer: yes, but there would likely need additional integration work required so don't expect it in a hurry > > That specific case is on my mental list of options to explore in the future. 
> > -- Andrew > >> >> -----Original Message----- >> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Andrew Beekhof >> Sent: Tuesday, April 07, 2015 9:13 PM >> To: rdo-list at redhat.com; rhos-pgm >> Cc: milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji Jayavelu >> Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs >> >> Previously in order monitor the healthiness of compute nodes and the services running on them, we had to create single node clusters due to corosync's scaling limits. >> We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits. >> >> Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node. >> >> The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. >> The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. >> >> Implementation Details: >> >> - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. >> Failure to talk to a node triggers recovery action. >> >> - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). >> >> - If a service fails to start, any services that depend on the FAILED service will not be started. >> This avoids the issue of adding a broken node (back) to the pool. >> >> - If a service fails to stop, the node where the service is running will be fenced. >> This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). >> >> - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. >> Remember that failure to stop will trigger a fencing action. >> >> - A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. >> >> With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. >> When a compute node fails, Pacemaker will: >> >> 1. Execute 'nova service-disable' >> 2. fence (power off) the failed compute node 3. fence_compute off (waiting for nova to detect the compute node is gone) 4. fence_compute on (a no-op unless the host happens to be up already) 5. Execute 'nova service-enable' when the compute node returns >> >> Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. >> The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). >> >> Step 2 will make sure the host is completely powered off and nothing is running on the host. >> Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. 
>> >> We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. >> Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. >> The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. >> >> >> To take advantage of the VM recovery features: >> >> - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) >> - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) >> - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. >> - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) >> >> >> Detailed instructions for deploying this new model are of course available on Github: >> >> https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation >> >> It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. >> Please contact me if you encounter any issues. >> >> -- Andrew >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From ihrachys at redhat.com Thu Apr 9 11:04:50 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 09 Apr 2015 13:04:50 +0200 Subject: [Rdo-list] delorean CI Message-ID: <55265CD2.9010106@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, it's great we have CI that builds packages. I have some issues with that though: - - it's not voting, so I need to walk thru comments and mark V+1 myself; - - if it fails, it does not allow me to retrigger; please either create accounts to jenkins for all contributors, or provide some hook to retrigger; - - if it fails, it does not provide any details. Without rpm build log, I'm in darkness why it failed. I hope we can fix those issues. Thanks, /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJVJlzRAAoJEC5aWaUY1u57SqAH/i7TJMOi9ZWNINXqAIh8N9Gd vauFW4NtWy3MIAk49X3sc6vfrcuLN1wXMKjh+I4NQofT0NZDF7oIr1fahwFOoKSm qneye5nAFJ5Gbq+3UAqe+RyKhu9LKfUfnCgYvjxO9lE00CTX2qfqCH4kaN6PxoeD 6bbKOqRiKDtpVJLjxgFUeP853k3OLQlJfW75YA533TfQi6jXDEyATeKlb+P5py2k hB85M/LVAIWkDsMZ9u8cRJfyFrtZIQ588MtaivATtmGOO73lktSGPdBBWRfokPbP TuoOqPBk+mcsjhYFQlLSNP/EkETi/ju8DWOYpwiNsxrYYMSwIjJEy1wLJkYNdrE= =Bn1X -----END PGP SIGNATURE----- From kchamart at redhat.com Thu Apr 9 12:12:11 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 9 Apr 2015 14:12:11 +0200 Subject: [Rdo-list] delorean CI In-Reply-To: <55265CD2.9010106@redhat.com> References: <55265CD2.9010106@redhat.com> Message-ID: <20150409121211.GH20803@tesla.home> On Thu, Apr 09, 2015 at 01:04:50PM +0200, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Hi, > > it's great we have CI that builds packages. 
I have some issues with > that though: > > - - it's not voting, so I need to walk thru comments and mark V+1 myself; > - - if it fails, it does not allow me to retrigger; please either create > accounts to jenkins for all contributors, or provide some hook to > retrigger; > - - if it fails, it does not provide any details. Without rpm build log, > I'm in darkness why it failed. I tried to look through a failure in one of your reviews and couldn't find anything after clicking on 20 different URLs, I attributed that to my ignorance about Jenkins' UI. Thanks for confirming that it's not just me who cant' find build failures. -- /kashyap From whayutin at redhat.com Thu Apr 9 13:14:43 2015 From: whayutin at redhat.com (whayutin) Date: Thu, 09 Apr 2015 09:14:43 -0400 Subject: [Rdo-list] delorean CI In-Reply-To: <55265CD2.9010106@redhat.com> References: <55265CD2.9010106@redhat.com> Message-ID: <1428585283.2602.6.camel@redhat.com> On Thu, 2015-04-09 at 13:04 +0200, Ihar Hrachyshka wrote: > Hi, > > it's great we have CI that builds packages. I have some issues with > that though: > > - it's not voting, so I need to walk thru comments and mark V+1 myself; > - if it fails, it does not allow me to retrigger; please either create > accounts to jenkins for all contributors, or provide some hook to > retrigger; > - if it fails, it does not provide any details. Without rpm build log, > I'm in darkness why it failed. > > I hope we can fix those issues. > > Thanks, > /Ihar Which jobs are you referring to? > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ihrachys at redhat.com Thu Apr 9 13:23:57 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 09 Apr 2015 15:23:57 +0200 Subject: [Rdo-list] delorean CI In-Reply-To: <1428585283.2602.6.camel@redhat.com> References: <55265CD2.9010106@redhat.com> <1428585283.2602.6.camel@redhat.com> Message-ID: <55267D6D.8010607@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 04/09/2015 03:14 PM, whayutin wrote: > On Thu, 2015-04-09 at 13:04 +0200, Ihar Hrachyshka wrote: >> Hi, >> >> it's great we have CI that builds packages. I have some issues >> with that though: >> >> - it's not voting, so I need to walk thru comments and mark V+1 >> myself; - if it fails, it does not allow me to retrigger; please >> either create accounts to jenkins for all contributors, or >> provide some hook to retrigger; - if it fails, it does not >> provide any details. Without rpm build log, I'm in darkness why >> it failed. >> >> I hope we can fix those issues. >> >> Thanks, /Ihar > > Which jobs are you referring to? Those that report when I send a patch for neutron packaging. The CI account is called rdo-ci-jenkins, and it reports job results as in [1]. [1]: https://prod-rdojenkins.rhcloud.com/job/delorean-ci/111/ That said, now I see 'Build Artifacts' link there, where I can get the logs. So this specific issue is solved. Voting issue remains. 
/Ihar From chmouel at redhat.com Fri Apr 10 10:04:44 2015 From: chmouel at redhat.com (Chmouel Boudjnah) Date: Fri, 10 Apr 2015 12:04:44 +0200 Subject: [Rdo-list] python-openidc for keystone openidc Message-ID: Hello, I wanted to give keystone's OpenID support a try. It uses the mod_auth_openidc[1] module : http://docs.openstack.org/developer/keystone/extensions/openidc.html "Note that this module is not available on Fedora/CentOS/Red Hat." I was wondering if that was still accurate or whether there were already people working on it ? Cheers, Chmouel Footnotes: [1] https://github.com/pingidentity/mod_auth_openidc From whayutin at redhat.com Fri Apr 10 11:55:56 2015 From: whayutin at redhat.com (whayutin) Date: Fri, 10 Apr 2015 07:55:56 -0400 Subject: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo Message-ID: <1428666956.27400.28.camel@redhat.com> This is blocking everything related to kilo atm. https://bugzilla.redhat.com/show_bug.cgi?id=1210692 Thanks! From apevec at gmail.com Fri Apr 10 11:53:44 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 10 Apr 2015 13:53:44 +0200 Subject: [Rdo-list] delorean CI In-Reply-To: <55267D6D.8010607@redhat.com> References: <55265CD2.9010106@redhat.com> <1428585283.2602.6.camel@redhat.com> <55267D6D.8010607@redhat.com> Message-ID: > Those that report when I send a patch for neutron packaging. The CI > account is called rdo-ci-jenkins, and it reports job results as in [1]. > > [1]: https://prod-rdojenkins.rhcloud.com/job/delorean-ci/111/ > > That said, now I see 'Build Artifacts' link there, where I can get the > logs. So this specific issue is solved. For the record, this was fixed by https://review.gerrithub.io/229861 Cheers, Alan From mrunge at redhat.com Fri Apr 10 17:23:41 2015 From: mrunge at redhat.com (Matthias Runge) Date: Fri, 10 Apr 2015 19:23:41 +0200 Subject: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> Message-ID: <5528071D.4040709@redhat.com> On 28/03/15 01:59, Haïkel wrote: > For the record, we do provide packages for current stable Fedora and > RHEL/CentOS. > > So we will be providing Kilo on EL7 & Fedora 22 & Fedora 21 in our own > repositories if we were to continue the current setup. > The downside is that we're supposed to provide security fixes for > openstack in F22 for 13 months which > does not match with openstack lifecycle. > When Fedora 22 will be released, Juno will enter phase II of support > (6 months of security-fixes only and nothing else) > > As Fedora is going back to its usual 6 month release schedule, it's > worth considering resynchronizing our schedule. > Did we have an outcome of this? Matthias From apevec at gmail.com Fri Apr 10 17:55:13 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 10 Apr 2015 19:55:13 +0200 Subject: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo In-Reply-To: <1428666956.27400.28.camel@redhat.com> References: <1428666956.27400.28.camel@redhat.com> Message-ID: > This is blocking everything related to kilo atm. 
> https://bugzilla.redhat.com/show_bug.cgi?id=1210692 Until we get Horizon to play with Delorean trunk ( WIP https://review.gerrithub.io/229897 ) I've pushed Horizon Kilo2 builds from Rawhide to rdo-kilo repo https://bugzilla.redhat.com/show_bug.cgi?id=1210692#c18 Cheers, Alan From hguemar at fedoraproject.org Fri Apr 10 18:15:44 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Fri, 10 Apr 2015 20:15:44 +0200 Subject: [Rdo-list] python-openidc for keystone openidc In-Reply-To: References: Message-ID: 2015-04-10 12:04 GMT+02:00 Chmouel Boudjnah : > Hello, > > I wanted to have a try to the keystone's OpenID support. It uses the > mod_auth_openidc[1] module : > > http://docs.openstack.org/developer/keystone/extensions/openidc.html > > "Note that this module is not available on Fedora/CentOS/Red Hat." > > I was wondering if it was still accurate or there was already people > working on it ? > AFAIK, nobody's working on it, though it should not be a lot of work to get it packaged. Regards, H. > Cheers, > Chmouel > > Footnotes: > [1] https://github.com/pingidentity/mod_auth_openidc > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From john.haller at alcatel-lucent.com Fri Apr 10 20:12:31 2015 From: john.haller at alcatel-lucent.com (Haller, John H (John)) Date: Fri, 10 Apr 2015 20:12:31 +0000 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: References: Message-ID: <7C1824C61EE769448FCE74CD83F0CB4F5830B4C0@US70TWXCHMBA11.zam.alcatel-lucent.com> See inline > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > On Behalf Of Andrew Beekhof > Sent: Tuesday, April 07, 2015 9:13 PM > To: rdo-list at redhat.com; rhos-pgm > Cc: milind.manjrekar at redhat.com; Perry Myers; Marcos Garcia; Balaji > Jayavelu > Subject: [Rdo-list] New deployment model for HA compute nodes - now > with automated recovery of VMs > > Previously in order monitor the healthiness of compute nodes and the > services running on them, we had to create single node clusters due to > corosync's scaling limits. > We can now announce a new deployment model that allows Pacemaker to > continue this role, but presents a single coherent view of the entire > deployment while allowing us to scale beyond corosync's limits. [snip] > With these capabilities in place, we can exploit Pacemaker's node > monitoring and fencing capabilities to drive nova host-evacuate for the > failed compute nodes and recover the VMs elsewhere. > When a compute node fails, Pacemaker will: > > 1. Execute 'nova service-disable' See https://review.openstack.org/#/c/169836/ In particular, the note from Sylvain Bauza on patchset 8 about the issue when Ironic and/or VMware drivers are in use. The blueprint this review addresses is to introduce notifications similar to this. This blueprint is targeted for Liberty. If the node is down, I'm not sure that 'nova service-disable' will quickly cause the VMs running on it to disable, as any service on that node is likely already crashed, and is unlikely to let anyone know about its death. In any case, comments on the above blueprint are welcome, and the blueprint should help with this step. > 2. fence (power off) the failed compute node 3. fence_compute off > (waiting for nova to detect the compute node is gone) 4. 
fence_compute > on (a no-op unless the host happens to be up already) 5. Execute 'nova > service-enable' when the compute node returns > > -- Andrew Regards, John Haller From pgsousa at gmail.com Fri Apr 10 23:31:31 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Sat, 11 Apr 2015 00:31:31 +0100 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: References: Message-ID: Hi Andrew, I've checked your git, great work, but I'm using native toois with keepalived approach, using mmonit utility to monitor the infrastructure without pacemaker/corosync. I'm testing this approach to evacuate and disable a compute node, if something fails. What approach do you consider best, having in mind that a external monitoring tool like mmonit is not "cluster aware" and doesn't do things like fencing the dead node like pacemaker does? Thank you. On Wed, Apr 8, 2015 at 3:12 AM, Andrew Beekhof wrote: > Previously in order monitor the healthiness of compute nodes and the > services running on them, we had to create single node clusters due to > corosync's scaling limits. > We can now announce a new deployment model that allows Pacemaker to > continue this role, but presents a single coherent view of the entire > deployment while allowing us to scale beyond corosync's limits. > > Having this single administrative domain then allows us to do clever > things like automated recovery of VMs running on a failed or failing > compute node. > > The main difference with the previous deployment mode is that services on > the compute nodes are now managed and driven by the Pacemaker cluster on > the control plane. > The compute nodes do not become full members of the cluster and they no > longer require the full cluster stack, instead they run pacemaker_remoted > which acts as a conduit. > > Implementation Details: > > - Pacemaker monitors the connection to pacemaker_remoted to verify that > the node is reachable or not. > Failure to talk to a node triggers recovery action. > > - Pacemaker uses pacemaker_remoted to start compute node services in the > same sequence as before (neutron-ovs-agent -> ceilometer-compute -> > nova-compute). > > - If a service fails to start, any services that depend on the FAILED > service will not be started. > This avoids the issue of adding a broken node (back) to the pool. > > - If a service fails to stop, the node where the service is running will > be fenced. > This is necessary to guarantee data integrity and a core HA concept (for > the purposes of this particular discussion, please take this as a given). > > - If a service's health check fails, the resource (and anything that > depends on it) will be stopped and then restarted. > Remember that failure to stop will trigger a fencing action. > > - A successful restart of all the services can only potentially affect > network connectivity of the instances for a short period of time. > > With these capabilities in place, we can exploit Pacemaker's node > monitoring and fencing capabilities to drive nova host-evacuate for the > failed compute nodes and recover the VMs elsewhere. > When a compute node fails, Pacemaker will: > > 1. Execute 'nova service-disable' > 2. fence (power off) the failed compute node > 3. fence_compute off (waiting for nova to detect the compute node is gone) > 4. fence_compute on (a no-op unless the host happens to be up already) > 5. 
Execute 'nova service-enable' when the compute node returns > > Technically steps 1 and 5 are optional and they are aimed to improve user > experience by immediately excluding a failed host from nova scheduling. > The only benefit is a faster scheduling of VMs that happens during a > failure (nova does not have to recognize a host is down, timeout and > subsequently schedule the VM on another host). > > Step 2 will make sure the host is completely powered off and nothing is > running on the host. > Optionally, you can have the failed host reboot which would potentially > allow it to re-enter the pool. > > We have an implementation for Step 3 but the ideal solution depends on > extensions to the nova API. > Currently fence_compute loops, waiting for nova to recognise that the > failed host is down, before we make a host-evacuate call which triggers > nova to restart the VMs on another host. > The discussed nova API extensions will speed up recovery times by allowing > fence_compute to proactively push that information into nova instead. > > > To take advantage of the VM recovery features: > > - VMs need to be running off a cinder volume or using shared ephemeral > storage (like RBD or NFS) > - If VM is not running using shared storage, recovery of the instance on a > new compute node would need to revert to a previously stored snapshot/image > in Glance (potentially losing state, but in some cases that may not matter) > - RHEL7.1+ required for infrastructure nodes (controllers and compute). > Instance guests can run anything. > - Compute nodes need to have a working fencing mechanism (IPMI, hardware > watchdog, etc) > > > Detailed instructions for deploying this new model are of course available > on Github: > > > https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation > > It has been successfully deployed in our labs, but we'd really like to > hear how it works for you in the field. > Please contact me if you encounter any issues. > > -- Andrew > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Sat Apr 11 01:15:35 2015 From: Tim.Bell at cern.ch (Tim Bell) Date: Sat, 11 Apr 2015 01:15:35 +0000 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: References: Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E501029BD91D@CERNXCHG43.cern.ch> Andrew, How much of this is RDO specific ? It seems to be a generic approach similar to Russel Bryant?s blogs using pacemaker to get VM HA. Can these approaches be converged to come up with a community supported approach for all OpenStack configurations ? Tim From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Pedro Sousa Sent: 11 April 2015 01:32 To: Andrew Beekhof Cc: milind.manjrekar at redhat.com; rdo-list at redhat.com; rhos-pgm; Perry Myers; Marcos Garcia; Balaji Jayavelu Subject: Re: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs Hi Andrew, I've checked your git, great work, but I'm using native toois with keepalived approach, using mmonit utility to monitor the infrastructure without pacemaker/corosync. I'm testing this approach to evacuate and disable a compute node, if something fails. 
What approach do you consider best, having in mind that a external monitoring tool like mmonit is not "cluster aware" and doesn't do things like fencing the dead node like pacemaker does? Thank you. On Wed, Apr 8, 2015 at 3:12 AM, Andrew Beekhof > wrote: Previously in order monitor the healthiness of compute nodes and the services running on them, we had to create single node clusters due to corosync's scaling limits. We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits. Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node. The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. Implementation Details: - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. Failure to talk to a node triggers recovery action. - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). - If a service fails to start, any services that depend on the FAILED service will not be started. This avoids the issue of adding a broken node (back) to the pool. - If a service fails to stop, the node where the service is running will be fenced. This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. Remember that failure to stop will trigger a fencing action. - A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. When a compute node fails, Pacemaker will: 1. Execute 'nova service-disable' 2. fence (power off) the failed compute node 3. fence_compute off (waiting for nova to detect the compute node is gone) 4. fence_compute on (a no-op unless the host happens to be up already) 5. Execute 'nova service-enable' when the compute node returns Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). Step 2 will make sure the host is completely powered off and nothing is running on the host. Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. 
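In CLI terms, the loop-then-evacuate flow just described works out to roughly the following sketch (the host name, polling interval and use of the plain nova client are illustrative assumptions; the real fence_compute agent implements this logic in Python):

    #!/bin/bash
    # Hedged sketch of the fence_compute wait-then-evacuate flow.
    # FAILED_HOST is a hypothetical compute node name.
    FAILED_HOST=compute-1

    # Keep the failed host out of scheduling while it is recovered.
    nova service-disable "$FAILED_HOST" nova-compute

    # Wait until nova itself reports nova-compute on that host as down.
    until nova service-list --host "$FAILED_HOST" --binary nova-compute | grep -q down; do
        sleep 5
    done

    # Restart the instances that were running there on other hosts.
    nova host-evacuate "$FAILED_HOST"
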
The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. To take advantage of the VM recovery features: - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) Detailed instructions for deploying this new model are of course available on Github: https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. Please contact me if you encounter any issues. -- Andrew _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From Yaniv.Kaul at emc.com Sun Apr 12 11:11:20 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Sun, 12 Apr 2015 07:11:20 -0400 Subject: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo In-Reply-To: <1428666956.27400.28.camel@redhat.com> References: <1428666956.27400.28.camel@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03CC41A9B9@MX19A.corp.emc.com> I'm getting (on CentOS 7, using packstack): Error: /Stage[main]/Horizon/Package[horizon]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-dashboard' returned 1: Error: Package: openstack-dash board-2015.1-0.1.b2.el7.centos.noarch (openstack-kilo) Requires: python-oslo-concurrency Doesn't look like the same issue. This is from RDO Kilo repos. Y. > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Alan Pevec > Sent: Friday, April 10, 2015 8:55 PM > To: whayutin at redhat.com > Cc: Rdo-list at redhat.com; Alan Pevec > Subject: Re: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo > > > This is blocking everything related to kilo atm. > > https://bugzilla.redhat.com/show_bug.cgi?id=1210692 > > Until we get Horizon to play with Delorean trunk ( WIP > https://review.gerrithub.io/229897 ) I've pushed Horizon Kilo2 builds from > Rawhide to rdo-kilo repo > https://bugzilla.redhat.com/show_bug.cgi?id=1210692#c18 > > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Sun Apr 12 21:19:14 2015 From: apevec at gmail.com (Alan Pevec) Date: Sun, 12 Apr 2015 23:19:14 +0200 Subject: Re: [Rdo-list] [CI] FYI.. 
horizon deps failing/blocking kilo In-Reply-To: <648473255763364B961A02AC3BE1060D03CC41A9B9@MX19A.corp.emc.com> References: <1428666956.27400.28.camel@redhat.com> <648473255763364B961A02AC3BE1060D03CC41A9B9@MX19A.corp.emc.com> Message-ID: > I'm getting (on CentOS 7, using packstack): > Error: /Stage[main]/Horizon/Package[horizon]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-dashboard' returned 1: Error: Package: openstack-dash > board-2015.1-0.1.b2.el7.centos.noarch (openstack-kilo) > Requires: python-oslo-concurrency > > > Doesn't look like the same issue. > This is from RDO Kilo repos. rdo-release-kilo.rpm is currently still bootstrapping, you need to enable RDO Trunk aka Delorean repo. On CentOS 7 you need: 1. yum install epel-release 2. yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm 3. cd /etc/yum.repos.d; wget http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.repo Cheers, Alan From abeekhof at redhat.com Sun Apr 12 22:13:18 2015 From: abeekhof at redhat.com (Andrew Beekhof) Date: Mon, 13 Apr 2015 08:13:18 +1000 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E501029BD91D@CERNXCHG43.cern.ch> References: <5D7F9996EA547448BC6C54C8C5AAF4E501029BD91D@CERNXCHG43.cern.ch> Message-ID: 
> > The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. > The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. > > Implementation Details: > > - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. > Failure to talk to a node triggers recovery action. > > - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). > > - If a service fails to start, any services that depend on the FAILED service will not be started. > This avoids the issue of adding a broken node (back) to the pool. > > - If a service fails to stop, the node where the service is running will be fenced. > This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). > > - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. > Remember that failure to stop will trigger a fencing action. > > - A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. > > With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. > When a compute node fails, Pacemaker will: > > 1. Execute 'nova service-disable' > 2. fence (power off) the failed compute node > 3. fence_compute off (waiting for nova to detect the compute node is gone) > 4. fence_compute on (a no-op unless the host happens to be up already) > 5. Execute 'nova service-enable' when the compute node returns > > Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. > The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). > > Step 2 will make sure the host is completely powered off and nothing is running on the host. > Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. > > We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. > Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. > The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. > > > To take advantage of the VM recovery features: > > - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) > - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) > - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. 
> - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) > > > Detailed instructions for deploying this new model are of course available on Github: > > https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation > > It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. > Please contact me if you encounter any issues. > > -- Andrew > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From abeekhof at redhat.com Sun Apr 12 23:15:04 2015 From: abeekhof at redhat.com (Andrew Beekhof) Date: Mon, 13 Apr 2015 09:15:04 +1000 Subject: [Rdo-list] New deployment model for HA compute nodes - now with automated recovery of VMs In-Reply-To: References: Message-ID: <68F07E37-D1ED-4023-86DA-48B393350FEC@redhat.com> > On 11 Apr 2015, at 9:31 am, Pedro Sousa wrote: > > Hi Andrew, > > I've checked your git, great work, but I'm using native toois with keepalived approach, using mmonit utility to monitor the infrastructure without pacemaker/corosync. > > I'm testing this approach to evacuate and disable a compute node, if something fails. What approach do you consider best, having in mind that a external monitoring tool like mmonit is not "cluster aware" and doesn't do things like fencing the dead node like pacemaker does? Its the benefits of having the "cluster aware? component that make a solution like this possible. Could you achieve similar results with keepalived? Maybe, but you?re likely to end re-implementing (without a decade of bug fixes) much of the ?cluster aware? part that some people seem hell bent on avoiding ;-) > > Thank you. > > On Wed, Apr 8, 2015 at 3:12 AM, Andrew Beekhof wrote: > Previously in order monitor the healthiness of compute nodes and the services running on them, we had to create single node clusters due to corosync's scaling limits. > We can now announce a new deployment model that allows Pacemaker to continue this role, but presents a single coherent view of the entire deployment while allowing us to scale beyond corosync's limits. > > Having this single administrative domain then allows us to do clever things like automated recovery of VMs running on a failed or failing compute node. > > The main difference with the previous deployment mode is that services on the compute nodes are now managed and driven by the Pacemaker cluster on the control plane. > The compute nodes do not become full members of the cluster and they no longer require the full cluster stack, instead they run pacemaker_remoted which acts as a conduit. > > Implementation Details: > > - Pacemaker monitors the connection to pacemaker_remoted to verify that the node is reachable or not. > Failure to talk to a node triggers recovery action. > > - Pacemaker uses pacemaker_remoted to start compute node services in the same sequence as before (neutron-ovs-agent -> ceilometer-compute -> nova-compute). > > - If a service fails to start, any services that depend on the FAILED service will not be started. > This avoids the issue of adding a broken node (back) to the pool. > > - If a service fails to stop, the node where the service is running will be fenced. > This is necessary to guarantee data integrity and a core HA concept (for the purposes of this particular discussion, please take this as a given). 
> > - If a service's health check fails, the resource (and anything that depends on it) will be stopped and then restarted. > Remember that failure to stop will trigger a fencing action. > > - A successful restart of all the services can only potentially affect network connectivity of the instances for a short period of time. > > With these capabilities in place, we can exploit Pacemaker's node monitoring and fencing capabilities to drive nova host-evacuate for the failed compute nodes and recover the VMs elsewhere. > When a compute node fails, Pacemaker will: > > 1. Execute 'nova service-disable' > 2. fence (power off) the failed compute node > 3. fence_compute off (waiting for nova to detect the compute node is gone) > 4. fence_compute on (a no-op unless the host happens to be up already) > 5. Execute 'nova service-enable' when the compute node returns > > Technically steps 1 and 5 are optional and they are aimed to improve user experience by immediately excluding a failed host from nova scheduling. > The only benefit is a faster scheduling of VMs that happens during a failure (nova does not have to recognize a host is down, timeout and subsequently schedule the VM on another host). > > Step 2 will make sure the host is completely powered off and nothing is running on the host. > Optionally, you can have the failed host reboot which would potentially allow it to re-enter the pool. > > We have an implementation for Step 3 but the ideal solution depends on extensions to the nova API. > Currently fence_compute loops, waiting for nova to recognise that the failed host is down, before we make a host-evacuate call which triggers nova to restart the VMs on another host. > The discussed nova API extensions will speed up recovery times by allowing fence_compute to proactively push that information into nova instead. > > > To take advantage of the VM recovery features: > > - VMs need to be running off a cinder volume or using shared ephemeral storage (like RBD or NFS) > - If VM is not running using shared storage, recovery of the instance on a new compute node would need to revert to a previously stored snapshot/image in Glance (potentially losing state, but in some cases that may not matter) > - RHEL7.1+ required for infrastructure nodes (controllers and compute). Instance guests can run anything. > - Compute nodes need to have a working fencing mechanism (IPMI, hardware watchdog, etc) > > > Detailed instructions for deploying this new model are of course available on Github: > > https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md#compute-node-implementation > > It has been successfully deployed in our labs, but we'd really like to hear how it works for you in the field. > Please contact me if you encounter any issues. > > -- Andrew > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From apevec at gmail.com Mon Apr 13 10:46:21 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 13 Apr 2015 12:46:21 +0200 Subject: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: <5528071D.4040709@redhat.com> References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> <5528071D.4040709@redhat.com> Message-ID: 2015-04-10 19:23 GMT+02:00 Matthias Runge : > Did we had an outcome of this? Status is that we missed F22 Beta freeze on Mar 31, now fesco freeze exception is required and Kilo import is still not completed in Rawhide. 
I'm also having second thoughts after thinking about Ihar's feedback last week on IRC where he questioned whether the price of maintaining openstack-* packages in stable Fedora is worth it when nobody is actually using them. Until proven otherwise, we're assuming the typical OpenStack user on Fedora is interested in development and wants only the latest packages, so CVE fixes and upstream stable point release updates are wasted. Production users are expected to run OpenStack on the EL platform, which we provide through RDO. Instead of rushing Kilo into F22, I was thinking about the following model: (1) keep latest OpenStack release in Fedora Rawhide, starting from milestone2. Delorean stays RDO Trunk as a place for tracking changes on OpenStack master branch. (2) deprecate openstack-* in released Fedora to avoid maintenance overhead for no use (3) provide Fedora Rawhide builds for current and previous Fedora in RDO repo e.g. Kilo would be available for F22 and F21 through RDO (4) document that for production EL platform is recommended We cannot retire packages in released Fedora, so I'm not sure about the process for (2). f20 branches should definitely be closed since Havana is EOL upstream and in RDO, so there will be no further updates. When I tried fedpkg retire in openstack-keystone f20 branch, "dead.package" commit in dist-git worked[*] but then pkgdb step failed. Ideally, we would push an "empty" update which would warn the installed user base that they're running EOLed packages and should migrate to RDO, but I'm not sure there's such a mechanism in Fedora Bodhi. I'd appreciate any and all feedback on the above plan, please point out the holes it might have. If the plan makes sense, F22 would stay Juno and it would be irrelevant which OpenStack release ends up in Fedora >= 23. Cheers, Alan [*] http://pkgs.fedoraproject.org/cgit/openstack-keystone.git/commit/?h=f20&id=2fdfdd6c8eac32334c2a5018fef196b041468979 From christian at berendt.io Mon Apr 13 11:54:49 2015 From: christian at berendt.io (Christian Berendt) Date: Mon, 13 Apr 2015 13:54:49 +0200 Subject: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> <5528071D.4040709@redhat.com> Message-ID: <552BAE89.1090002@berendt.io> On 04/13/2015 12:46 PM, Alan Pevec wrote: > (2) deprecate openstack-* in released Fedora to avoid maintenance > overhead for no use I will start to work on the Installation Guide for OpenStack Kilo today. My plan was to include an update of the Installation Guide from Fedora 20 to Fedora 21. At the moment the Installation Guide is for Fedora 20. http://docs.openstack.org/juno/install-guide/install/yum/content/ Is this still necessary or can I propose to drop Fedora from the official Installation Guide? This would save us a lot of time because we do not have to test for Fedora any longer and I do not have to update the Installation Guide for Fedora 21. Christian. From christian at berendt.io Mon Apr 13 12:03:02 2015 From: christian at berendt.io (Christian Berendt) Date: Mon, 13 Apr 2015 14:03:02 +0200 Subject: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: <552BAE89.1090002@berendt.io> References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> <5528071D.4040709@redhat.com> <552BAE89.1090002@berendt.io> Message-ID: <552BB076.5040707@berendt.io> On 04/13/2015 01:54 PM, Christian Berendt wrote: > Is this still necessary or can I propose to drop Fedora from the > official Installation Guide? 
This would save us a lot of time because we > do not have to test for Fedora any longer and I do not have to update > the Installation Guide for Fedora 21. Sorry, the official Installation Guide already uses the packages provided in the RDO repository and does not use the packages included in Fedora itself. No need to drop Fedora from the official Installation Guide because the documentation is not affected by the removal of the OpenStack packages in Fedora itself. I will continue the update of the Installation Guide from Fedora 20 to 21. Christian. From ihrachys at redhat.com Mon Apr 13 13:27:56 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 13 Apr 2015 15:27:56 +0200 Subject: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> <5528071D.4040709@redhat.com> Message-ID: <552BC45C.8010503@redhat.com> On 04/13/2015 12:46 PM, Alan Pevec wrote: > 2015-04-10 19:23 GMT+02:00 Matthias Runge : >> Did we have an outcome of this? > > Status is that we missed F22 Beta freeze on Mar 31, now a FESCo > freeze exception is required and Kilo import is still not completed > in Rawhide. I'm also having second thoughts after thinking about > Ihar's feedback last week on IRC where he questioned whether the > price of maintaining openstack-* packages in stable Fedora is worth > it when nobody is actually using them. Until proven otherwise, > we're assuming the typical OpenStack user on Fedora is interested in > development and wants only the latest packages, so CVE fixes and > upstream stable point release updates are wasted. Production users > are expected to run OpenStack on the EL platform, which we provide > through RDO. > > Instead of rushing Kilo into F22, I was thinking about the > following model: (1) keep latest OpenStack release in Fedora > Rawhide, starting from milestone2. Delorean stays RDO Trunk as a > place for tracking changes on OpenStack master branch. (2) > deprecate openstack-* in released Fedora to avoid maintenance > overhead for no use (3) provide Fedora Rawhide builds for current > and previous Fedora in RDO repo e.g. Kilo would be available for > F22 and F21 through RDO (4) document that for production EL > platform is recommended > > We cannot retire packages in released Fedora, so I'm not sure > about the process for (2). f20 branches should definitely be closed > since Havana is EOL upstream and in RDO, so there will be no > further updates. If there are security bugs, we should have an ability to patch those. So -1 to retiring branches. I'm fine supporting those packages that are already released in current releases (f20 and f21) while those releases are alive, if that's a temporary transition thing. > When I tried fedpkg retire in openstack-keystone f20 branch, > "dead.package" commit in dist-git worked[*] but then pkgdb step > failed. Ideally, we would push an "empty" update which would warn > the installed user base that they're running EOLed packages and should > migrate to RDO, but I'm not sure there's such a mechanism in Fedora > Bodhi. I'd appreciate any and all feedback on the above plan, > please point out the holes it might have. If the plan makes sense, > F22 would stay Juno and it would be irrelevant which OpenStack > release ends up in Fedora >= 23. 
> > > Cheers, Alan > > [*] > http://pkgs.fedoraproject.org/cgit/openstack-keystone.git/commit/?h=f20&id=2fdfdd6c8eac32334c2a5018fef196b041468979 > > _______________________________________________ Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From Yaniv.Kaul at emc.com Mon Apr 13 14:10:39 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Mon, 13 Apr 2015 10:10:39 -0400 Subject: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo In-Reply-To: References: <1428666956.27400.28.camel@redhat.com> <648473255763364B961A02AC3BE1060D03CC41A9B9@MX19A.corp.emc.com> Message-ID: <648473255763364B961A02AC3BE1060D03CC41AB13@MX19A.corp.emc.com> > -----Original Message----- > From: Alan Pevec [mailto:apevec at gmail.com] > Sent: Monday, April 13, 2015 12:19 AM > To: Kaul, Yaniv > Cc: Rdo-list at redhat.com > Subject: Re: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo > > > I'm getting (on CentOS 7, using packstack): > > Error: /Stage[main]/Horizon/Package[horizon]/ensure: change from > > absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install > openstack-dashboard' returned 1: Error: Package: openstack-dash board- > 2015.1-0.1.b2.el7.centos.noarch (openstack-kilo) > > Requires: python-oslo-concurrency > > > > > > Doesn't look like the same issue. > > This is from RDO Kilo repos. > > rdo-release-kilo.rpm is currently still bootstrapping, you need to enable RDO > Trunk aka Delorean repo. > On CentOS 7 you need: > 1. yum install epel-release > 2. yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm > 3. cd /etc/yum.repos.d; wget > http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.repo > > > Cheers, > Alan Thanks - the good news is that it seems to work. The bad news is that the GUI looks a bit Spartan (attached screenshot)... Y. From hguemar at fedoraproject.org Mon Apr 13 15:00:02 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 13 Apr 2015 15:00:02 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150413150002.DF8DD60A958A@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-04-15 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From javier.pena at redhat.com Mon Apr 13 15:09:28 2015 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 13 Apr 2015 11:09:28 -0400 (EDT) Subject: [Rdo-list] [CI] FYI.. 
horizon deps failing/blocking kilo In-Reply-To: <648473255763364B961A02AC3BE1060D03CC41AB13@MX19A.corp.emc.com> References: <1428666956.27400.28.camel@redhat.com> <648473255763364B961A02AC3BE1060D03CC41A9B9@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03CC41AB13@MX19A.corp.emc.com> Message-ID: <722594503.14418908.1428937768552.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > -----Original Message----- > > From: Alan Pevec [mailto:apevec at gmail.com] > > Sent: Monday, April 13, 2015 12:19 AM > > To: Kaul, Yaniv > > Cc: Rdo-list at redhat.com > > Subject: Re: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo > > > > > I'm getting (on CentOS 7, using packstack): > > > Error: /Stage[main]/Horizon/Package[horizon]/ensure: change from > > > absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install > > openstack-dashboard' returned 1: Error: Package: openstack-dash board- > > 2015.1-0.1.b2.el7.centos.noarch (openstack-kilo) > > > Requires: python-oslo-concurrency > > > > > > > > > Doesn't look like the same issue. > > > This is from RDO Kilo repos. > > > > rdo-release-kilo.rpm is currently still bootsraping, you need to enable RDO > > Trunk aka Delorean repo. > > On CentOS 7 you need: > > 1. yum install epel-release > > 2. yum install > > http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm > > 3. cd /etc/yum.repos.d; wget > > http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.repo > > > > > > Cheers, > > Alan > > Thanks - the good news is that it seems to work. > The bad news is that the GUI looks a bit Spartan (attached screenshot)... The reason for this is you are missing package openstack-dashboard-theme. But don't try to install it yet, it will break Horizon due to some other issues with CSS paths. Javier > Y. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From Yaniv.Kaul at emc.com Mon Apr 13 15:11:45 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Mon, 13 Apr 2015 11:11:45 -0400 Subject: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo In-Reply-To: <722594503.14418908.1428937768552.JavaMail.zimbra@redhat.com> References: <1428666956.27400.28.camel@redhat.com> <648473255763364B961A02AC3BE1060D03CC41A9B9@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03CC41AB13@MX19A.corp.emc.com> <722594503.14418908.1428937768552.JavaMail.zimbra@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03CC41AB2F@MX19A.corp.emc.com> > -----Original Message----- > From: Javier Pena [mailto:javier.pena at redhat.com] > Sent: Monday, April 13, 2015 6:09 PM > To: Kaul, Yaniv > Cc: Alan Pevec; Rdo-list at redhat.com > Subject: Re: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo > > > > ----- Original Message ----- > > > -----Original Message----- > > > From: Alan Pevec [mailto:apevec at gmail.com] > > > Sent: Monday, April 13, 2015 12:19 AM > > > To: Kaul, Yaniv > > > Cc: Rdo-list at redhat.com > > > Subject: Re: [Rdo-list] [CI] FYI.. 
horizon deps failing/blocking > > > kilo > > > > > > > I'm getting (on CentOS 7, using packstack): > > > > Error: /Stage[main]/Horizon/Package[horizon]/ensure: change from > > > > absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y > > > > install > > > openstack-dashboard' returned 1: Error: Package: openstack-dash > > > board- 2015.1-0.1.b2.el7.centos.noarch (openstack-kilo) > > > > Requires: python-oslo-concurrency > > > > > > > > > > > > Doesn't look like the same issue. > > > > This is from RDO Kilo repos. > > > > > > rdo-release-kilo.rpm is currently still bootsraping, you need to > > > enable RDO Trunk aka Delorean repo. > > > On CentOS 7 you need: > > > 1. yum install epel-release > > > 2. yum install > > > http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm > > > 3. cd /etc/yum.repos.d; wget > > > http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.re > > > po > > > > > > > > > Cheers, > > > Alan > > > > Thanks - the good news is that it seems to work. > > The bad news is that the GUI looks a bit Spartan (attached screenshot)... > > The reason for this is you are missing package openstack-dashboard-theme. But > don't try to install it yet, it will break Horizon due to some other issues with CSS > paths. Is there a BZ I can follow up on this? Y. > > Javier > > > Y. > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com From sgordon at redhat.com Mon Apr 13 16:01:31 2015 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 13 Apr 2015 12:01:31 -0400 (EDT) Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> Message-ID: <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> FYI, interested in any further info on the below to help the docs team out. ----- Forwarded Message ----- > From: "Bernd Bausch" > To: openstack-docs at lists.openstack.org > Sent: Sunday, April 12, 2015 9:49:17 PM > Subject: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 > > In preparation for the install guide meeting on Tuesday, I would like to > share what I have been able to do so far and what problems I hit. Advice > would be welcome (I'd be happy to discuss that in the meeting): > > - There are places where the install guide content should be modified > (flagged with "CONTENT" below). What's the procedure - I file a bug and > immediately provide the fix? > - Other places look like packaging bugs; I am using a Kilo repository for > the Red Hat RDO project that is still work in progress. I think I should > leave such bugs alone for now, since they are likely to go away. Correct? > > This is my report. It's based on Matt's version of the install guide > http://docs-draft.openstack.org/92/167692/13/gate/gate-openstack-manuals-tox > -doc-publish-checkbuild/31c1ab2//publish-docs/trunk/install-guide/install/yu > m/content/index.html. > > --------------------------- > Section 2 Basic environment > --------------------------- > > openstack-selinux not found in the repositories I am using. On first look, > it seems that there is no need to install it, as rules in > /etc/selinux/targeted/contexts/files/* seem to be the same as on my Juno > installation. 
So I am brave, plan to watch the audit log and go ahead > without modifying SELinux configs. > > CONTENT: The guide lacks info about the firewall rules, except a vague > allusion in Chapter 2 Basic Environment. > Since this is Red Hat with a locked-down firewall, nothing will work without > opening ports for fundamental services (DB, RabbitMQ) and the OpenStack > services. > > My NTP server doesn't work (this has nothing to do with OpenStack). > This forum says that NTP needs to be started after DNS (???) > https://forum.zentyal.org/index.php/topic,13045.0.html > In any case, issuing a ``systemctl restart ntpd.service`` fixes the problem, > but how can it be done automatically? > > --------------------------------- > section 2, Maria DB installation: > --------------------------------- > > ``/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command > not found`` > CONTENT: The install guide doesn't say how to answer the questions of this > script. > After setting the root password on the DB, I just hit enter at each > question. > > ------------------------------------ > Section 2, Rabbit MQ installation: > ------------------------------------ > > CONTENT: The guide asks for adding a line to /etc/rabbitmq/rabbitmq.config. > Scratching my head because I don't have that file, but then I see that it > may not always exist. Perhaps this should be made clearer to accommodate > slow thinkers. > > ------------------------------- > Section 3, Identity concepts > ------------------------------- > > CONTENT: The diagram showing the process flow confuses me more than it > helps. > > -------------------------------- > Section 3, install and configure > -------------------------------- > > ``yum install openstack-keystone python-keystoneclient``: dependency > python-cryptography can't be found > > After adding this repo (found via internet search): > > [npmccallum-python-cryptography] > name=Copr repo for python-cryptography owned by npmccallum > > baseurl=https://copr-be.cloud.fedoraproject.org/results/npmccallum/python-cr > yptography/epel-7-$basearch/ > skip_if_unavailable=True > gpgcheck=1 > > gpgkey=https://copr-be.cloud.fedoraproject.org/results/npmccallum/python-cry > ptography/pubkey.gpg > enabled=1 > > it works. > This looks very much like a packaging error, and I hope it will eventually > go away. > > CONTENT (or perhaps not CONTENT): keystone.conf contains "connection = > " rather than the connection string cited in the install guide. This > may be legitimately so, in which case the guide needs to be modified, or a > packaging error. > > ------------------------------------------------------ > Section 3, create the service entity and API endpoints > ------------------------------------------------------ > > CONTENT: ``openstack`` command missing. Found in the package > python-openstackclient. > > CONTENT: ``openstack service create --type identity`` gives me: > WARNING: openstackclient.identity.v2_0.service.CreateService The > argument --type is deprecated, use service create --name type > instead. > > I don't like the openstack client, because its help facility is much > inferior to the one of the separate command line clients. Tough luck, I > guess. > > CONTENT: The relevance of the sentence "Also, OpenStack supports multiple > regions for scalability" is not clear to a first time (even n-th time) user. > > CONTENT: Why are we using API v2, not v3? Why a separate adminurl port, and > same port for internal and publicurl? Some clarification would help. 
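For reference, the non-deprecated form suggested by the warning quoted earlier, together with the Kilo-era endpoint call, looks roughly like this (hostnames are placeholders and this is a sketch of the syntax, not a verified excerpt from the guide):

    # "service create" takes the type as a positional argument and the
    # name via --name, avoiding the deprecation warning shown above.
    openstack service create --name keystone --description "OpenStack Identity" identity

    # v2-style endpoint creation in a single call; "controller" is a
    # placeholder for the identity host.
    openstack endpoint create \
      --publicurl http://controller:5000/v2.0 \
      --internalurl http://controller:5000/v2.0 \
      --adminurl http://controller:35357/v2.0 \
      --region RegionOne identity
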
> > CONTENT: I would phrase the note at the end differently, e.g. "You will > create similar endpoints for each of the other services as you install them" > > -------------------------------------------- > Section 3, Create projects, users, and roles > -------------------------------------------- > > CONTENT: Rather than saying "project (tenant)", be a bit more explicit e.g. > "project (also named "tenant" in earlier OpenStack releases)" > > CONTENT: > # openstack role add --project demo --user demo _member_ > ERROR: openstack No role with a name or ID of '_member_' exists. > I fix this by adding the _member_ role first: > # openstack role create _member_ > > -------------------------------------------- > Section 3, verify operation > -------------------------------------------- > > CONTENT: There is no /etc/keystone/keystone-paste.ini; it's now under > /usr/share/keystone. Not sure yet if this file is supposed to be modified. > It seems that all the Paste/Deploy files are now under /usr/share. > > For now, instead of changing paste.ini I just remove the admin token from > keystone.conf. > > -------------------------------------------- > Section 4, Glance install and configure > -------------------------------------------- > > ugly message when synching DB: > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/artifacts.py:20: > DeprecationWarning: The oslo namespace package is deprecated. Please use > oslo_config instead. > Not sure what to do about this. > > -------------------------------------------- > Section 4, Verify operation > -------------------------------------------- > > Major problems with glance. I am stuck with problem 3 below. > > Problem 1: > ~~~~~~~~~~ > > glance image-create fails. See also Monty Taylor's comments on the docs and > dev mailing lists. > > It turns out that I am using glance API v2, set in the rc files: > > export OS_IMAGE_API_VERSION=2 > > Glance v2 requires a quite different workflow to upload images. Setting API > version to 1 for the moment. > > Problem 2: > ~~~~~~~~~~ > > It turns out glance is not running. api.log says: > > ERROR glance.common.config [-] Unable to load glance-api-keystone > from configuration file /usr/share/glance/glance-api-dist-paste.ini. > Got: ImportError('No module named elasticsearch',) > > After pip install elasticsearch, I can start glance. > > Still getting a strange warning in api.log: > 2015-04-12 17:42:30.267 6789 WARNING oslo_config.cfg [-] Option > "username" from group "keystone_authtoken" is deprecated. Use option > "username" from group "keystone_authtoken". > > Problem 3: > ~~~~~~~~~~ > > Trying to upload an image now fails because of wrong credentials???? Haven't > resolved this yet. Any glance request is rejected with > # glance image-list > Invalid OpenStack Identity credentials. > > Glance's API log: > 2015-04-12 22:31:03.932 9048 DEBUG keystoneclient.session [-] REQ: curl -g > -i -X GET http://kilocontrol:35357 -H "Accept: application/json" -H > "User-Agent: python-keystoneclient" _http_log_request > /usr/lib/python2.7/site-packages/keystoneclient/session.py:195 > 2015-04-12 22:31:03.935 9048 WARNING > keystoneclient.auth.identity.generic.base [-] Discovering versions from the > identity service failed when creating the password plugin. Attempting to > determine version from URL. 
> Problem 2:
> ~~~~~~~~~~
>
> It turns out glance is not running. api.log says:
>
> ERROR glance.common.config [-] Unable to load glance-api-keystone
> from configuration file /usr/share/glance/glance-api-dist-paste.ini.
> Got: ImportError('No module named elasticsearch',)
>
> After pip install elasticsearch, I can start glance.
>
> Still getting a strange warning in api.log:
> 2015-04-12 17:42:30.267 6789 WARNING oslo_config.cfg [-] Option
> "username" from group "keystone_authtoken" is deprecated. Use option
> "username" from group "keystone_authtoken".
>
> Problem 3:
> ~~~~~~~~~~
>
> Trying to upload an image now fails because of wrong credentials???? Haven't
> resolved this yet. Any glance request is rejected with
> # glance image-list
> Invalid OpenStack Identity credentials.
>
> Glance's API log:
> 2015-04-12 22:31:03.932 9048 DEBUG keystoneclient.session [-] REQ: curl -g
> -i -X GET http://kilocontrol:35357 -H "Accept: application/json" -H
> "User-Agent: python-keystoneclient" _http_log_request
> /usr/lib/python2.7/site-packages/keystoneclient/session.py:195
> 2015-04-12 22:31:03.935 9048 WARNING
> keystoneclient.auth.identity.generic.base [-] Discovering versions from the
> identity service failed when creating the password plugin. Attempting to
> determine version from URL.
> 2015-04-12 22:31:03.936 9048 WARNING keystonemiddleware.auth_token [-]
> Authorization failed for token
>
> This seems to be related to this DEBUG entry in keystone.log:
> keystone.middleware.core [-] Auth token not in the request header. Will not
> build auth context. process_request
> /usr/lib/python2.7/site-packages/keystone/middleware/core.py:229
>
> I assume a misconfiguration on my side but haven't figured out what it might
> be. Need to study the nature of WSGI middleware.
>
>
> _______________________________________________
> OpenStack-docs mailing list
> OpenStack-docs at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
>

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

From lars at redhat.com  Mon Apr 13 17:58:31 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Mon, 13 Apr 2015 13:58:31 -0400
Subject: [Rdo-list] RDO bug statistics for 2015-04-13
Message-ID: <20150413175831.GA8526@redhat.com>

This email summarizes the active RDO bugs listed in the Red Hat
Bugzilla database at .

You can find an HTML version of this report online at: .

To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 151
- Fixed (MODIFIED, POST, ON_QA): 146

## Number of open bugs by component

diskimage-builder          [ 2] ++
distribution               [ 12] ++++++++++++++
dnsmasq                    [ 2] ++
instack                    [ 2] ++
instack-undercloud         [ 5] +++++
iproute                    [ 1] +
openstack-ceilometer       [ 1] +
openstack-cinder           [ 12] ++++++++++++++
openstack-foreman-inst...  [ 3] +++
openstack-horizon          [ 2] ++
openstack-ironic-disco...  [ 1] +
openstack-keystone         [ 2] ++
openstack-neutron          [ 7] ++++++++
openstack-nova             [ 14] ++++++++++++++++
openstack-packstack        [ 34] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules   [ 6] +++++++
openstack-selinux          [ 8] +++++++++
openstack-swift            [ 3] +++
openstack-tripleo          [ 9] ++++++++++
openstack-tripleo-heat...  [ 1] +
openstack-tripleo-imag...  [ 2] ++
openstack-tuskar           [ 1] +
openstack-utils            [ 2] ++
openvswitch                [ 2] ++
python-glanceclient        [ 1] +
python-heatclient          [ 1] +
python-keystonemiddleware  [ 1] +
python-neutronclient       [ 1] +
python-novaclient          [ 1] +
python-openstackclient     [ 1] +
python-tuskarclient        [ 2] ++
rdo-manager                [ 3] +++
rdo-manager-cli            [ 2] ++
rdopkg                     [ 1] +
RFEs                       [ 2] ++
tempest                    [ 1] +

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state
NEW, ASSIGNED, ON_DEV and has not yet been fixed.
(151 bugs) ### diskimage-builder (2 bugs) [1176269 ] http://bugzilla.redhat.com/1176269 (NEW) Component: diskimage-builder Last change: 2015-01-08 Summary: rhel-common element attempts to install rhel-7-server on RHEL 6 image [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change ### distribution (12 bugs) [999587 ] http://bugzilla.redhat.com/999587 (ASSIGNED) Component: distribution Last change: 2015-01-07 Summary: sos report tracker bug [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-03-27 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO: Packages needed to support AMQP1.0 [1116972 ] http://bugzilla.redhat.com/1116972 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO website: libffi-devel is required to run Tempest (at least on CentOS 6.5) [1116974 ] http://bugzilla.redhat.com/1116974 (NEW) Component: distribution Last change: 2015-03-20 Summary: Running Tempest according to the instructions @ RDO website fails with missing tox.ini error [1116975 ] http://bugzilla.redhat.com/1116975 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO website: configuring TestR according to website, breaks Tox completely [1117007 ] http://bugzilla.redhat.com/1117007 (NEW) Component: distribution Last change: 2015-03-20 Summary: RDO website: newer python-nose is required to run Tempest (at least on CentOS 6.5) [update to http://open stack.redhat.com/Testing_IceHouse_using_Tempest] [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2014-12-22 Summary: [TripleO] Provisioning Images filter doesn't work [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2014-12-22 Summary: [TripleO] text of uninitialized deployment needs rewording [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-03-19 Summary: SSL supports only broken crypto [1187309 ] http://bugzilla.redhat.com/1187309 (NEW) Component: distribution Last change: 2015-03-20 Summary: New package - python-cliff-tablib [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-03-29 Summary: Tracking bug for bugs that Lars is interested in ### dnsmasq (2 bugs) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2014-12-18 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) [1188423 ] http://bugzilla.redhat.com/1188423 (NEW) Component: dnsmasq Last change: 2015-03-22 Summary: RHEL / Centos 7-based instances lose their default IPv4 gateway ### instack (2 bugs) [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-03-17 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-03-12 Summary: instack-update-overcloud fails because it tries to access non-existing files ### instack-undercloud (5 bugs) [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: 
instack-undercloud Last change: 2015-03-29 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-01-08 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-03-19 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-03-17 Summary: missing dependency on which [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-04-11 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-02-23 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (1 bug) [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions ### openstack-cinder (12 bugs) [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-03-27 Summary: Configuration file in share forces ignore of auth_uri [1149064 ] http://bugzilla.redhat.com/1149064 (NEW) Component: openstack-cinder Last change: 2014-12-12 Summary: Fail to delete cinder volume on Centos7, using RDO juno [1157939 ] http://bugzilla.redhat.com/1157939 (NEW) Component: openstack-cinder Last change: 2014-10-28 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-cinder Last change: 2014-10-28 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume ### 
openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-02-20 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-03-18 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-04-09 Summary: support the ldap user_enabled_invert parameter ### openstack-horizon (2 bugs) [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-horizon Last change: 2014-10-24 Summary: Permissions issue prevents CSS from rendering [1210821 ] http://bugzilla.redhat.com/1210821 (NEW) Component: openstack-horizon Last change: 2015-04-10 Summary: horizon should be using rdo logo instead of openstack's ### openstack-ironic-discoverd (1 bug) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour ### openstack-keystone (2 bugs) [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2014-11-26 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM ### openstack-neutron (7 bugs) [986507 ] http://bugzilla.redhat.com/986507 (ASSIGNED) Component: openstack-neutron Last change: 2015-03-18 Summary: RFE: IPv6 Feature Parity [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1149504 ] http://bugzilla.redhat.com/1149504 (NEW) Component: openstack-neutron Last change: 2014-10-12 Summary: Instances won't obtain IPv6 address and gateway when using SLAAC provided by OpenStack [1149505 ] http://bugzilla.redhat.com/1149505 (NEW) Component: openstack-neutron Last change: 2014-10-12 Summary: Instances won't obtain IPv6 address and gateway when using Stateful DHCPv6 provided by OpenStack [1159733 ] http://bugzilla.redhat.com/1159733 (NEW) Component: openstack-neutron Last change: 2015-03-22 Summary: no ports available when associating floating ips to new instance [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false ### openstack-nova (14 bugs) [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-03-26 Summary: Ensure translations are installed correctly and picked up at runtime [1123298 ] http://bugzilla.redhat.com/1123298 (NEW) Component: openstack-nova Last change: 2015-04-03 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2014-10-01 Summary: nova: fail to edit project quota with DataError from nova [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) 
Component: openstack-nova Last change: 2014-10-06 Summary: nova object store allow get object after date exires [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2014-12-15 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2014-10-27 Summary: v4-fixed-ip= not working with juno nova networking [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2014-11-09 Summary: novnc init script doesnt write to log [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1189347 ] http://bugzilla.redhat.com/1189347 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-03-20 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version ### openstack-packstack (34 bugs) [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-03-18 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-03-23 Summary: [RFE] Include Fedora cloud images in some nice way [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-03-18 Summary: API services has all admin permission instead of service [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-03-20 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2014-06-11 Summary: [RFE] SPICE support in packstack [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-04-01 Summary: Packstack missing ML2 Mellanox Mechanism Driver [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2014-12-18 Summary: Offset Swift ports to 6200 [1141608 ] http://bugzilla.redhat.com/1141608 (NEW) Component: openstack-packstack Last change: 2015-03-30 Summary: PackStack sets unrecognized "net.bridge.bridge-nf- call*" keys on up to date CentOS 6 [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2014-10-02 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1153128 ] http://bugzilla.redhat.com/1153128 (NEW) Component: openstack-packstack Last change: 2014-11-21 Summary: Cannot start 
nova-network on juno - Centos7 [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2014-11-21 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/172.16.32.71_mariadb.pp:28 on node [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-01-25 Summary: rabbitmq wont start if ssl is required [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2014-12-18 Summary: centos7 fails to install glance [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-01-03 Summary: Error: service-update is not currently supported by the keystone sql driver [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-03-19 Summary: misleading exit message on fail [1174749 ] http://bugzilla.redhat.com/1174749 (NEW) Component: openstack-packstack Last change: 2014-12-17 Summary: Failed to start httpd service on Fedora 20 (with packstack utility) [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2014-12-21 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2014-12-23 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-03-19 Summary: packstack --allinone fails when starting neutron server [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-01-25 Summary: glance provision disregards keystone region setting [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-01-30 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-02-13 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-03-17 Summary: Packstack generates invalid /etc/sysconfig/network-scripts/ifcfg-br-ex [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-03-18 Summary: Using packstack deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50, encounters an error: ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp
[1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-03-18 Summary: "private" network created by packstack is not owned by any tenant [1205772 ] http://bugzilla.redhat.com/1205772 (NEW) Component: openstack-packstack Last change: 2015-03-25 Summary: support the ldap user_enabled_invert parameter [1205912 ] http://bugzilla.redhat.com/1205912 (NEW) Component: openstack-packstack Last change: 2015-03-26 Summary: allow to specify admin name and email [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-03-26 Summary: provision_glance does not honour proxy setting when getting image [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-03-30 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-04-08 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-03-30 Summary: auto enablement of the extras channel [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-04-02 Summary: packstack --allinone fails during _keystone.pp [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-04-03 Summary: add DiskFilter to scheduler_default_filters [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-04-06 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] ### openstack-puppet-modules (6 bugs) [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-02-01 Summary: Offset Swift ports to 6200 [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-02-15 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2014-10-22 Summary: Increase the rpc_thread_pool_size [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-03-13 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-02-13 Summary: Add puppet-openstack_extras to opm [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-02-13 Summary: Add puppet-tripleo and puppet-gnocchi to opm ### openstack-selinux (8 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: "glance image-list" fails on F21, causing packstack install to fail [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 ### openstack-swift (3 bugs) [1117012 ] http://bugzilla.redhat.com/1117012 (NEW) Component: openstack-swift Last change: 2015-03-30 Summary: openstack-swift-proxy depends on openstack-swift- plugin-swift3 [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (9 bugs) [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1162333 ] http://bugzilla.redhat.com/1162333 (NEW) Component: openstack-tripleo Last change: 2015-01-08 Summary: Instack fails to complete instack-virt-setup with syntax error near unexpected token `newline' [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) 
Component: openstack-tripleo Last change: 2015-01-08 Summary: User can not login into the overcloud horizon using the proper credentials [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-01-31 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-03-25 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos ### openstack-tripleo-heat-templates (1 bug) [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-03-22 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-01-29 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-02-28 Summary: mariadb my.cnf socket path does not exist ### openstack-tuskar (1 bug) [1210223 ] http://bugzilla.redhat.com/1210223 (NEW) Component: openstack-tuskar Last change: 2015-04-10 Summary: Updating the controller count to 3 fails ### openstack-utils (2 bugs) [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2014-11-07 Summary: Can't enable OpenStack service after openstack-service disable [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-03-12 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (2 bugs) [1209003 ] http://bugzilla.redhat.com/1209003 (NEW) Component: openvswitch Last change: 2015-04-11 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity [1211072 ] http://bugzilla.redhat.com/1211072 (NEW) Component: openvswitch Last change: 2015-04-12 Summary: openvswitch 2.3.1 package missing from rdo juno epel-7 repo ### python-glanceclient (1 bug) [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-heatclient (1 bug) [1205675 ] http://bugzilla.redhat.com/1205675 (NEW) Component: python-heatclient Last change: 2015-04-09 Summary: When passing --pre-create to the heat stack-create command, the command is ignored ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-03-02 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (1 bug) [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-04-03 Summary: Missing versioned dependency on python-novaclient ### python-openstackclient (1 bug) [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-03-04 Summary: Add --user to project list command to filter projects by user ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (ASSIGNED) Component: python-tuskarclient Last change: 
2015-04-07 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (NEW) Component: python-tuskarclient Last change: 2015-04-07 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (3 bugs) [1209341 ] http://bugzilla.redhat.com/1209341 (NEW) Component: rdo-manager Last change: 2015-04-08 Summary: [RFE] Missing --help for unified CLI commands [1209908 ] http://bugzilla.redhat.com/1209908 (NEW) Component: rdo-manager Last change: 2015-04-08 Summary: [RFE] Add ability to create images in the GUI [1211069 ] http://bugzilla.redhat.com/1211069 (NEW) Component: rdo-manager Last change: 2015-04-12 Summary: [RFE] Add possibility to kill node discovery ### rdo-manager-cli (2 bugs) [1209153 ] http://bugzilla.redhat.com/1209153 (NEW) Component: rdo-manager-cli Last change: 2015-04-07 Summary: ERROR: openstack string indices must be integers [1211190 ] http://bugzilla.redhat.com/1211190 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-04-13 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (2 bugs) [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-03-30 Summary: [RFE] Provide easy to use upgrade tool [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot ### tempest (1 bug) [1154633 ] http://bugzilla.redhat.com/1154633 (NEW) Component: tempest Last change: 2014-10-20 Summary: Tempest_config failure (RDO, Juno, CentOS 7 - heat related?) ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(146 bugs) ### distribution (3 bugs) [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2014-09-19 Summary: update el6 icehouse kombu packages for improved performance [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2014-10-16 Summary: Tuskar Fails After Remove/Reinstall Of RDO [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr ### instack-undercloud (1 bug) [1204126 ] http://bugzilla.redhat.com/1204126 (POST) Component: instack-undercloud Last change: 2015-03-26 Summary: [RFE] Enable deployment of Ceph via instack ### openstack-ceilometer (2 bugs) [1001832 ] http://bugzilla.redhat.com/1001832 (MODIFIED) Component: openstack-ceilometer Last change: 2014-01-13 Summary: sos report tracker bug - Ceilometer [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (6 bugs) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [999651 ] http://bugzilla.redhat.com/999651 (POST) Component: openstack-cinder Last change: 2014-03-17 Summary: splitting the sos report to modules - Cinder [1007515 ] http://bugzilla.redhat.com/1007515 (MODIFIED) Component: openstack-cinder Last change: 2013-11-27 Summary: cinder [Havana]: we try to create a backup although cinder-backup service is down [1010039 ] http://bugzilla.redhat.com/1010039 (MODIFIED) Component: openstack-cinder Last change: 2013-11-27 Summary: Grizzly -> Havana upgrade fails during db_sync [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) ### openstack-glance (4 bugs) [999653 ] http://bugzilla.redhat.com/999653 (POST) Component: openstack-glance Last change: 2015-01-07 Summary: splitting the sos report to modules - Glance [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue ### openstack-heat (1 bug) [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in 
protocol v3 ### openstack-neutron (14 bugs) [999660 ] http://bugzilla.redhat.com/999660 (POST) Component: openstack-neutron Last change: 2015-02-01 Summary: splitting the sos report to modules - Neutron [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service ### openstack-nova (2 bugs) [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. 
[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail ### openstack-packstack (55 bugs) [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install succeeds even when puppet completely fails [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-02-01 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-01-07 Summary: please give greater control over use of EPEL [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-01-07 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: rdo release RPM not installed on all fedora hosts [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: Warning message for installing RDO kernel needs to be adjusted [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2014-10-28 Summary: RFE: support setting up apache to serve keystone requests [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2014-04-21 Summary: openstack-dashboard django dependency conflict stops packstack execution [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2014-04-29 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-02-01 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2014-04-29 Summary: packstack reports installation completed successfully but nothing installed [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2014-02-05 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-02-20 Summary: Packstack creates duplicate cirros images in glance [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2014-10-02 Summary: Packstack neutron plugin does not check if Nova is disabled [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: qpid should enable SSL [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2014-08-19 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2014-11-26 Summary: packstack requires 2 runs to install ceilometer [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2014-12-01 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2014-10-02 Summary: packstack fails if iptables.service is not available [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2014-06-02 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2014-10-02 Summary: Dashboard port firewall rule is not permanent [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2014-04-14 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-03-13 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2014-03-06 Summary: Change packstack to use openstack-puppet-modules [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 
2014-04-14 Summary: Fedora20: packstack gives traceback when SElinux permissive [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2014-03-25 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2014-05-13 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2014-06-23 Summary: Havana Fedora 19, packstack fails w/ mysql error [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2014-12-19 Summary: packstack package should depend on yum-utils [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-03-29 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2014-06-17 Summary: el7 Icehouse: Nagios installation fails [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-03-13 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2014-11-25 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-03-13 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [1139246 ] http://bugzilla.redhat.com/1139246 (POST) Component: openstack-packstack Last change: 2014-09-12 Summary: Refactor cinder plugin to support multiple cinder backends [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2014-10-27 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2014-10-31 Summary: packstack icehouse doesn't install anything because of repo [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-03-11 Summary: packstack fails on centos6 with missing systemctl [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-02-24 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-02-24 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2014-12-19 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-03-29 Summary: RabbitMQ fails to start if configured with ssl ### openstack-puppet-modules (16 bugs) [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-02-05 Summary: explicit check for pymongo is incorrect [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-02-05 Summary: cinder modules require glance installed [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-02-15 Summary: horizon log errors [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-02-05 Summary: netns.py syntax error [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-06-16 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-04-08 Summary: prescript.pp does not ensure iptables-services package installation [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-12 Summary: Horizon help url in RDO points to the RHOS documentation [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-04-24 Summary: prescript puppet - missing dependency package iptables- services [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-10-01 Summary: swift.pp: Could not find command 'restorecon' [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-02-17 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-01-21 Summary: packstack chokes on ironic - centos7 + juno [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: 
openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1205757 ] http://bugzilla.redhat.com/1205757 (POST) Component: openstack-puppet-modules Last change: 2015-04-06 Summary: puppet-keystone support the ldap user_enabled_invert parameter [1207701 ] http://bugzilla.redhat.com/1207701 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-04-03 Summary: Unable to attach cinder volume to instance ### openstack-sahara (1 bug) [1184522 ] http://bugzilla.redhat.com/1184522 (MODIFIED) Component: openstack-sahara Last change: 2015-03-27 Summary: launch_command.py missing ### openstack-selinux (12 bugs) [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1093297 ] http://bugzilla.redhat.com/1093297 (POST) Component: openstack-selinux Last change: 2014-05-15 Summary: selinux AVC RHEL7 and RDO - Neutron [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later ### openstack-swift (2 bugs) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages [999662 ] http://bugzilla.redhat.com/999662 (POST) Component: openstack-swift Last change: 2015-01-07 Summary: splitting the sos report to modules - Swift ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-02-09 Summary: Registering nodes with the IPMI driver always fails [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-03-23 Summary: The displayed horizon url after deployment has a redundant colon in it 
and a wrong path [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-03-23 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py ### openstack-utils (1 bug) [1090648 ] http://bugzilla.redhat.com/1090648 (POST) Component: openstack-utils Last change: 2014-05-21 Summary: glance-manage db_sync silently fails to prepare the database ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-openstack-auth (1 bug) [985570 ] http://bugzilla.redhat.com/985570 (ON_QA) Component: python-django-openstack-auth Last change: 2013-07-18 Summary: Please upgrade to 1.0.9 or better ### python-glanceclient (2 bugs) [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-01-07 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-01-07 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2014-01-13 Summary: keystone missing tab completion ### python-neutronclient (3 bugs) [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() ### python-novaclient (2 bugs) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-01-07 Summary: nova commands fail with gnomekeyring IOError [1001107 ] http://bugzilla.redhat.com/1001107 (MODIFIED) Component: python-novaclient Last change: 2013-09-04 Summary: Please upgrade to 2.14.1 ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ]
http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2014-06-17 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-quantumclient (1 bug) [989789 ] http://bugzilla.redhat.com/989789 (MODIFIED) Component: python-quantumclient Last change: 2013-07-30 Summary: warnings about missing editors ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### rdo-manager (1 bug) [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-08 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails ### rdopkg (1 bug) [1127309 ] http://bugzilla.redhat.com/1127309 (POST) Component: rdopkg Last change: 2014-09-01 Summary: rdopkg version 0.18 fails to find rdoupdate.bsources.koji_ -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lars at redhat.com Mon Apr 13 18:53:56 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 13 Apr 2015 14:53:56 -0400 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> Message-ID: <20150413185356.GB8526@redhat.com> > > openstack-selinux not found in the repositories I am using. openstack-selinux is generally available for RHEL and CentOS, but not for Fedora. On Fedora in particular, selinux related issues are supposed to be reported directly against selinux-policy. When using RDO Kilo with CentOS 7, the openstack-selinux package comes from the rdo-juno repository. Note that installing https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-0.noarch.rpm configures two repositories on your system: [openstack-juno] name=OpenStack Juno Repository baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ enabled=1 skip_if_unavailable=0 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno [openstack-kilo] name=Temporary OpenStack Kilo new deps baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-kilo/epel-7/ skip_if_unavailable=0 gpgcheck=0 enabled=1 I notice that the draft Kilo install guide still points at the Juno packages: > Install the rdo-release-juno package to enable the RDO repository: > > # yum install > # http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm [from http://docs-draft.openstack.org/92/167692/13/gate/gate-openstack-manuals-tox-doc-publish-checkbuild/31c1ab2//publish-docs/trunk/install-guide/install/yum/content/ch_basic_environment.html#basics-prerequisites] > > it seems that there is no need to install it, as rules in > > /etc/selinux/targeted/contexts/files/* seem to be the same as on my Juno > > installation. So I am brave, plan to watch the audit log and go ahead > > without modifying SELinux configs. As of last week, there were still selinux bugs open against RDO Juno. I'm not sure what the state of the Kilo packages are at this opint, but it seems likely that there may be issues there as well. 
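Since which repository a package comes from determines which of the bugs above apply, here is a quick sanity check for anyone following along. This is a sketch using standard yum tooling; the package name matches the thread, everything else is generic:

    # List the enabled repositories on the host
    yum repolist enabled

    # Show which repo (and version of) openstack-selinux would be installed
    yum info openstack-selinux

    # Or, with the yum-utils package installed:
    repoquery -i openstack-selinux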
> > My NTP server doesn't work (this has nothing to do with OpenStack).
> > This forum says that NTP needs to be started after DNS (???)
> > https://forum.zentyal.org/index.php/topic,13045.0.html
> > In any case, issuing a ``systemctl restart ntpd.service`` fixes the problem,
> > but how can it be done automatically?

If you're seeing this on a RHEL or Fedora system, you should open a bug in bugzilla so we can track this issue and maybe come up with an appropriate solution.

> > ------------------------------------
> > Section 2, Rabbit MQ installation:
> > ------------------------------------
> >
> > CONTENT: The guide asks for adding a line to /etc/rabbitmq/rabbitmq.config.
> > Scratching my head because I don't have that file, but then I see that it
> > may not always exist. Perhaps this should be made clearer to accommodate
> > slow thinkers.

There's a bug open on that for RHEL: https://bugzilla.redhat.com/show_bug.cgi?id=1134956

We should probably clone that to RDO as well.

> > ``yum install openstack-keystone python-keystoneclient``: dependency
> > python-cryptography can't be found
> >
> > After adding this repo (found via internet search):
> >
> > [npmccallum-python-cryptography]
> > name=Copr repo for python-cryptography owned by npmccallum
> [...]
> > This looks very much like a packaging error, and I hope it will eventually
> > go away.

You shouldn't require COPR repositories for anything. If you encounter a repeatable packaging error, make sure to open a bugzilla so that folks are aware of the issue. On CentOS 7 right now, I am able to install both python-keystoneclient and openstack-keystone without any errors, using only the base, RDO, and EPEL repositories.

> > CONTENT: Why are we using API v2, not v3? Why a separate adminurl port, and
> > same port for internal and publicurl? Some clarification would help.

I suspect the answer to all of the above is, "because legacy". v3 support has only recently been showing up in all of the services, and many folks still aren't familiar with the newer APIs. The admin/non-admin port separation is another historical oddity that we have to live with. With Keystone v2, at least, there are some features only available through the admin api on the admin port.

> > Major problems with glance. I am stuck with problem 3 below.
> > ERROR glance.common.config [-] Unable to load glance-api-keystone
> > from configuration file /usr/share/glance/glance-api-dist-paste.ini.
> > Got: ImportError('No module named elasticsearch',)

There is a known problem that has been corrected in the latest Kilo packages. There was no corresponding bz filed (via apevec, irc).

> > Trying to upload an image now fails because of wrong credentials???? Haven't
> > resolved this yet. Any glance request is rejected with
> > # glance image-list
> > Invalid OpenStack Identity credentials.

Debugging this will probably require someone looking over your shoulder at your glance configuration.

-- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/
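A note on the missing /etc/rabbitmq/rabbitmq.config discussed above: if the package did not ship one, the file can simply be created. A minimal sketch, assuming the line being added is the loopback_users setting from the install guide (verify the exact contents against the guide version you are following):

    # Create a minimal config; the contents below are an assumption based on
    # what the Juno-era install guide adds, not something the RPM ships.
    cat > /etc/rabbitmq/rabbitmq.config <<'EOF'
    [{rabbit, [{loopback_users, []}]}].
    EOF
    systemctl restart rabbitmq-server.service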
From apevec at gmail.com Mon Apr 13 21:30:41 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 13 Apr 2015 23:30:41 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <20150413185356.GB8526@redhat.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> Message-ID:

> https://bugzilla.redhat.com/show_bug.cgi?id=1134956
>
> We should probably clone that to RDO as well.

rabbitmq-server comes from EPEL, so I've cloned it to EPEL7: https://bugzilla.redhat.com/show_bug.cgi?id=1211394

From mohammed.arafa at gmail.com Mon Apr 13 21:44:03 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 13 Apr 2015 17:44:03 -0400 Subject: Re: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: <552BC45C.8010503@redhat.com> References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> <5528071D.4040709@redhat.com> <552BC45C.8010503@redhat.com> Message-ID:

Is there no way to identify how many times the openstack packages have been downloaded? With a life cycle of 18 months max, I cannot see any sane organisation ... I take it back; I have seen "sane" organisations deploy Fedora in production, and large ones too. In that case, why not make f21 the last version?

My 0.02$

Thanks

On Apr 13, 2015 9:29 AM, "Ihar Hrachyshka" wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 04/13/2015 12:46 PM, Alan Pevec wrote:
> > 2015-04-10 19:23 GMT+02:00 Matthias Runge :
> >> Did we have an outcome of this?
> >
> > Status is that we missed F22 Beta freeze on Mar 31, now a fesco
> > freeze exception is required and the Kilo import is still not completed
> > in Rawhide. I'm also having second thoughts after thinking about
> > Ihar's feedback last week on IRC, where he questioned whether the
> > price of maintaining openstack-* packages in stable Fedora is worth
> > it when nobody is actually using them. Until proven otherwise,
> > we're assuming the typical OpenStack user on Fedora is interested in
> > development and wants only the latest packages, so CVE fixes and
> > upstream stable point release updates are wasted. Production users
> > are expected to run OpenStack on the EL platform, which we provide
> > through RDO.
> >
> > Instead of rushing Kilo into F22, I was thinking about the following model:
> > (1) keep the latest OpenStack release in Fedora Rawhide, starting from milestone2. Delorean stays RDO Trunk as a place for tracking changes on the OpenStack master branch.
> > (2) deprecate openstack-* in released Fedora to avoid maintenance overhead for no use
> > (3) provide Fedora Rawhide builds for current and previous Fedora in the RDO repo, e.g. Kilo would be available for F22 and F21 through RDO
> > (4) document that for production the EL platform is recommended
> >
> > We cannot retire packages in released Fedora, so I'm not sure
> > about the process for (2). f20 branches should definitely be closed
> > since Havana is EOL upstream and in RDO, so there will be no
> > further updates.
>
> If there are security bugs, we should have an ability to patch those.
> So -1 to retiring branches.
>
> I'm fine supporting those packages that are already released in
> current releases (f20 and f21) while those releases are alive, if
> that's a temporary transition thing.
> > When I tried fedpkg retire in openstack-keystone f20 branch, the
> > "dead.package" commit in dist-git worked[*] but then the pkgdb step
> > failed. Ideally, we would push an "empty" update which would warn
> > the installed user base that they're running EOLed packages and should
> > migrate to RDO, but I'm not sure there's such a mechanism in Fedora
> > Bodhi. I'd appreciate any and all feedback on the above plan;
> > please point out the holes it might have. If the plan makes sense,
> > F22 would stay Juno and it would be irrelevant which OpenStack
> > release ends up in Fedora >= 23.
> >
> > Cheers, Alan
> >
> > [*] http://pkgs.fedoraproject.org/cgit/openstack-keystone.git/commit/?h=f20&id=2fdfdd6c8eac32334c2a5018fef196b041468979
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1
>
> iQEcBAEBAgAGBQJVK8RaAAoJEC5aWaUY1u57jA4H/iy0w5DRJ8RsebOm6mPhGeRj
> SCBppYcukqh9zZTAuULHq7RNlIf633hQGiniDgK5f2DhspW4Tpc7f6WqYRDRq7Fy
> jTy+5Gnf80Gq3sulB3lq04fKCGeuAcmp9CulnVlxDtBP3hsvpJl50A1diOXamXvz
> 4M4ng9w3EI2iBm64vcVLUmQSBlGxwiMO++8y77NPIRT4MJsBl4JmAyb9ax09c0m8
> kzbbk07v4Orm2UkHmp6RBMj1TvvNETyYG+aCfZld5og0P0s9rdWVJbcNa7y+Fsxx
> i1kliZuB6QRgoNuMzfazoUEIxQbrOAyIzB6W9N6srPXHs1PuV+nckfIugS7ZeuU=
> =YXaC
> -----END PGP SIGNATURE-----
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ayoung at redhat.com Mon Apr 13 20:44:07 2015 From: ayoung at redhat.com (Adam Young) Date: Mon, 13 Apr 2015 16:44:07 -0400 Subject: [Rdo-list] Systemd extensions for HTTPD hosted applications Message-ID: <552C2A97.401@redhat.com>

Discussing offline versus online compression in #rdo.

What do we do now? It's ugly. The compression is performed at RPM build time. This is ugly, because the Javascript files it is compressing come from other RPMs. So, if you update an RPM that ships newer javascript, the compressed output will not pick up the change, and will show the old code.

What we want is to specify that the compression script runs before Horizon starts. While we know we need to optimize the script to keep restart times down, that is an issue that needs to be solved upstream as well. Let's assume for the moment that we will always run it.

The compression needs to be done on the system, but should not be done by the HTTPD daemon itself; static files should be owned by a user other than the one that runs HTTPD. The right answer seems to be systemd, since we use systemd to restart httpd. We should be able to indicate that it needs to run the compression script. We don't want to create a separate service for openstack-dashboard, though; the service is HTTPD.

Steve Gallagher was kind enough to walk me through the basics. He pointed me to what reviewboard (another mod_wsgi app) does. It installs a file under /usr/lib/systemd/system/httpd.service.d named reviewboard-sites.conf.
It looks like this:

[Service]
ExecStartPre=/usr/bin/rb-site upgrade --all-sites

[Unit]
After=postgresql.service mariadb.service mysql.service memcached.service

(visible at http://pkgs.fedoraproject.org/cgit/ReviewBoard.git/tree/reviewboard-sites.conf?h=f21)

so for horizon:

ExecStartPre=python ${horizon_path}/manage.py compress --force-if-not-fresh

I think this should be the pattern for all of the HTTPD hosted services. We should do this with Keystone next.

From Yaniv.Kaul at emc.com Tue Apr 14 06:48:35 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 14 Apr 2015 02:48:35 -0400 Subject: Re: [Rdo-list] Systemd extension for HTTPD hosted applications In-Reply-To: <552C40D5.6030506@redhat.com> References: <552C40D5.6030506@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03CC41AB88@MX19A.corp.emc.com>

Can you compress at RPM install time?
Y.
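To make Adam's proposal concrete, a complete drop-in for the dashboard might look like the following. This is a sketch only: the drop-in file name and the manage.py path are assumptions about the packaged layout, and compress --force stands in for the smarter --force-if-not-fresh behaviour Adam describes, which does not exist yet:

    # Sketch: make httpd recompress Horizon's static assets on every (re)start.
    mkdir -p /etc/systemd/system/httpd.service.d
    cat > /etc/systemd/system/httpd.service.d/openstack-dashboard.conf <<'EOF'
    [Service]
    ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py compress --force

    [Unit]
    After=mariadb.service memcached.service
    EOF
    # Drop-ins are only read after a daemon-reload
    systemctl daemon-reload
    systemctl restart httpd.service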
From mrunge at redhat.com Tue Apr 14 06:48:54 2015 From: mrunge at redhat.com (Matthias Runge) Date: Tue, 14 Apr 2015 08:48:54 +0200 Subject: Re: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> <5528071D.4040709@redhat.com> Message-ID: <552CB856.8000208@redhat.com>

On 13/04/15 12:46, Alan Pevec wrote:
> 2015-04-10 19:23 GMT+02:00 Matthias Runge :
> Instead of rushing Kilo into F22, I was thinking about the following model:
> (1) keep latest OpenStack release in Fedora Rawhide, starting from milestone2.
> Delorean stays RDO Trunk as a place for tracking changes on
> OpenStack master branch.

Horizon will be broken in F22 unless we push Kilo packages. I have no intention of backporting Django-1.8 support to Juno.

I'd even go the route of pushing Horizon in its Kilo version to f22, even if the rest of the stack stays on Juno. In theory that should work, as it did on my dev platform (using a git checkout and a remote RHOS-6 installation).

Matthias

From phaurep at gmail.com Tue Apr 14 07:48:43 2015 From: phaurep at gmail.com (pauline phaure) Date: Tue, 14 Apr 2015 09:48:43 +0200 Subject: [Rdo-list] Problems with Openstack installation on CentOS 7 Message-ID:

Hi everyone,

I recently installed OpenStack with RDO packstack on two servers. On each server I have two interfaces, eth0 and eth1, and each of these interfaces is on a separate VLAN. As the VMs spawned by Nova couldn't get an IP address, I saw in a tutorial that I should edit the files ifcfg-br-ex and ifcfg-br-int, and when I did I lost my connection and could no longer ssh to my servers.

Do you have any idea how I can solve this?

Thank you in advance,
Pauline
From mangelajo at redhat.com Tue Apr 14 08:00:42 2015 From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo) Date: Tue, 14 Apr 2015 10:00:42 +0200 Subject: Re: [Rdo-list] Problems with Openstack installation on CentOS 7 In-Reply-To: References: Message-ID: <35996E91-ADF5-4A06-A6D4-C4B3AC669D22@redhat.com>

Hi Pauline,

I'm afraid that at this point you may need to connect via a KVM or direct monitor / keyboard to properly reconfigure the ifcfg files.

Miguel Angel Ajo

From phaurep at gmail.com Tue Apr 14 08:02:50 2015 From: phaurep at gmail.com (pauline phaure) Date: Tue, 14 Apr 2015 10:02:50 +0200 Subject: [Rdo-list] Fwd: Problems with Openstack installation on CentOS 7 In-Reply-To: References: <35996E91-ADF5-4A06-A6D4-C4B3AC669D22@redhat.com> Message-ID:

OK, I will, but how should I configure these ifcfg files? Should I put br-ex and br-int in the same VLANs as eth0 and eth1?

From mangelajo at redhat.com Tue Apr 14 08:04:07 2015 From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo) Date: Tue, 14 Apr 2015 04:04:07 -0400 (EDT) Subject: Re: [Rdo-list] Fwd: Problems with Openstack installation on CentOS 7 In-Reply-To: References: <35996E91-ADF5-4A06-A6D4-C4B3AC669D22@redhat.com> Message-ID: <969273340.10659687.1428998647206.JavaMail.zimbra@redhat.com>

It's been a while since I last did it, but isn't packstack supposed to do that for you?

What guide steps are you following to modify such files?
From phaurep at gmail.com Tue Apr 14 08:23:06 2015 From: phaurep at gmail.com (pauline phaure) Date: Tue, 14 Apr 2015 10:23:06 +0200 Subject: Re: [Rdo-list] Fwd: Problems with Openstack installation on CentOS 7 In-Reply-To: <969273340.10659687.1428998647206.JavaMail.zimbra@redhat.com> References: <35996E91-ADF5-4A06-A6D4-C4B3AC669D22@redhat.com> <969273340.10659687.1428998647206.JavaMail.zimbra@redhat.com> Message-ID:

No, actually, packstack just added two files which were empty. Besides, my VMs couldn't reach the DHCP agent, and it was clear that the br-ex, br-int and br-tun bridges were all down.

In order to fill my ifcfg-br-ex file I followed this tutorial: https://www.rdoproject.org/Neutron_with_existing_external_network. For br-int I followed this one: https://www.rdoproject.org/forum/discussion/196/quantum-basic-setup/p1. I did exactly the same, but it didn't work for me. I think the problem is related to my interfaces eth0 and eth1 being placed in VLANs on the physical switch. Any idea how I could fix things?
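For reference, the approach taken by the Neutron_with_existing_external_network page pauline mentions is to move the host's address onto br-ex and attach the physical NIC to the bridge as an OVS port. Roughly, as a sketch with placeholder addresses (adapt device names and IPs to your host):

    # /etc/sysconfig/network-scripts/ifcfg-br-ex (addresses are placeholders)
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    GATEWAY=192.0.2.1
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0, the physical uplink,
    # becomes a port of the bridge instead of carrying the IP itself
    DEVICE=eth0
    TYPE=OVSPort
    DEVICETYPE=ovs
    OVS_BRIDGE=br-ex
    ONBOOT=yes

Note that br-int normally needs no ifcfg file at all; it is created and managed by the Open vSwitch agent.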
From apevec at gmail.com Tue Apr 14 08:54:17 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 14 Apr 2015 10:54:17 +0200 Subject: Re: [Rdo-list] Juno vs Kilo in Fedora 22 In-Reply-To: <552CB856.8000208@redhat.com> References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com> <5528071D.4040709@redhat.com> <552CB856.8000208@redhat.com> Message-ID:

> I'd even go the route to push horizon in kilo version to f22, even if the
> rest of the stack will stay on Juno.
> In theory that should work, as it did on my dev platform (using git checkout
> and a remote RHOS-6 installation).

Horizon is special; it would even be fine as an upgrade in a released Fedora, since it has no db with state to upgrade. Swift is the same, it is always backward compatible.

Cheers, Alan

From phaurep at gmail.com Tue Apr 14 09:48:59 2015 From: phaurep at gmail.com (pauline phaure) Date: Tue, 14 Apr 2015 11:48:59 +0200 Subject: Re: [Rdo-list] Fwd: Problems with Openstack installation on CentOS 7 In-Reply-To: References: <35996E91-ADF5-4A06-A6D4-C4B3AC669D22@redhat.com> <969273340.10659687.1428998647206.JavaMail.zimbra@redhat.com> Message-ID:

I turned down br-ex and br-int and returned eth0, eth0.xx, eth1 and eth1.xx to their original state. Now I can ssh to my servers, but when I spawn a VM it can't reach the DHCP agent:

Starting acpid: OK
cirros-ds 'local' up at 0.84
no results found for mode=local. up 0.87. searched: nocloud configdrive ec2
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
Usage: /sbin/cirros-dhcpc
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cirros-ds 'net' up at 181.06
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 181.07. request failed
failed 2/20: up 183.09. request failed
failed 3/20: up 185.10. request failed
failed 4/20: up 187.10. request failed
failed 5/20: up 189.11. request failed
failed 6/20: up 191.11. request failed
failed 7/20: up 193.12. request failed
failed 8/20: up 195.12. request failed
failed 9/20: up 197.13. request failed
failed 10/20: up 199.13. request failed
failed 11/20: up 201.14. request failed
failed 12/20: up 203.14. request failed
failed 13/20: up 205.15. request failed
failed 14/20: up 207.15. request failed
failed 15/20: up 209.16. request failed
failed 16/20: up 211.16. request failed
failed 17/20: up 213.17. request failed
failed 18/20: up 215.17. request failed
failed 19/20: up 217.18. request failed
failed 20/20: up 219.18. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 221.19. searched: nocloud configdrive ec2
failed to get instance-id of datasource
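A few standard first checks for this failure mode, for anyone hitting the same wall. This is a sketch; the names in angle brackets depend on your deployment:

    # On the network/controller node, with admin credentials sourced:
    # is the DHCP agent (and the rest) alive?
    neutron agent-list

    # Does the tenant network have a DHCP namespace, and does it hold an IP?
    ip netns list
    ip netns exec qdhcp-<network-uuid> ip addr

    # Are br-int/br-tun wired up on both nodes?
    ovs-vsctl show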
From mrunge at redhat.com Tue Apr 14 10:15:23 2015 From: mrunge at redhat.com (Matthias Runge) Date: Tue, 14 Apr 2015 12:15:23 +0200 Subject: Re: [Rdo-list] Systemd extension for HTTPD hosted applications In-Reply-To: <648473255763364B961A02AC3BE1060D03CC41AB88@MX19A.corp.emc.com> References: <552C40D5.6030506@redhat.com> <648473255763364B961A02AC3BE1060D03CC41AB88@MX19A.corp.emc.com> Message-ID: <552CE8BB.5060303@redhat.com>

On 14/04/15 08:48, Kaul, Yaniv wrote:
> Can you compress at RPM install time?
>
> Y.

Yes, you can. But that won't take updated dependencies into account. Horizon uses a tonne of javascript and css stuff; if that gets updated, that should result in refreshed compressed files.

Matthias
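In other words, any update of a static-asset dependency has to be followed by a recompression, which is exactly what the ExecStartPre approach automates. Done by hand it would look roughly like this (the package name and manage.py path are illustrative assumptions, not the exact packaged names):

    # Hypothetical example: a javascript dependency of Horizon gets updated...
    yum update python-XStatic-jQuery
    # ...so the compressed bundles must be rebuilt before they are served again
    /usr/bin/python /usr/share/openstack-dashboard/manage.py compress --force
    systemctl restart httpd.service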
From kchamart at redhat.com Tue Apr 14 11:13:26 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 14 Apr 2015 13:13:26 +0200 Subject: [Rdo-list] [Upcoming] Fedora 22 Virtualization Test Day on 16-APR-2015 Message-ID: <20150414111326.GA9262@tesla.redhat.com>

Heya,

If you use virtualization on Fedora for OpenStack development/test and have a few spare cycles, you might want to participate in the upcoming (day after tomorrow) Virtualization Test Day for Fedora 22.

Announcement[1] from the fedora-virt list:

"A reminder that the Fedora 22 Virt Test Day is this coming Thu Apr 16. Check out the test day landing page:

https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization

It's a great time to make sure your virt workflow is still working correctly with the latest packages in Fedora 22. No requirement to run through test cases on the wiki, just show up and let us know what works (or breaks).

Updating to a development release of Fedora scares some people, but it's NOT required to help out with the test day: you can test the latest virt bits on the latest Fedora release courtesy of the virt-preview repo. For more details, as well as easy instructions on updating to Fedora 22, see:

https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization#What.27s_needed_to_test

Though running latest Fedora 22 on a physical machine is still preferred :)

If you want to help out, pop into #fedora-test-day on Thursday and give us a shout!"

[1] https://lists.fedoraproject.org/pipermail/virt/2015-April/004259.html

--
/kashyap

From majopela at redhat.com Tue Apr 14 07:55:44 2015 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Tue, 14 Apr 2015 09:55:44 +0200 Subject: Re: [Rdo-list] Problems with Openstack installation on CentOS 7 In-Reply-To: References: Message-ID:

You may need to connect via a KVM or direct monitor / keyboard to properly reconfigure the ifcfg files.

Miguel Angel Ajo

From berndbausch at gmail.com Tue Apr 14 08:14:01 2015 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 14 Apr 2015 17:14:01 +0900 Subject: Re: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <20150413185356.GB8526@redhat.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> Message-ID: <00b001d0768a$f95e84f0$ec1b8ed0$@gmail.com>

Many thanks for addressing this, Lars (and Steve for raising this to the RDO list). A few comments:

- It would seem that I have the wrong environment; my rdo-release.repo doesn't contain anything about Kilo. This should explain the packaging problems I experienced.

- The yum part of the draft install guide has not yet been written and is pure Juno right now.
It's to write that part that I am trying to install Kilo.

- I will work with my OpenStack docs teammates on the glance problems; perhaps they will go away when my repo configuration is fixed. When I hit problems after using the right repositories, I will raise my flag again, or just file bugs.

Cheers,
Bernd
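A quick way to check whether a host is actually configured for Kilo, along the lines of Bernd's first point (a sketch; the rdo-release package and repo file names follow the usual RDO conventions, but verify on your own system):

    # Which rdo-release package is installed?
    rpm -q rdo-release

    # What did it drop into yum's config, and is a kilo repo among it?
    ls /etc/yum.repos.d/
    grep -i kilo /etc/yum.repos.d/*.repo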
From phaurep at gmail.com Tue Apr 14 11:55:54 2015 From: phaurep at gmail.com (pauline phaure) Date: Tue, 14 Apr 2015 13:55:54 +0200 Subject: Re: [Rdo-list] Rdo-list Digest, Vol 25, Issue 17 In-Reply-To: References: Message-ID:

My OpenStack installation turns out to be a real mess, so I'm going to start from scratch. Can anyone please point me to a tutorial they have actually tried for installing a multinode architecture (including the br-ex and br-int configs)? Please!
From meilei007 at gmail.com Tue Apr 14 12:06:26 2015 From: meilei007 at gmail.com (lei mei) Date: Tue, 14 Apr 2015 20:06:26 +0800 Subject: [Rdo-list] issue about add compute node Message-ID:

Hi everyone,

When I add a new compute node to the OpenStack deployment I set up some months ago, I hit a problem with incompatible package versions. In detail:

1. I prepare a clean CentOS 7 system and add its IP address to the packstack answer file.
2. Run packstack.
3. Everything looks fine and I get the success message at the end.
4. But the nova-compute service can't start on the new compute node, with this in the log: nova-compute fails to start due to "Connection to the hypervisor is broken on host".
5. I checked libvirt on the new compute node and found it has been upgraded to the latest version, while the old OpenStack nodes use the old version. A lot of packages on the new compute node have newer versions than on the old nodes.

So I want to know how you add a new compute node to an existing OpenStack while avoiding this package version incompatibility. BTW, I use the default yum repos; should I maintain an internal static repo for expanding the OpenStack?

-BR
Andy

From mohammed.arafa at gmail.com Tue Apr 14 12:26:08 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 14 Apr 2015 08:26:08 -0400 Subject: Re: [Rdo-list] issue about add compute node In-Reply-To: References: Message-ID:

Andy,

Yes, you will need to stabilise the package versions in an OpenStack deployment, so you had better make your own yum repo. The horses have already bolted from the barn, but keep it in mind for the future.

Right now, what you can do is extract/rebuild the RPMs installed on an existing compute host and use them in your internal static yum repo; see http://unix.stackexchange.com/questions/140778/how-to-build-an-rpm-package-from-the-installed-files. I have never tried rpmrebuild, but I'd imagine it would take a huge amount of time and resources on the compute host. If this is a lab environment, I suggest redeploying your setup again.

Alternatively, you can do "yum -y upgrade" on all the old nodes and reboot; voila, all nodes are now on the same versions.

Thanks

From mangelajo at redhat.com Tue Apr 14 12:33:01 2015 From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo) Date: Tue, 14 Apr 2015 14:33:01 +0200 Subject: Re: [Rdo-list] Problems with Openstack installation on CentOS 7 In-Reply-To: References: <35996E91-ADF5-4A06-A6D4-C4B3AC669D22@redhat.com> <969273340.10659687.1428998647206.JavaMail.zimbra@redhat.com> Message-ID:

Ok, let's first try to fix the inter-VM communication, then let's look at the external network.

Question:

1) What kind of segmentation are you attempting? VLAN? I would recommend you try VXLAN or GRE first. In that case you won't need to modify your local eth0.xx; just specify the local_ip in the OVS agent configuration, and of course all the other VXLAN bits (that's generally done automatically by packstack).

For the external network you could plug your eth1 directly into the br-ex bridge (without VLAN tagging), and then specify:

http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.configure-networks.plug-in-specific.ovs.vlan.html

--provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID

Make sure "physnet1" is the mapping that connects to your br-ex (the bridge mappings in the packstack config, which are installed into the OVS agent configuration file). This way neutron would handle the VLAN tagging/untagging from/to your external network.

Alternatively, if you want to provide your network node/controller with an external IP address, you can try to plug in eth1:1 instead and keep eth1 for the system (warning: I haven't tried this specifically, but I guess it may work).

Best,

Miguel Ángel Ajo
Alternatively, you can do "yum -y upgrade" on all the old nodes and reboot,
and voila, all nodes are now at the same version.

thanks


On Tue, Apr 14, 2015 at 8:06 AM, lei mei wrote:

> Hi everyone,
>     When I add a new compute node to the openstack which I deployed some
> months ago, I meet a problem about the package version incompatible.
> Detail thing is below:
>     1. I prepare a clean centos 7 system and add the ip address to the
> packstack answer file.
>     2. Run packstack
>     3. Everything looks fine and I get the successful hint at last.
>     4. But I find the nova-compute service can't start on new compute node
> with below log:
>     nova compute service fail to start due to "Connection to the
> hypervisor is broken on host"
>     5. I checked the libvirt on compute node, find it has upgrade to the
> latest version but the old openstack use the old version. And a lot of
> packages on compute node have the newer version than the old openstack.
>
> So I want to know how do you add a new compute node to the old openstack
> avoid this package version incompatible issue? BTW, I use the default yum
> repo, so should I maintain a internal static repo for expand the openstack?
>
> -BR
> Andy
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

--

*805010942448935*

*GR750055912MA*

*Link to me on LinkedIn *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mangelajo at redhat.com  Tue Apr 14 12:33:01 2015
From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo)
Date: Tue, 14 Apr 2015 14:33:01 +0200
Subject: [Rdo-list] Problems with Openstack installation on CentOS 7
In-Reply-To:
References: <35996E91-ADF5-4A06-A6D4-C4B3AC669D22@redhat.com>
	<969273340.10659687.1428998647206.JavaMail.zimbra@redhat.com>
Message-ID:

Ok, let's first try to fix the inter-VM communication, then let's look at
the external network.

Question:
1) What kind of segmentation are you attempting? VLAN?

I would recommend you try VXLAN or GRE first. In such a case you won't need
to modify your local eth0.xx, just specify the local_ip on the ovs agent
configuration, and of course all the other VXLAN bits (that's generally
done automatically by packstack).

For the external network you could plug your eth1 directly into the br-ex
bridge (without vlan tagging), and then specify:

http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.configure-networks.plug-in-specific.ovs.vlan.html

--provider:network_type vlan
--provider:physical_network physnet1
--provider:segmentation_id SEG_ID

Make sure 'physnet1' is the mapping that connects to your br-ex (bridge
mappings on the packstack config, that's installed to the ovs agent
configuration file). This way neutron would handle the vlan
tagging/untagging from/to your external network.

Alternatively, if you want to provide your network node/controller with an
external IP address you can try to plug in eth1:1 instead and keep eth1 for
the system (warning, I haven't tried this specifically, but I guess it may
work).

Best,
Miguel Ángel Ajo

> On 14/4/2015, at 11:48, pauline phaure wrote:
>
> I turned down the br-ex and br-int and returned the eth0, eth0.xx,
> eth1, eth1.xx to their original state. Now I can ssh to my servers. But
> when I spawn a VM it can't reach the dhcp agent.
>
> Starting acpid: OK
> cirros-ds 'local' up at 0.84
> no results found for mode=local. up 0.87.
searched: nocloud configdrive ec2
> Starting network...
> udhcpc (v1.20.1) started
> Sending discover...
> Sending discover...
> Sending discover...
> Usage: /sbin/cirros-dhcpc
> No lease, failing
> WARN: /etc/rc3.d/S40-network failed
> cirros-ds 'net' up at 181.06
> checking http://169.254.169.254/2009-04-04/instance-id
> failed 1/20: up 181.07. request failed
> failed 2/20: up 183.09. request failed
> failed 3/20: up 185.10. request failed
> failed 4/20: up 187.10. request failed
> failed 5/20: up 189.11. request failed
> failed 6/20: up 191.11. request failed
> failed 7/20: up 193.12. request failed
> failed 8/20: up 195.12. request failed
> failed 9/20: up 197.13. request failed
> failed 10/20: up 199.13. request failed
> failed 11/20: up 201.14. request failed
> failed 12/20: up 203.14. request failed
> failed 13/20: up 205.15. request failed
> failed 14/20: up 207.15. request failed
> failed 15/20: up 209.16. request failed
> failed 16/20: up 211.16. request failed
> failed 17/20: up 213.17. request failed
> failed 18/20: up 215.17. request failed
> failed 19/20: up 217.18. request failed
> failed 20/20: up 219.18. request failed
> failed to read iid from metadata. tried 20
> no results found for mode=net. up 221.19. searched: nocloud configdrive ec2
> failed to get instance-id of datasource
>
>
> 2015-04-14 10:23 GMT+02:00 pauline phaure >:
> No, actually packstack just added two files which were empty. Besides, my
> VMs couldn't reach the dhcp agent and it was clear that the br-ex, br-int
> and br-tun bridges were all down.
>
> In order to fill my ifcfg-br-ex file I followed this tutorial:
> https://www.rdoproject.org/Neutron_with_existing_external_network . For
> br-int I followed this one:
> https://www.rdoproject.org/forum/discussion/196/quantum-basic-setup/p1 . I
> did exactly the same but it didn't work for me. I think that the problem is
> related to my interfaces eth0 and eth1 being placed in vlans on the
> physical switch. Any idea on how I could fix things?
>
> 2015-04-14 10:04 GMT+02:00 Miguel Angel Ajo Pelayo >:
> It's been a while since I last did it, but isn't packstack supposed to do
> that for you?
>
> What guide steps are you following to modify such files?
>
> ----- Original Message -----
> >
> > OK, I will, but how should I configure these ifcfg files? Should I put
> > br-ex and br-int in the same vlans as eth0 and eth1?
> >
> > 2015-04-14 10:00 GMT+02:00 Miguel Angel Ajo Pelayo <
> mangelajo at redhat.com > :
> >
> > Hi Pauline,
> >
> > I'm afraid that at this point you may need to connect via a KVM or
> > direct monitor / keyboard to properly reconfigure the ifcfg files.
> >
> > > On 14/4/2015, at 9:48, pauline phaure < phaurep at gmail.com > wrote:
> > >
> > > Hi everyone,
> > > I recently installed Openstack with RDO packstack on two servers. On
> > > each server I have 2 interfaces, eth0 and eth1; each one of these
> > > interfaces is on a separate vlan. As the VMs spawned by NOVA couldn't
> > > get an IP address, I saw in a tutorial that I should edit the files
> > > ifcfg-br-ex and ifcfg-br-int, and when I did I lost my connection and
> > > couldn't ssh to my servers anymore.
> > > Do you have any idea how I can solve this?
> > > thank you in advance,
> > > Pauline,
> > > _______________________________________________
> > > Rdo-list mailing list
> > > Rdo-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rdo-list
> > >
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com
> >
> > Miguel Angel Ajo
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> Miguel Angel Ajo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From meilei007 at gmail.com  Tue Apr 14 12:38:28 2015
From: meilei007 at gmail.com (lei mei)
Date: Tue, 14 Apr 2015 20:38:28 +0800
Subject: [Rdo-list] issue about add compute node
In-Reply-To:
References:
Message-ID:

Thanks for your reply, Mohammed. The second method seems a better way; I
can give it a try. Only one problem worries me: are you sure the upgrade on
all the old nodes will not break the current environment?

Best Regards,
Andy

2015-04-14 20:26 GMT+08:00 Mohammed Arafa :

> andy
>
> yes you will need to stabilise the package versions in an openstack
> deployment. so you better make your own yum repo. but the horses have
> bolted already from the barn. so just keep it in mind for the future
>
> right now, what you can do, is to extract/rebuild the rpms on your
> compute host and rebuild them to use in your internal static yum repo. see
>
> http://unix.stackexchange.com/questions/140778/how-to-build-an-rpm-package-from-the-installed-files
>
> never tried rpmrebuild but i'd imagine that it would take a huge amount
> of time and resources on the compute host. if this is a lab environment, i
> suggest re deploying your set up again.
>
> alternatively, you can do "yum -y upgrade" on all the old nodes, reboot
> and voila, all nodes are now up to the same version
>
> thanks
>
>
> On Tue, Apr 14, 2015 at 8:06 AM, lei mei wrote:
>
>> Hi everyone,
>>     When I add a new compute node to the openstack which I deployed some
>> months ago, I meet a problem about the package version incompatible.
>> Detail thing is below:
>>     1. I prepare a clean centos 7 system and add the ip address to the
>> packstack answer file.
>>     2. Run packstack
>>     3. Everything looks fine and I get the successful hint at last.
>>     4. But I find the nova-compute service can't start on new compute node
>> with below log:
>>     nova compute service fail to start due to "Connection to the
>> hypervisor is broken on host"
>>     5. I checked the libvirt on compute node, find it has upgrade to the
>> latest version but the old openstack use the old version. And a lot of
>> packages on compute node have the newer version than the old openstack.
URL:

From mrunge at redhat.com  Tue Apr 14 12:54:14 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Tue, 14 Apr 2015 14:54:14 +0200
Subject: [Rdo-list] Juno vs Kilo in Fedora 22
In-Reply-To: <552CB856.8000208@redhat.com>
References: <648473255763364B961A02AC3BE1060D03CC2DFC00@MX19A.corp.emc.com>
	<5528071D.4040709@redhat.com> <552CB856.8000208@redhat.com>
Message-ID: <552D0DF6.1070802@redhat.com>

On 14/04/15 08:48, Matthias Runge wrote:
> On 13/04/15 12:46, Alan Pevec wrote:
>> 2015-04-10 19:23 GMT+02:00 Matthias Runge :
>
>> Instead of rushing Kilo into F22, I was thinking about the following
>> model:
>> (1) keep latest OpenStack release in Fedora Rawhide, starting from
>> milestone2.
>> Delorean stays RDO Trunk as a place for tracking changes on
>> OpenStack master branch.
>
> Horizon will be broken in F22, unless we push kilo packages.
> I have no intent to backport Django-1.8 support to Juno.

I've just built django-openstack-auth for rawhide and will push that to
the f22 base. It makes Horizon-2015.1 fit for Django-1.8.

The current horizon from delorean aka rdoproject trunk is fine.

Matthias

From mohammed.arafa at gmail.com  Tue Apr 14 12:57:02 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Tue, 14 Apr 2015 08:57:02 -0400
Subject: [Rdo-list] issue about add compute node
In-Reply-To:
References:
Message-ID:

No guarantees!!! :)

Let me know how it goes and what problems you encounter; it would make a
good blog post, or a thesis.

On Apr 14, 2015 8:38 AM, "lei mei" wrote:

> Thanks for your reply, Mohammed, the second method seems a better way I
> can have a try, only one problem I worry about, are you sure the upgrade on
> all the old node will not break the current environment?
>
> Best Regards,
> Andy
>
> 2015-04-14 20:26 GMT+08:00 Mohammed Arafa :
>
>> andy
>>
>> yes you will need to stabilise the package versions in an openstack
>> deployment. so you better make your own yum repo. but the horses have
>> bolted already from the barn. so just keep it in mind for the future
>>
>> right now, what you can do, is to extract/rebuild the rpms on your
>> compute host and rebuild them to use in your internal static yum repo. see
>>
>> http://unix.stackexchange.com/questions/140778/how-to-build-an-rpm-package-from-the-installed-files
>>
>> never tried rpmrebuild but i'd imagine that it would take a huge amount
>> of time and resources on the compute host. if this is a lab environment, i
>> suggest re deploying your set up again.
>>
>> alternatively, you can do "yum -y upgrade" on all the old nodes, reboot
>> and voila, all nodes are now up to the same version
>>
>> thanks
>>
>>
>> On Tue, Apr 14, 2015 at 8:06 AM, lei mei wrote:
>>
>>> Hi everyone,
>>>     When I add a new compute node to the openstack which I deployed some
>>> months ago, I meet a problem about the package version incompatible.
>>> Detail thing is below:
>>>     1. I prepare a clean centos 7 system and add the ip address to the
>>> packstack answer file.
>>>     2. Run packstack
>>>     3. Everything looks fine and I get the successful hint at last.
>>>     4. But I find the nova-compute service can't start on new compute
>>> node with below log:
>>>     nova compute service fail to start due to "Connection to the
>>> hypervisor is broken on host"
>>>     5. I checked the libvirt on compute node, find it has upgrade to
>>> the latest version but the old openstack use the old version. And a lot of
>>> packages on compute node have the newer version than the old openstack.
>>>
>>> So I want to know how do you add a new compute node to the old openstack
>>> avoid this package version incompatible issue? BTW, I use the default yum
>>> repo, so should I maintain a internal static repo for expand the openstack?
>>>
>>> -BR
>>> Andy
>>>
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>
>>
>> --
>>
>> *805010942448935*
>>
>> *GR750055912MA*
>>
>> *Link to me on LinkedIn *
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aortega at redhat.com  Tue Apr 14 13:24:00 2015
From: aortega at redhat.com (Alvaro Lopez Ortega)
Date: Tue, 14 Apr 2015 06:24:00 -0700
Subject: [Rdo-list] Systemd extension for HTTPD hosted applications
In-Reply-To: <552CE8BB.5060303@redhat.com>
References: <552C40D5.6030506@redhat.com>
	<648473255763364B961A02AC3BE1060D03CC41AB88@MX19A.corp.emc.com>
	<552CE8BB.5060303@redhat.com>
Message-ID: <49A9ADF4-1F3D-422C-9C39-6D303BD98DD4@redhat.com>

> On 14 Apr 2015, at 03:15, Matthias Runge wrote:
> On 14/04/15 08:48, Kaul, Yaniv wrote:
>> Can you compress at RPM install time?
>>
>> Y.
>>
> Yes, you can. But that won't take updated dependencies into account.
> Horizon uses a tonne of javascript and css stuff; if that gets updated,
> that should result in refreshed compressed files.

In my understanding, the most sane option is to compress it when the RPM is
built: the very same thing as if you were building a binary executable.
Compiling a binary on RPM install would not make sense, and neither would
it be to compile it every single time it's executed, right?

Compressed JS/CSS files are ugly.. but so are binary files, and we have to
live with them too.
horizon deps failing/blocking kilo In-Reply-To: <648473255763364B961A02AC3BE1060D03CC41AB2F@MX19A.corp.emc.com> References: <1428666956.27400.28.camel@redhat.com> <648473255763364B961A02AC3BE1060D03CC41A9B9@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03CC41AB13@MX19A.corp.emc.com> <722594503.14418908.1428937768552.JavaMail.zimbra@redhat.com> <648473255763364B961A02AC3BE1060D03CC41AB2F@MX19A.corp.emc.com> Message-ID: <1373689570.14949564.1429020086493.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > -----Original Message----- > > From: Javier Pena [mailto:javier.pena at redhat.com] > > Sent: Monday, April 13, 2015 6:09 PM > > To: Kaul, Yaniv > > Cc: Alan Pevec; Rdo-list at redhat.com > > Subject: Re: [Rdo-list] [CI] FYI.. horizon deps failing/blocking kilo > > > > > > > > ----- Original Message ----- > > > > -----Original Message----- > > > > From: Alan Pevec [mailto:apevec at gmail.com] > > > > Sent: Monday, April 13, 2015 12:19 AM > > > > To: Kaul, Yaniv > > > > Cc: Rdo-list at redhat.com > > > > Subject: Re: [Rdo-list] [CI] FYI.. horizon deps failing/blocking > > > > kilo > > > > > > > > > I'm getting (on CentOS 7, using packstack): > > > > > Error: /Stage[main]/Horizon/Package[horizon]/ensure: change from > > > > > absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y > > > > > install > > > > openstack-dashboard' returned 1: Error: Package: openstack-dash > > > > board- 2015.1-0.1.b2.el7.centos.noarch (openstack-kilo) > > > > > Requires: python-oslo-concurrency > > > > > > > > > > > > > > > Doesn't look like the same issue. > > > > > This is from RDO Kilo repos. > > > > > > > > rdo-release-kilo.rpm is currently still bootsraping, you need to > > > > enable RDO Trunk aka Delorean repo. > > > > On CentOS 7 you need: > > > > 1. yum install epel-release > > > > 2. yum install > > > > http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm > > > > 3. cd /etc/yum.repos.d; wget > > > > http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.re > > > > po > > > > > > > > > > > > Cheers, > > > > Alan > > > > > > Thanks - the good news is that it seems to work. > > > The bad news is that the GUI looks a bit Spartan (attached screenshot)... > > > > The reason for this is you are missing package openstack-dashboard-theme. > > But > > don't try to install it yet, it will break Horizon due to some other issues > > with CSS > > paths. > > Is there a BZ I can follow up on this? There was no BZ for this yesterday. Actually, I was wrong in my initial assumption, there is no need for package openstack-dashboard-theme, and the issue was elsewhere. There are actually two issues: - openstack-dashboard packaging needed a change (https://bugzilla.redhat.com/show_bug.cgi?id=1211552 , addressed by https://review.gerrithub.io/#/c/230170/). With that, a default package installation should produce a working setup. - packstack was not generating a correct configuration for Horizon (https://review.openstack.org/173327 and https://review.openstack.org/173331 should fix that). Regards, Javier > Y. > > > > > Javier > > > > > Y. 
> > > _______________________________________________
> > > Rdo-list mailing list
> > > Rdo-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rdo-list
> > >
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From bderzhavets at hotmail.com  Tue Apr 14 14:16:42 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Tue, 14 Apr 2015 10:16:42 -0400
Subject: [Rdo-list] [Upcoming] Fedora 22 Virtualization Test Day on 16-APR-2015
In-Reply-To: <20150414111326.GA9262@tesla.redhat.com>
References: <20150414111326.GA9262@tesla.redhat.com>
Message-ID:

Kashyap,

I was unable to install any graphical environment on F21 Server. A "MATE
Desktop" group install just reports no such group, and so on. A couple of
times F21 WKS failed with MAC address detection of RTL 8169 32-bit on a new
ASUS board. The Server install itself went perfectly, but . . . .

Thank you
Boris

> Date: Tue, 14 Apr 2015 13:13:26 +0200
> From: kchamart at redhat.com
> To: rdo-list at redhat.com
> Subject: [Rdo-list] [Upcoming] Fedora 22 Virtualization Test Day on 16-APR-2015
>
> Heya,
>
> If you use Virtualization on Fedora for OpenStack development/test, and
> have a few spare cycles, you might want to participate in the upcoming
> (day after tomorrow) Virtualization test day for Fedora 22.
>
> Announcement from the fedora-virt list[1]:
>
>     "A reminder that the Fedora 22 Virt Test Day is this coming Thu Apr
>     16. Check out the test day landing page:
>
>     https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization
>
>     It's a great time to make sure your virt workflow is still working
>     correctly with the latest packages in Fedora 22. No requirement to run
>     through test cases on the wiki, just show up and let us know what
>     works (or breaks).
>
>     Updating to a development release of Fedora scares some people, but
>     it's NOT required to help out with the test day: you can test the
>     latest virt bits on the latest Fedora release courtesy of the
>     virt-preview repo. For more details, as well as easy instructions on
>     updating to Fedora 22, see:
>
>     https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization#What.27s_needed_to_test
>
>     Though running latest Fedora 22 on a physical machine is still
>     preferred :)
>
>     If you want to help out, pop into #fedora-test-day on Thursday and
>     give us a shout!"
>
>
> [1] https://lists.fedoraproject.org/pipermail/virt/2015-April/004259.html
>
> --
> /kashyap
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kchamart at redhat.com  Tue Apr 14 14:30:29 2015
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Tue, 14 Apr 2015 16:30:29 +0200
Subject: [Rdo-list] [Upcoming] Fedora 22 Virtualization Test Day on 16-APR-2015
In-Reply-To:
References: <20150414111326.GA9262@tesla.redhat.com>
Message-ID: <20150414143029.GB9262@tesla.redhat.com>

On Tue, Apr 14, 2015 at 10:16:42AM -0400, Boris Derzhavets wrote:
> Kashyap,
>
> I was unable to install any graphical environment on F21 Server.

F21? Don't know about MATE, but GNOME works just fine here.
In any case (assuming you mean F22), the Fedora test list is more
appropriate for these discussions:

    https://admin.fedoraproject.org/mailman/listinfo/test

> A "MATE Desktop" group install just reports no such group, and so on. A
> couple of times F21 WKS failed with MAC address detection of RTL 8169
> 32-bit on a new ASUS board. The Server install itself went perfectly,
> but . . . .
>
> > Date: Tue, 14 Apr 2015 13:13:26 +0200
> > From: kchamart at redhat.com
> > To: rdo-list at redhat.com
> > Subject: [Rdo-list] [Upcoming] Fedora 22 Virtualization Test Day on 16-APR-2015
> >
> > Heya,
> >
> > If you use Virtualization on Fedora for OpenStack development/test, and
> > have a few spare cycles, you might want to participate in the upcoming
> > (day after tomorrow) Virtualization test day for Fedora 22.
> >
> > Announcement from the fedora-virt list[1]:
> >
> >     "A reminder that the Fedora 22 Virt Test Day is this coming Thu Apr
> >     16. Check out the test day landing page:
> >
> >     https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization
> >
> >     It's a great time to make sure your virt workflow is still working
> >     correctly with the latest packages in Fedora 22. No requirement to run
> >     through test cases on the wiki, just show up and let us know what
> >     works (or breaks).
> >
> >     Updating to a development release of Fedora scares some people, but
> >     it's NOT required to help out with the test day: you can test the
> >     latest virt bits on the latest Fedora release courtesy of the
> >     virt-preview repo. For more details, as well as easy instructions on
> >     updating to Fedora 22, see:
> >
> >     https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization#What.27s_needed_to_test
> >
> >     Though running latest Fedora 22 on a physical machine is still
> >     preferred :)
> >
> >     If you want to help out, pop into #fedora-test-day on Thursday and
> >     give us a shout!"
> >
> >
> > [1] https://lists.fedoraproject.org/pipermail/virt/2015-April/004259.html
> >
> > --
> > /kashyap
> >
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com

--
/kashyap

From ayoung at redhat.com  Tue Apr 14 15:18:35 2015
From: ayoung at redhat.com (Adam Young)
Date: Tue, 14 Apr 2015 11:18:35 -0400
Subject: [Rdo-list] Systemd extension for HTTPD hosted applications
In-Reply-To: <49A9ADF4-1F3D-422C-9C39-6D303BD98DD4@redhat.com>
References: <552C40D5.6030506@redhat.com>
	<648473255763364B961A02AC3BE1060D03CC41AB88@MX19A.corp.emc.com>
	<552CE8BB.5060303@redhat.com>
	<49A9ADF4-1F3D-422C-9C39-6D303BD98DD4@redhat.com>
Message-ID: <552D2FCB.4070503@redhat.com>

On 04/14/2015 09:24 AM, Alvaro Lopez Ortega wrote:
>> On 14 Apr 2015, at 03:15, Matthias Runge wrote:
>> On 14/04/15 08:48, Kaul, Yaniv wrote:
>>> Can you compress at RPM install time?
>>>
>>> Y.
>>>
>> Yes, you can. But that won't take updated dependencies into account.
>> Horizon uses a tonne of javascript and css stuff; if that gets updated,
>> that should result in refreshed compressed files.
> In my understanding, the most sane option is to compress it when the RPM
> is built: the very same thing as if you were building a binary
> executable. Compiling a binary on RPM install would not make sense, and
> neither would it be to compile it every single time it's executed, right?
>
> Compressed JS/CSS files are ugly.. but so are binary files, and we have
> to live with them too.
> As I see it, this is NOTABUG :)

The comparable step would be linking, not compiling. We will find ourselves
in a case where the compression has used a version of a Javascript file
with a bug in it; the bug has been fixed in the RPM, but Horizon will still
be serving the stale compressed copy. The right thing to do is make it
possible to recompress on the remote machine.

We can certainly do this at RPM install time, and make recompression a
manual process. That will lead to support requests, but at least it is
solvable.

So, I think the realistic approach is to compress when we install the RPM,
provide docs for how to manually recompress, and work towards making this
sane to do in systemd. The first two are packaging tasks and should be
straightforward. The third we should work on for the Liberty release.

>
> Best,
> Alvaro
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From whayutin at redhat.com  Tue Apr 14 17:42:31 2015
From: whayutin at redhat.com (whayutin)
Date: Tue, 14 Apr 2015 13:42:31 -0400
Subject: [Rdo-list] [CI] selinux audit denials on openvswitch rdo kilo
Message-ID: <1429033351.2698.52.camel@redhat.com>

https://bugzilla.redhat.com/show_bug.cgi?id=1211719

Thank you

From christian at berendt.io  Wed Apr 15 12:02:44 2015
From: christian at berendt.io (Christian Berendt)
Date: Wed, 15 Apr 2015 14:02:44 +0200
Subject: [Rdo-list] Wrong GPG key for rdo-release-kilo.rpm
Message-ID: <552E5364.7080601@berendt.io>

At the moment the rdo-release-kilo.rpm is signed with the rdo-juno-sign
key. This should be fixed before the release of Kilo.

# rpm -ql rdo-release-kilo-0.noarch
/etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno
/etc/yum.repos.d/rdo-release.repo

# grep gpgkey /etc/yum.repos.d/rdo-release.repo
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno

Trying to install something with yum:

---snip---
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno
Importing GPG key 0xDF6674E3:
 Userid     : "rdo-juno-sign "
 Fingerprint: b643 39ca eebf d1ec 3ebf aeda eeca c5d5 df66 74e3
 Package    : rdo-release-kilo-0.noarch (@/rdo-release-kilo)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno
---snap---

Christian.

From christian at berendt.io  Wed Apr 15 14:11:24 2015
From: christian at berendt.io (Christian Berendt)
Date: Wed, 15 Apr 2015 16:11:24 +0200
Subject: [Rdo-list] [packaging] add token flush cronjob script to keystone package
Message-ID: <552E718C.5040203@berendt.io>

Can you please add an hourly token flush cronjob script to the keystone
package, as seen in other distributions? At the moment we manually add this
cronjob in the installation guide with the following command:

# (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
>> /var/spool/cron/keystone

It would be nice to be able to remove this command; at the moment it is
only necessary with RDO.

Christian.

From itzikb at redhat.com  Wed Apr 15 14:28:48 2015
From: itzikb at redhat.com (Itzik Brown)
Date: Wed, 15 Apr 2015 17:28:48 +0300
Subject: [Rdo-list] [rhos-qe-dept][rdo-list] RDO build that passed CI
In-Reply-To: <211423530.16708479.1429015744984.JavaMail.zimbra@redhat.com>
References: <211423530.16708479.1429015744984.JavaMail.zimbra@redhat.com>
Message-ID: <552E75A0.6050804@redhat.com>

Hi,
I successfully installed RDO Kilo using the new delorean.repo.
Some notes:
Still had the issues I listed below.
I had a problem installing Ceilometer (mongod) - I disabled Ceilometer
until I figure it out.
I enabled LBaaS.

Itzik

On 04/14/2015 03:49 PM, Itzik Brown wrote:
> Hi,
>
> First - I succeeded in installing OpenStack Kilo using packstack with RDO repositories.
> It's a distributed environment (Controller and 2 compute nodes).
>
> Haven't installed LBaaS - I saw there is a bug https://bugzilla.redhat.com/show_bug.cgi?id=1209932
> so it should be fixed in the next release.
>
> I had to rerun the installation a few times because there were some errors regarding installation of packages using yum - running the installation again solved the issues.
>
> Other issues:
>
> 1) openstack-nova-compute service failed to start due to missing package python-psutil:
> Filed a bug https://bugzilla.redhat.com/show_bug.cgi?id=1211587
> Workaround - Install the package python-psutil and rerun the installation.
>
> 2) Problem with Horizon - getting a permission denied error.
> There is an old bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1150678.
> I added a comment there.
>
> Workaround - Changing the ownership of /usr/share/openstack-dashboard/static/dashboard to
> apache:apache solves the issue
>
> 3) openstack-nova-novncproxy service fails to start:
> There is a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1200701
> I tried to install websockify from git - the service starts but I still have a problem with the instance's
> console.
>
> I attached the repository files I used.
>
> Itzik
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: rdo_repos.tar.gz
Type: application/gzip
Size: 1026 bytes
Desc: not available
URL:

From lars at redhat.com  Wed Apr 15 15:17:57 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Wed, 15 Apr 2015 11:17:57 -0400
Subject: [Rdo-list] [packaging] add token flush cronjob script to keystone package
In-Reply-To: <552E718C.5040203@berendt.io>
References: <552E718C.5040203@berendt.io>
Message-ID: <20150415151757.GD8526@redhat.com>

On Wed, Apr 15, 2015 at 04:11:24PM +0200, Christian Berendt wrote:
> Can you please add an hourly token flush cronjob script to the keystone
> package, as seen in other distributions?

Christian,

Can you open a bug at
https://bugzilla.redhat.com/enter_bug.cgi?product=RDO with this
request?

Thanks,

--
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:

From christian at berendt.io  Wed Apr 15 15:30:46 2015
From: christian at berendt.io (Christian Berendt)
Date: Wed, 15 Apr 2015 17:30:46 +0200
Subject: [Rdo-list] [packaging] add token flush cronjob script to keystone package
In-Reply-To: <20150415151757.GD8526@redhat.com>
References: <552E718C.5040203@berendt.io> <20150415151757.GD8526@redhat.com>
Message-ID: <552E8426.5090303@berendt.io>

On 04/15/2015 05:17 PM, Lars Kellogg-Stedman wrote:
> Can you open a bug at
> https://bugzilla.redhat.com/enter_bug.cgi?product=RDO with this
> request?

Done. https://bugzilla.redhat.com/show_bug.cgi?id=1212126

Christian.
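For reference, a minimal sketch of the kind of hourly flush job being
requested; the file path and log location here are illustrative, not
something any RDO package ships yet:

---snip---
# /etc/cron.d/openstack-keystone-token-flush (hypothetical file name)
# Prune expired tokens every hour so the token table does not grow
# without bound when a persistent (e.g. SQL) token backend is in use.
@hourly keystone /usr/bin/keystone-manage token_flush >>/var/log/keystone/keystone-tokenflush.log 2>&1
---snap---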
From whayutin at redhat.com Wed Apr 15 17:42:46 2015 From: whayutin at redhat.com (whayutin) Date: Wed, 15 Apr 2015 13:42:46 -0400 Subject: [Rdo-list] [rhos-qe-dept][rdo-list] RDO build that passed CI In-Reply-To: <552E75A0.6050804@redhat.com> References: <211423530.16708479.1429015744984.JavaMail.zimbra@redhat.com> <552E75A0.6050804@redhat.com> Message-ID: <1429119766.2738.29.camel@redhat.com> On Wed, 2015-04-15 at 17:28 +0300, Itzik Brown wrote: > Hi, > I successfully installed RDO Kilo using the new delorean.repo. > > Some notes: > Still had the issues I listed below. > I had a problem a problem installing Ceilometer (mongod) - I disabled Are there any bugs on the mongodb issue? CI is now hitting it as well. > Ceilometer until I figure it out. > I enabled LBaaS. > > Itzik > > On 04/14/2015 03:49 PM, Itzik Brown wrote: > > Hi, > > > > First - I succeed to install Openstack Kilo using packstack with RDO repositories. > > It's a distributed environment (Controller and 2 compute nodes). > > > > Haven't installed LBaaS - I saw there is a bug https://bugzilla.redhat.com/show_bug.cgi?id=1209932 > > so it should be fixed in the next release. > > > > I had to rerun the installation few times because there were some errors regarding problem with installation of packages using yum - Running the installation again solved the issues. > > > > Other issues: > > > > 1)openstack-nova-compute service failed to started due to missing package python-psutil: > > Filled a bug https://bugzilla.redhat.com/show_bug.cgi?id=1211587 > > Workaround - Install the package python-psutil and and rerun the installation. > > > > 2)Problem with Horizon - getting permission denied error. > > There is an old bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1150678. > > I added a comment there. > > > > Workaround - Changing the ownership of the /usr/share/openstack-dashboard/static/dashboard to > > apache:apache solves the issue > > > > 3) openstack-nova-novncproxy service fails to start: > > There is a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1200701 > > I tried to install websockify from git - the services is started but still have problem with the instance's > > console. > > > > I added the repositories files I used. > > > > Itzik > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From whayutin at redhat.com Wed Apr 15 17:51:00 2015 From: whayutin at redhat.com (whayutin) Date: Wed, 15 Apr 2015 13:51:00 -0400 Subject: [Rdo-list] [rhos-qe-dept][rdo-list] RDO build that passed CI In-Reply-To: <1429119766.2738.29.camel@redhat.com> References: <211423530.16708479.1429015744984.JavaMail.zimbra@redhat.com> <552E75A0.6050804@redhat.com> <1429119766.2738.29.camel@redhat.com> Message-ID: <1429120260.2738.30.camel@redhat.com> On Wed, 2015-04-15 at 13:42 -0400, whayutin wrote: > On Wed, 2015-04-15 at 17:28 +0300, Itzik Brown wrote: > > Hi, > > I successfully installed RDO Kilo using the new delorean.repo. > > > > Some notes: > > Still had the issues I listed below. > > I had a problem a problem installing Ceilometer (mongod) - I disabled > > Are there any bugs on the mongodb issue? CI is now hitting it as well. Nope.. https://bugzilla.redhat.com/show_bug.cgi?id=1212174 > > > > Ceilometer until I figure it out. > > I enabled LBaaS. 
> > > > Itzik > > > > On 04/14/2015 03:49 PM, Itzik Brown wrote: > > > Hi, > > > > > > First - I succeed to install Openstack Kilo using packstack with RDO repositories. > > > It's a distributed environment (Controller and 2 compute nodes). > > > > > > Haven't installed LBaaS - I saw there is a bug https://bugzilla.redhat.com/show_bug.cgi?id=1209932 > > > so it should be fixed in the next release. > > > > > > I had to rerun the installation few times because there were some errors regarding problem with installation of packages using yum - Running the installation again solved the issues. > > > > > > Other issues: > > > > > > 1)openstack-nova-compute service failed to started due to missing package python-psutil: > > > Filled a bug https://bugzilla.redhat.com/show_bug.cgi?id=1211587 > > > Workaround - Install the package python-psutil and and rerun the installation. > > > > > > 2)Problem with Horizon - getting permission denied error. > > > There is an old bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1150678. > > > I added a comment there. > > > > > > Workaround - Changing the ownership of the /usr/share/openstack-dashboard/static/dashboard to > > > apache:apache solves the issue > > > > > > 3) openstack-nova-novncproxy service fails to start: > > > There is a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1200701 > > > I tried to install websockify from git - the services is started but still have problem with the instance's > > > console. > > > > > > I added the repositories files I used. > > > > > > Itzik > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From christian at berendt.io Wed Apr 15 18:34:51 2015 From: christian at berendt.io (Christian Berendt) Date: Wed, 15 Apr 2015 20:34:51 +0200 Subject: [Rdo-list] [rhos-qe-dept][rdo-list] RDO build that passed CI In-Reply-To: <1429120260.2738.30.camel@redhat.com> References: <211423530.16708479.1429015744984.JavaMail.zimbra@redhat.com> <552E75A0.6050804@redhat.com> <1429119766.2738.29.camel@redhat.com> <1429120260.2738.30.camel@redhat.com> Message-ID: <552EAF4B.8060504@berendt.io> On 04/15/2015 07:51 PM, whayutin wrote: > https://bugzilla.redhat.com/show_bug.cgi?id=1212174 Thanks. I have the same issue using the Juno branch and added the following comment to the mentioned bug report: ---snip--- I think this is a problem with a moved configuration file. Packstack generates the configuration file /etc/mongodb.conf, but mongodb-server ships and uses /etc/mongod.conf (on CentOS 7.1). Because of that mongod only listens on the loopback device and it is not possible to connect to MongoDB using the address deinfed in CONFIG_MONGODB_HOST. I think adding "config => '/etc/mongod.conf'," to templates/mongodb.pp solves the issue for CentOS 7.1. This should be backported to the Juno branch, I am working with this branch at the moment and this branch is affected, too. ---snap--- Christian. 
From ukalifon at redhat.com  Wed Apr 15 18:56:13 2015
From: ukalifon at redhat.com (Udi Kalifon)
Date: Wed, 15 Apr 2015 14:56:13 -0400 (EDT)
Subject: [Rdo-list] [packaging] add token flush cronjob script to keystone package
In-Reply-To: <552E8426.5090303@berendt.io>
References: <552E718C.5040203@berendt.io> <20150415151757.GD8526@redhat.com>
	<552E8426.5090303@berendt.io>
Message-ID: <656972843.349419.1429124173260.JavaMail.zimbra@redhat.com>

I believe the job should run every minute, no? Isn't that how it was until now?

Thanks,
Udi.

----- Original Message -----
From: "Christian Berendt"
To: "Lars Kellogg-Stedman"
Cc: rdo-list at redhat.com
Sent: Wednesday, April 15, 2015 6:30:46 PM
Subject: Re: [Rdo-list] [packaging] add token flush cronjob script to keystone package

On 04/15/2015 05:17 PM, Lars Kellogg-Stedman wrote:
> Can you open a bug at
> https://bugzilla.redhat.com/enter_bug.cgi?product=RDO with this
> request?

Done. https://bugzilla.redhat.com/show_bug.cgi?id=1212126

Christian.

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From whayutin at redhat.com  Wed Apr 15 21:57:14 2015
From: whayutin at redhat.com (whayutin)
Date: Wed, 15 Apr 2015 17:57:14 -0400
Subject: [Rdo-list] juno package dep errors on mongodb
Message-ID: <1429135034.13695.0.camel@redhat.com>

https://bugzilla.redhat.com/show_bug.cgi?id=1212223

From apevec at gmail.com  Wed Apr 15 23:32:32 2015
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 16 Apr 2015 01:32:32 +0200
Subject: [Rdo-list] [rhos-qe-dept][rdo-list] RDO build that passed CI
In-Reply-To: <552EAF4B.8060504@berendt.io>
References: <211423530.16708479.1429015744984.JavaMail.zimbra@redhat.com>
	<552E75A0.6050804@redhat.com> <1429119766.2738.29.camel@redhat.com>
	<1429120260.2738.30.camel@redhat.com> <552EAF4B.8060504@berendt.io>
Message-ID:

>> https://bugzilla.redhat.com/show_bug.cgi?id=1212174

Copying my comment in the bz:
https://admin.fedoraproject.org/updates/FEDORA-EPEL-2015-1458/mongodb-2.6.9-1.el7
was pushed to stable yesterday, causing this issue.
But isn't such a change breaking all EPEL users, not just RDO??

Cheers,
Alan

From berndbausch at gmail.com  Thu Apr 16 04:40:26 2015
From: berndbausch at gmail.com (Bernd Bausch)
Date: Thu, 16 Apr 2015 13:40:26 +0900
Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7
In-Reply-To: <20150413185356.GB8526@redhat.com>
References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com>
	<1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com>
	<20150413185356.GB8526@redhat.com>
Message-ID: <004e01d077ff$778fe830$66afb890$@gmail.com>

Who can help with installing Kilo on CentOS 7? I am trying it out in view of
updating the install guide on docs.openstack.org and have showstopper
problems early in the process.

Thanks to Lars' input I corrected my repositories and got rid of what looked
like packaging errors. I am running into new problems, though, and am not
sure whether they are due to misconfiguration, whether Kilo still only works
in a narrow setting, or whether these are bugs (hard to believe, since I am
doing nothing fancy).

My OS is CentOS Linux release 7.1.1503 (Core), plus epel-release-7-5.
I use these instructions http://docs-draft.openstack.org/92/167692/13/gate/gate-openstack-manuals-tox -doc-publish-checkbuild/31c1ab2//publish-docs/trunk/install-guide/install/yu m with a few changes: - rdo-release-kilo-0.noarch.rpm instead of the juno rpm - manually installing, enabling and starting memcached - manually adding the _member_ role to keystone ---------------------------------------------------------- PROBLEM 1: I am unable to use memcached as token backend ---------------------------------------------------------- In keystone.conf: [token] driver=keystone.token.persistence.backends.memcache.Token An attempt to request a token hangs, either ``openstack token issue`` or ``curl -i -X POST http://kilocontrol:35357/v2.0/tokens ...``. On the server side, the hang is in the method() call inside /lib/python2.7/site-packages/keystone/common/wsgi.py. If I use a wrong password or malformed credentials, I get the expected error message, so that I assume that the problem occurs after authentication. memcached is running and accessible, but there is no traffic on its port 11211. I don't know enough to trace any further. My workaround is to use a different backend, sql. ---------------------------------------------- Problem 2: glance image-create doesn't work ---------------------------------------------- Using API version 2: # glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT] ... glance: error: unrecognized arguments: --name --disk-format qcow2 --container-format bare --visibility public Let's try the help: # glance help image-create usage: glance image-create [--property ] [--file ] [--progress] Create a new image. Positional arguments: Please run with connection parameters set to retrieve the schema for generating help for this command Optional arguments: --property Arbitrary property to associate with image. May be used multiple times. --file Local file to save downloaded image data to. If this is not specified the image data will be written to stdout. --progress Show upload progress bar. This doesn't look like the image-create I am used to. Switching to API version 1 (I also replace "--visibility public" with "--is-public true"): # glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public true --progress 'Namespace' object has no attribute 'project_domain_name' In fact, any glance command I try, except help, comes back with this error: # glance image-list 'Namespace' object has no attribute 'project_domain_name' The namespace messages appear to come from the message queue. There is nothing in the glance logs - the problem is on the client side. --debug doesn't change the output. Meanwhile, I was made aware that a new image upload workflow exists, but how to make it work with the CLI client is unclear to me. Bernd -----Original Message----- From: Lars Kellogg-Stedman [mailto:lars at redhat.com] Sent: Tuesday, April 14, 2015 3:54 AM To: Steve Gordon Cc: rdo-list; berndbausch at gmail.com Subject: Re: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 > > openstack-selinux not found in the repositories I am using. openstack-selinux is generally available for RHEL and CentOS, but not for Fedora. 
On Fedora in particular, selinux related issues are supposed to be reported directly against selinux-policy. When using RDO Kilo with CentOS 7, the openstack-selinux package comes from the rdo-juno repository. Note that installing https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-ki lo-0.noarch.rpm configures two repositories on your system: [openstack-juno] name=OpenStack Juno Repository baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ enabled=1 skip_if_unavailable=0 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno [openstack-kilo] name=Temporary OpenStack Kilo new deps baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-kilo/epel-7/ skip_if_unavailable=0 gpgcheck=0 enabled=1 I notice that the draft Kilo install guide still points at the Juno packages: > Install the rdo-release-juno package to enable the RDO repository: > > # yum install > # > http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm [from http://docs-draft.openstack.org/92/167692/13/gate/gate-openstack-manuals-tox -doc-publish-checkbuild/31c1ab2//publish-docs/trunk/install-guide/install/yu m/content/ch_basic_environment.html#basics-prerequisites] > > it seems that there is no need to install it, as rules in > > /etc/selinux/targeted/contexts/files/* seem to be the same as on my > > Juno installation. So I am brave, plan to watch the audit log and go > > ahead without modifying SELinux configs. As of last week, there were still selinux bugs open against RDO Juno. I'm not sure what the state of the Kilo packages are at this opint, but it seems likely that there may be issues there as well. > > My NTP server doesn't work (this has nothing to do with OpenStack). > > This forum says that NTP needs to be started after DNS (???) > > https://forum.zentyal.org/index.php/topic,13045.0.html > > In any case, issuing a ``systemctl restart ntpd.service`` fixes the > > problem, but how can it be done automatically? If you're seeing this on a RHEL or Fedora system, you should open a bug in bugzilla so we can track this issue and maybe come up with an appropriate solution. > > ------------------------------------ > > Section 2, Rabbit MQ installation: > > ------------------------------------ > > > > CONTENT: The guide asks for adding a line to /etc/rabbitmq/rabbitmq.config. > > Scratching my head because I don't have that file, but then I see > > that it may not always exist. Perhaps this should be made clearer to > > accommodate slow thinkers. There's a bug open on that for RHEL: https://bugzilla.redhat.com/show_bug.cgi?id=1134956 We should probably clone that to RDO as well. > > ``yum install openstack-keystone python-keystoneclient``: dependency > > python-cryptography can't be found > > > > After adding this repo (found via internet search): > > > > [npmccallum-python-cryptography] > > name=Copr repo for python-cryptography owned by npmccallum > [...] > > This looks very much like a packaging error, and I hope it will > > eventually go away. You shouldn't require COPR repositories for anything. If you encounter a repeatable packaging error, make sure to open a bugzilla so that folks are aware of the issue. On CentOS 7 right now, I am able to install both python-keystoneclient and openstack-keystone without any errors, using only the base, RDO, and EPEL repositories. > > CONTENT: Why are we using API v2, not v3? Why a separate adminurl > > port, and same port for internal and publicurl? Some clarification would help. 
I suspect the answer to all of the above is, "because legacy". v3 support has only recently been showing up in all of the services, and many folks still aren't familiar with the newer APIs. The admin/non-admin port separation is another historical oddity that we have to live with. With Keystone v2, at least, there are some features only available through the admin api on the admin port. > > Major problems with glance. I am stuck with problem 3 below. > > ERROR glance.common.config [-] Unable to load glance-api-keystone > > from configuration file /usr/share/glance/glance-api-dist-paste.ini. > > Got: ImportError('No module named elasticsearch',) There is a known problem that has been corrected in the latest Kilo packages. There was no corresponding bz filed (via apevec, irc). > > Trying to upload an image now fails because of wrong credentials???? > > Haven't resolved this yet. Any glance request is rejected with > > # glance image-list > > Invalid OpenStack Identity credentials. Debugging this will probably require someone looking over your shoulder at your glance configuration. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ From christian at berendt.io Thu Apr 16 08:07:10 2015 From: christian at berendt.io (Christian Berendt) Date: Thu, 16 Apr 2015 10:07:10 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <004e01d077ff$778fe830$66afb890$@gmail.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> Message-ID: <552F6DAE.3050508@berendt.io> On 04/16/2015 06:40 AM, Bernd Bausch wrote: > Who can help installing Kilo on Centos 7? I am trying it out in view of > updating the install guide on docs.openstack.org and have showstopper > problems early in the process. Bernd, I am already testing with CentOS 7 (stopped after Keystone yesterday and will continue with Apache2/Memcache for Keystone today). I will have a look at your Glance issues after completing the necessary changes in the Keystone section. Christian. From mabrams at redhat.com Thu Apr 16 06:57:06 2015 From: mabrams at redhat.com (Mike Abrams) Date: Thu, 16 Apr 2015 02:57:06 -0400 (EDT) Subject: [Rdo-list] [rhos-qe-dept] [rdo-list] RDO build that passed CI In-Reply-To: <1429120260.2738.30.camel@redhat.com> References: <211423530.16708479.1429015744984.JavaMail.zimbra@redhat.com> <552E75A0.6050804@redhat.com> <1429119766.2738.29.camel@redhat.com> <1429120260.2738.30.camel@redhat.com> Message-ID: <552928706.873922.1429167426797.JavaMail.zimbra@redhat.com> I'm hitting "could not restart service httpd". http://pastebin.test.redhat.com/276653 ----- Original Message ----- From: "whayutin" To: "Itzik Brown" , "Alan Pevec" Cc: "rhos-qe-dept" , rdo-list at redhat.com Sent: Wednesday, 15 April, 2015 8:51:00 PM Subject: Re: [rhos-qe-dept] [Rdo-list] [rdo-list] RDO build that passed CI On Wed, 2015-04-15 at 13:42 -0400, whayutin wrote: > On Wed, 2015-04-15 at 17:28 +0300, Itzik Brown wrote: > > Hi, > > I successfully installed RDO Kilo using the new delorean.repo. > > > > Some notes: > > Still had the issues I listed below. > > I had a problem a problem installing Ceilometer (mongod) - I disabled > > Are there any bugs on the mongodb issue? CI is now hitting it as well. Nope.. 
https://bugzilla.redhat.com/show_bug.cgi?id=1212174 > > > > Ceilometer until I figure it out. > > I enabled LBaaS. > > > > Itzik > > > > On 04/14/2015 03:49 PM, Itzik Brown wrote: > > > Hi, > > > > > > First - I succeed to install Openstack Kilo using packstack with RDO repositories. > > > It's a distributed environment (Controller and 2 compute nodes). > > > > > > Haven't installed LBaaS - I saw there is a bug https://bugzilla.redhat.com/show_bug.cgi?id=1209932 > > > so it should be fixed in the next release. > > > > > > I had to rerun the installation few times because there were some errors regarding problem with installation of packages using yum - Running the installation again solved the issues. > > > > > > Other issues: > > > > > > 1)openstack-nova-compute service failed to started due to missing package python-psutil: > > > Filled a bug https://bugzilla.redhat.com/show_bug.cgi?id=1211587 > > > Workaround - Install the package python-psutil and and rerun the installation. > > > > > > 2)Problem with Horizon - getting permission denied error. > > > There is an old bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1150678. > > > I added a comment there. > > > > > > Workaround - Changing the ownership of the /usr/share/openstack-dashboard/static/dashboard to > > > apache:apache solves the issue > > > > > > 3) openstack-nova-novncproxy service fails to start: > > > There is a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1200701 > > > I tried to install websockify from git - the services is started but still have problem with the instance's > > > console. > > > > > > I added the repositories files I used. > > > > > > Itzik > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Mike Abrams mabrams at redhat.com From christian at berendt.io Thu Apr 16 12:00:57 2015 From: christian at berendt.io (Christian Berendt) Date: Thu, 16 Apr 2015 14:00:57 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <004e01d077ff$778fe830$66afb890$@gmail.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> Message-ID: <552FA479.4050103@berendt.io> On 04/16/2015 06:40 AM, Bernd Bausch wrote: > Problem 2: glance image-create doesn't work Bernd, do you work with https://repos.fedorapeople.org/repos/openstack/openstack-kilo/epel-7/? Then you are still using the old packages (2014.2). Milestone 3 packages for 2015.1 are available at https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/kilo-3/. I do not know if we already have usable RC1 packages, I have not yet found them. Christian. 
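A quick way to check which package generation the enabled repos actually
resolve to (the versions shown will of course depend on which .repo files
are in place):

# list every candidate version the enabled repos offer
yum list available --showduplicates openstack-glance python-glanceclient
# 2014.2.x here means the Juno-era packages are still winning;
# 2015.1 builds should show up once the kilo-3 or RC repos are enabled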
From christian at berendt.io Thu Apr 16 12:23:41 2015 From: christian at berendt.io (Christian Berendt) Date: Thu, 16 Apr 2015 14:23:41 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <552FA479.4050103@berendt.io> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> <552FA479.4050103@berendt.io> Message-ID: <552FA9CD.1080208@berendt.io> On 04/16/2015 02:00 PM, Christian Berendt wrote: > I do not know if we already have usable RC1 packages, I have not yet > found them. Ah, Alan mentioned it in another mail on this list: ---snip--- 1. yum install epel-release 2. yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm 3. cd /etc/yum.repos.d; wget http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.repo ---snap--- Christian. From jcoufal at redhat.com Thu Apr 16 15:43:41 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Thu, 16 Apr 2015 17:43:41 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <004e01d077ff$778fe830$66afb890$@gmail.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> Message-ID: <552FD8AD.6090408@redhat.com> Hi Bernd, if you are interested, we are also working on a new installer called RDO-Manager. It is a project based on OpenStack's official deployment tool called TripleO [0]. You can have a look at the project's homepage [1] and follow the installation guide [2]. If you have any questions, feel free to contact me, I am happy to help. [0] TripleO: https://wiki.openstack.org/wiki/TripleO [1] RDO-Manager Home Page: https://www.rdoproject.org/RDO-Manager [2] RDO-Manager User Guide: http://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/index.html Cheers -- Jarda https://www.rdoproject.org/RDO-Manager On 16/04/15 06:40, Bernd Bausch wrote: > Who can help installing Kilo on Centos 7? I am trying it out in view of > updating the install guide on docs.openstack.org and have showstopper > problems early in the process. > > Thanks to Lars' input I corrected my repositories and got rid of what looked > like packaging errors. I am meeting new problems though and am not sure if > they are due to misconfiguration, or if Kilo still works only in a narrow > setting, or bugs (hard to believe, since I am doing nothing fancy)? > > My OS is CentOS Linux release 7.1.1503 (Core), plus epel-release-7-5.
> > I use these instructions > http://docs-draft.openstack.org/92/167692/13/gate/gate-openstack-manuals-tox > -doc-publish-checkbuild/31c1ab2//publish-docs/trunk/install-guide/install/yu > m with a few changes: > > - rdo-release-kilo-0.noarch.rpm instead of the juno rpm > - manually installing, enabling and starting memcached > - manually adding the _member_ role to keystone > > ---------------------------------------------------------- > PROBLEM 1: I am unable to use memcached as token backend > ---------------------------------------------------------- > > In keystone.conf: > > [token] > driver=keystone.token.persistence.backends.memcache.Token > > An attempt to request a token hangs, either ``openstack token issue`` or > ``curl -i -X POST http://kilocontrol:35357/v2.0/tokens ...``. On the server > side, the hang is in the method() call inside > /lib/python2.7/site-packages/keystone/common/wsgi.py. If I use a wrong > password or malformed credentials, I get the expected error message, so that > I assume that the problem occurs after authentication. memcached is running > and accessible, but there is no traffic on its port 11211. I don't know > enough to trace any further. > > My workaround is to use a different backend, sql. > > ---------------------------------------------- > Problem 2: glance image-create doesn't work > ---------------------------------------------- > > Using API version 2: > > # glance image-create --name "cirros-0.3.3-x86_64" --file > /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 > --container-format bare --visibility public --progress > usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT] > ... > glance: error: unrecognized arguments: --name --disk-format qcow2 > --container-format bare --visibility public > > Let's try the help: > > # glance help image-create > usage: glance image-create [--property ] [--file ] > [--progress] > > Create a new image. > Positional arguments: > Please run with connection parameters set to > retrieve > the schema for generating help for this command > Optional arguments: > --property > Arbitrary property to associate with image. May be > used multiple times. > --file Local file to save downloaded image data to. If > this > is not specified the image data will be written to > stdout. > --progress Show upload progress bar. > > This doesn't look like the image-create I am used to. > > Switching to API version 1 (I also replace "--visibility public" with > "--is-public true"): > > # glance image-create --name "cirros-0.3.3-x86_64" --file > /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 > --container-format bare --is-public true --progress > 'Namespace' object has no attribute 'project_domain_name' > > In fact, any glance command I try, except help, comes back with this error: > > # glance image-list > 'Namespace' object has no attribute 'project_domain_name' > > The namespace messages appear to come from the message queue. There is > nothing in the glance logs - the problem is on the client side. --debug > doesn't change the output. Meanwhile, I was made aware that a new image > upload workflow exists, but how to make it work with the CLI client is > unclear to me. 
> > Bernd > -----Original Message----- From: Lars Kellogg-Stedman [mailto:lars at redhat.com] > Sent: Tuesday, April 14, 2015 3:54 AM > To: Steve Gordon > Cc: rdo-list; berndbausch at gmail.com > Subject: Re: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that > much) progress with Kilo install on RHEL/Centos 7 > >>> openstack-selinux not found in the repositories I am using. > > openstack-selinux is generally available for RHEL and CentOS, but not for > Fedora. On Fedora in particular, selinux-related issues are supposed to be > reported directly against selinux-policy. > > When using RDO Kilo with CentOS 7, the openstack-selinux package comes from > the rdo-juno repository. Note that installing > https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-ki > lo-0.noarch.rpm > configures two repositories on your system: > > [openstack-juno] > name=OpenStack Juno Repository > > baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ > enabled=1 > skip_if_unavailable=0 > gpgcheck=1 > gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno > > [openstack-kilo] > name=Temporary OpenStack Kilo new deps > > baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-kilo/epel-7/ > skip_if_unavailable=0 > gpgcheck=0 > enabled=1 > > I notice that the draft Kilo install guide still points at the Juno > packages: > >> Install the rdo-release-juno package to enable the RDO repository: >> >> # yum install >> # >> http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm > > [from > http://docs-draft.openstack.org/92/167692/13/gate/gate-openstack-manuals-tox > -doc-publish-checkbuild/31c1ab2//publish-docs/trunk/install-guide/install/yu > m/content/ch_basic_environment.html#basics-prerequisites] > >>> it seems that there is no need to install it, as rules in >>> /etc/selinux/targeted/contexts/files/* seem to be the same as on my >>> Juno installation. So I am brave, plan to watch the audit log and go >>> ahead without modifying SELinux configs. > > As of last week, there were still selinux bugs open against RDO Juno. > I'm not sure what the state of the Kilo packages is at this point, but it > seems likely that there may be issues there as well. > >>> My NTP server doesn't work (this has nothing to do with OpenStack). >>> This forum says that NTP needs to be started after DNS (???) >>> https://forum.zentyal.org/index.php/topic,13045.0.html >>> In any case, issuing a ``systemctl restart ntpd.service`` fixes the >>> problem, but how can it be done automatically? > > If you're seeing this on a RHEL or Fedora system, you should open a bug in > bugzilla so we can track this issue and maybe come up with an appropriate > solution. > >>> ------------------------------------ >>> Section 2, Rabbit MQ installation: >>> ------------------------------------ >>> >>> CONTENT: The guide asks for adding a line to > /etc/rabbitmq/rabbitmq.config. >>> Scratching my head because I don't have that file, but then I see >>> that it may not always exist. Perhaps this should be made clearer to >>> accommodate slow thinkers. > > There's a bug open on that for RHEL: > > https://bugzilla.redhat.com/show_bug.cgi?id=1134956 > > We should probably clone that to RDO as well. > >>> ``yum install openstack-keystone python-keystoneclient``: dependency >>> python-cryptography can't be found >>> >>> After adding this repo (found via internet search): >>> >>> [npmccallum-python-cryptography] >>> name=Copr repo for python-cryptography owned by npmccallum >> [...]
>>> This looks very much like a packaging error, and I hope it will >>> eventually go away. > > You shouldn't require COPR repositories for anything. If you encounter a > repeatable packaging error, make sure to open a bugzilla so that folks are > aware of the issue. > > On CentOS 7 right now, I am able to install both python-keystoneclient and > openstack-keystone without any errors, using only the base, RDO, and EPEL > repositories. > >>> CONTENT: Why are we using API v2, not v3? Why a separate adminurl >>> port, and same port for internal and publicurl? Some clarification would > help. > > I suspect the answer to all of the above is, "because legacy". v3 support > has only recently been showing up in all of the services, and many folks > still aren't familiar with the newer APIs. The admin/non-admin port > separation is another historical oddity that we have to live with. With > Keystone v2, at least, there are some features only available through the > admin api on the admin port. > >>> Major problems with glance. I am stuck with problem 3 below. > >>> ERROR glance.common.config [-] Unable to load glance-api-keystone >>> from configuration file /usr/share/glance/glance-api-dist-paste.ini. >>> Got: ImportError('No module named elasticsearch',) > > There is a known problem that has been corrected in the latest Kilo > packages. There was no corresponding bz filed (via apevec, irc). > >>> Trying to upload an image now fails because of wrong credentials???? >>> Haven't resolved this yet. Any glance request is rejected with >>> # glance image-list >>> Invalid OpenStack Identity credentials. > > Debugging this will probably require someone looking over your shoulder at > your glance configuration. > > -- > Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From christian at berendt.io Thu Apr 16 15:48:12 2015 From: christian at berendt.io (Christian Berendt) Date: Thu, 16 Apr 2015 17:48:12 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <552FD8AD.6090408@redhat.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> <552FD8AD.6090408@redhat.com> Message-ID: <552FD9BC.8030104@berendt.io> On 04/16/2015 05:43 PM, Jaromir Coufal wrote: > If you are interested you can have a look to the project's homepage [1] > and follow installation guide [2] Thanks for pointing to this guide. We are working on the official OpenStack Installation Guide at the moment. This is a manual step by step installation, we do not use distribution specific installers there. http://docs.openstack.org/juno/install-guide/install/yum/content/ Christian. 
From lars at redhat.com Thu Apr 16 16:35:18 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 16 Apr 2015 12:35:18 -0400 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <004e01d077ff$778fe830$66afb890$@gmail.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> Message-ID: <20150416163518.GA18285@redhat.com> > ---------------------------------------------------------- > PROBLEM 1: I am unable to use memcached as token backend > ---------------------------------------------------------- > > In keystone.conf: > > [token] > driver=keystone.token.persistence.backends.memcache.Token > > An attempt to request a token hangs, either ``openstack token issue`` or > ``curl -i -X POST http://kilocontrol:35357/v2.0/tokens ...``. On the server > side, the hang is in the method() call inside This appears to be an selinux issue: # audit2allow -a #============= keystone_t ============== allow keystone_t memcache_port_t:tcp_socket name_connect; If I put selinux in permissive mode, I am able to successfully use the memcache driver. > ---------------------------------------------- > Problem 2: glance image-create doesn't work > ---------------------------------------------- > > Using API version 2: > > # glance image-create --name "cirros-0.3.3-x86_64" --file > /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 > --container-format bare --visibility public --progress > usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT] > ... > glance: error: unrecognized arguments: --name --disk-format qcow2 > --container-format bare --visibility public I am not able to reproduce your problem. By default, the glance client operates with API version 1, so you would use the '--is-public' parameter: glance image-create --name cirros-public \ --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ --container-format bare --is-public True If you use version 2 of the API, '--is-public' is replaced with '--visibility': glance --os-image-api-version=2 image-create --name cirros-public \ --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ --container-format bare --visibility public Both of these commands worked successfully for me. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lars at redhat.com Thu Apr 16 16:40:12 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 16 Apr 2015 12:40:12 -0400 Subject: [Rdo-list] Help getting started with rdo-manager Message-ID: <20150416164012.GB18285@redhat.com> I am trying to get rdo-manager up and running using the instructions at: https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/internal-html/virt-setup.html. I get as far as running "instack-install-undercloud" on the virtual host that was provisioned by "instack-virt-setup" and I'm getting: [2015-04-16 15:54:11,499] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 6] There are no actual errors on screen. 
If I scroll back a few screens, I find: Error: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: Failed to call refresh: Could not find command 'glance-manage' Error: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: Could not find command 'glance-manage' It seems as if the script did not install openstack-glance. Is this a known problem? -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jslagle at redhat.com Thu Apr 16 17:29:58 2015 From: jslagle at redhat.com (James Slagle) Date: Thu, 16 Apr 2015 13:29:58 -0400 Subject: [Rdo-list] Help getting started with rdo-manager In-Reply-To: <20150416164012.GB18285@redhat.com> References: <20150416164012.GB18285@redhat.com> Message-ID: <20150416172958.GH29586@teletran-1.redhat.com> On Thu, Apr 16, 2015 at 12:40:12PM -0400, Lars Kellogg-Stedman wrote: > I am trying to get rdo-manager up and running using the instructions > at: > > https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/internal-html/virt-setup.html. > > I get as far as running "instack-install-undercloud" on the virtual > host that was provisioned by "instack-virt-setup" and I'm getting: > > [2015-04-16 15:54:11,499] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 6] > > There are no actual errors on screen. If I scroll back a few screens, > I find: > > Error: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: > Failed to call refresh: Could not find command 'glance-manage' > Error: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: > Could not find command 'glance-manage' > > It seems as if the script did not install openstack-glance. Is this a > known problem? Yes, there was a change to puppet-glance that was not compatible with the existing packaging: https://bugs.launchpad.net/puppet-glance/+bug/1444974 The bug ended up WONTFIX, as it's getting fixed on the packaging side. Those fixes are in flight, and we hope to get it working again today. -- -- James Slagle -- From mkassawara at gmail.com Thu Apr 16 19:29:17 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Thu, 16 Apr 2015 14:29:17 -0500 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <20150416163518.GA18285@redhat.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> <20150416163518.GA18285@redhat.com> Message-ID: I forget which thread we're on now, but apparently the glance client/server (and associated workflow for images) changed between kilo-2 and kilo-3. We need to address this in the installation guide (and probably every other piece of documentation)... once I can figure out how it works now because people like to make significant changes without documenting them and the client help is useless. 
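For anyone else trying to pin down what changed, two quick probes (a sketch; this assumes the kilo-3 python-glanceclient and the unified python-openstackclient are installed):

    # ask the client what image-create accepts under each API version
    glance --os-image-api-version 1 help image-create
    glance --os-image-api-version 2 help image-create

    # the unified client is a workable alternative while glanceclient is in flux
    openstack image create --file cirros-0.3.3-x86_64-disk.img \
        --disk-format qcow2 --container-format bare --public cirros-0.3.3-x86_64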
On Thu, Apr 16, 2015 at 11:35 AM, Lars Kellogg-Stedman wrote: > > ---------------------------------------------------------- > > PROBLEM 1: I am unable to use memcached as token backend > > ---------------------------------------------------------- > > > > In keystone.conf: > > > > [token] > > driver=keystone.token.persistence.backends.memcache.Token > > > > An attempt to request a token hangs, either ``openstack token issue`` or > > ``curl -i -X POST http://kilocontrol:35357/v2.0/tokens ...``. On the > server > > side, the hang is in the method() call inside > > This appears to be an selinux issue: > > # audit2allow -a > #============= keystone_t ============== > allow keystone_t memcache_port_t:tcp_socket name_connect; > > If I put selinux in permissive mode, I am able to successfully use the > memcache driver. > > > ---------------------------------------------- > > Problem 2: glance image-create doesn't work > > ---------------------------------------------- > > > > Using API version 2: > > > > # glance image-create --name "cirros-0.3.3-x86_64" --file > > /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 > > --container-format bare --visibility public --progress > > usage: glance [--version] [-d] [-v] [--get-schema] [--timeout > TIMEOUT] > > ... > > glance: error: unrecognized arguments: --name --disk-format qcow2 > > --container-format bare --visibility public > > I am not able to reproduce your problem. By default, the glance > client operates with API version 1, so you would use the '--is-public' > parameter: > > glance image-create --name cirros-public \ > --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ > --container-format bare --is-public True > > If you use version 2 of the API, '--is-public' is replaced with > '--visibility': > > glance --os-image-api-version=2 image-create --name cirros-public \ > --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ > --container-format bare --visibility public > > Both of these commands worked successfully for me. > > -- > Lars Kellogg-Stedman | larsks @ > {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Thu Apr 16 20:58:14 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 16 Apr 2015 16:58:14 -0400 Subject: [Rdo-list] Help getting started with rdo-manager In-Reply-To: <20150416172958.GH29586@teletran-1.redhat.com> References: <20150416164012.GB18285@redhat.com> <20150416172958.GH29586@teletran-1.redhat.com> Message-ID: <20150416205813.GD18285@redhat.com> Okay, a few steps closer; I was able to run 'instack-deploy-overcloud --tuskar' after manually installing the tuskar client, but that got me: + tripleo wait_for_stack_ready 220 10 overcloud Command output matched '(CREATE|UPDATE)_FAILED'. Exiting... Which failed because: $ nova list +---...-+-------...-+--------+------------+-------------+--------------------+ | ID... | Name ... | Status | Task State | Power State | Networks | +---...-+-------...-+--------+------------+-------------+--------------------+ | fd... | ov-o3s... | ERROR | - | NOSTATE | | | 5f... | ov-sty... 
| ACTIVE | - | Running | ctlplane=192.0.2.9 | +---...-+-------...-+--------+------------+-------------+--------------------+ Which failed because: $ nova show fd930d80-b543-4243-84f8-adbf1072fa16 [...] | fault | {"message": "No valid host was found. There are not enough hosts available.", There are two hosts available (baremetal_0 and baremetal_1), as described in https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/internal-html/virt-setup.html. They were both successfully discovered in the "Discovering Nodes" step; the output of `instack-ironic-deployment --show-profile` is: Querying assigned profiles ... 4dce6892-1aca-4207-a2fe-dca12b5128fd "profile:compute,boot_option:local" 7b4dd56d-3912-45db-8d05-ffbf6f769ac6 "profile:compute,boot_option:local" -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ohochman at redhat.com Thu Apr 16 21:08:38 2015 From: ohochman at redhat.com (Omri Hochman) Date: Thu, 16 Apr 2015 17:08:38 -0400 (EDT) Subject: [Rdo-list] Help getting started with rdo-manager In-Reply-To: <20150416205813.GD18285@redhat.com> References: <20150416164012.GB18285@redhat.com> <20150416172958.GH29586@teletran-1.redhat.com> <20150416205813.GD18285@redhat.com> Message-ID: <1888917977.1666212.1429218518242.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Lars Kellogg-Stedman" > To: "James Slagle" > Cc: rdo-list at redhat.com > Sent: Thursday, April 16, 2015 4:58:14 PM > Subject: Re: [Rdo-list] Help getting started with rdo-manager > > Okay, a few steps closer; I was able to run 'instack-deploy-overcloud > --tuskar' after manually installing the tuskar client, but that got > me: > > + tripleo wait_for_stack_ready 220 10 overcloud > Command output matched '(CREATE|UPDATE)_FAILED'. Exiting... > > Which failed because: > > $ nova list > +---...-+-------...-+--------+------------+-------------+--------------------+ > | ID... | Name ... | Status | Task State | Power State | Networks | > +---...-+-------...-+--------+------------+-------------+--------------------+ > | fd... | ov-o3s... | ERROR | - | NOSTATE | | > | 5f... | ov-sty... | ACTIVE | - | Running | ctlplane=192.0.2.9 | > +---...-+-------...-+--------+------------+-------------+--------------------+ > > Which failed because: > > $ nova show fd930d80-b543-4243-84f8-adbf1072fa16 > [...] > | fault | {"message": "No valid host was found. There are not enough hosts > | available.", > > There are two hosts available (baremetal_0 and baremetal_1), as described in > https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/internal-html/virt-setup.html. > They were both successfully discovered in the "Discovering Nodes" step; the > output of `instack-ironic-deployment --show-profile` is: > > Querying assigned profiles ... > > 4dce6892-1aca-4207-a2fe-dca12b5128fd > "profile:compute,boot_option:local" > > 7b4dd56d-3912-45db-8d05-ffbf6f769ac6 > "profile:compute,boot_option:local" > I think you should check that in /etc/edeploy/state you have --> : [('control', 1), ('compute', '*')] Then try: heat stack-delete && ironic node-delete && re-discover the nodes && run the overcloud deployment again. Omri.
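Spelled out, the recovery sequence is roughly the following (a sketch; the UUIDs come from your own `ironic node-list`, and the re-registration/discovery step is the same one from the virt-setup doc):

    heat stack-delete overcloud
    ironic node-list
    ironic node-delete <uuid>    # repeat for each registered node
    # re-register and re-discover the nodes as in the virt-setup doc, then:
    instack-deploy-overcloud --tuskar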
> -- > Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From jslagle at redhat.com Thu Apr 16 21:36:03 2015 From: jslagle at redhat.com (James Slagle) Date: Thu, 16 Apr 2015 17:36:03 -0400 Subject: [Rdo-list] rdo-manager status update Message-ID: <20150416213603.GK29586@teletran-1.redhat.com> Hi, rdo-manager was affected today by a puppet-glance[0] change. That has been fixed on the packaging side. Also, when we moved forward to a later delorean trunk repo to pick up the needed packaging change, we were then hit with a breaking change in python-openstackclient that was causing an error during node discovery[1]. With Brad's help, we got python-ironicclient, python-rdomanager-oscplugin and ironic-discoverd updated. I just wanted to make folks aware of these patches since they're now applied in the rdo-management repos: python-ironicclient: https://review.openstack.org/#/c/174551/ ironic-discoverd: https://review.openstack.org/#/c/174575/ <- the commit message here has more details These fixes are in the process of getting promoted out to the delorean trunk-mgt repos once CI has passed. [0] https://bugs.launchpad.net/puppet-glance/+bug/1444974 [1] ERROR: openstack 'module' object has no attribute 'API_VERSIONS' -- -- James Slagle -- From whayutin at redhat.com Thu Apr 16 22:23:32 2015 From: whayutin at redhat.com (whayutin) Date: Thu, 16 Apr 2015 18:23:32 -0400 Subject: [Rdo-list] rdo juno failure w/ mariadb deps Message-ID: <1429223012.2761.10.camel@redhat.com> https://bugzilla.redhat.com/show_bug.cgi?id=1212651 Thanks! From mkassawara at gmail.com Fri Apr 17 01:03:22 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Thu, 16 Apr 2015 20:03:22 -0500 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> <20150416163518.GA18285@redhat.com> Message-ID: You might try setting the OS_TENANT_NAME environment variable (in addition to OS_PROJECT_NAME) and then try the glance image-create command again. Still investigating further. On Thu, Apr 16, 2015 at 2:29 PM, Matt Kassawara wrote: > I forget which thread we're on now, but apparently the glance > client/server (and associated workflow for images) changed between kilo-2 > and kilo-3. We need to address this in the installation guide (and probably > every other piece of documentation)... once I can figure out how it works > now because people like to make significant changes without documenting > them and the client help is useless. > > On Thu, Apr 16, 2015 at 11:35 AM, Lars Kellogg-Stedman > wrote: > >> > ---------------------------------------------------------- >> > PROBLEM 1: I am unable to use memcached as token backend >> > ---------------------------------------------------------- >> > >> > In keystone.conf: >> > >> > [token] >> > driver=keystone.token.persistence.backends.memcache.Token >> > >> > An attempt to request a token hangs, either ``openstack token issue`` or >> > ``curl -i -X POST http://kilocontrol:35357/v2.0/tokens ...``. 
On the >> server >> > side, the hang is in the method() call inside >> >> This appears to be an selinux issue: >> >> # audit2allow -a >> #============= keystone_t ============== >> allow keystone_t memcache_port_t:tcp_socket name_connect; >> >> If I put selinux in permissive mode, I am able to successfully use the >> memcache driver. >> >> > ---------------------------------------------- >> > Problem 2: glance image-create doesn't work >> > ---------------------------------------------- >> > >> > Using API version 2: >> > >> > # glance image-create --name "cirros-0.3.3-x86_64" --file >> > /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 >> > --container-format bare --visibility public --progress >> > usage: glance [--version] [-d] [-v] [--get-schema] [--timeout >> TIMEOUT] >> > ... >> > glance: error: unrecognized arguments: --name --disk-format qcow2 >> > --container-format bare --visibility public >> >> I am not able to reproduce your problem. By default, the glance >> client operates with API version 1, so you would use the '--is-public' >> parameter: >> >> glance image-create --name cirros-public \ >> --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ >> --container-format bare --is-public True >> >> If you use version 2 of the API, '--is-public' is replaced with >> '--visibility': >> >> glance --os-image-api-version=2 image-create --name cirros-public \ >> --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ >> --container-format bare --visibility public >> >> Both of these commands worked successfully for me. >> >> -- >> Lars Kellogg-Stedman | larsks @ >> {freenode,twitter,github} >> Cloud Engineering / OpenStack | http://blog.oddbit.com/ >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Apr 17 02:35:46 2015 From: whayutin at redhat.com (whayutin) Date: Thu, 16 Apr 2015 22:35:46 -0400 Subject: [Rdo-list] [CI] swift fails to start in rdo kilo Message-ID: <1429238146.3235.3.camel@redhat.com> https://bugzilla.redhat.com/show_bug.cgi?id=1212670 From berndbausch at gmail.com Fri Apr 17 06:40:48 2015 From: berndbausch at gmail.com (Bernd Bausch) Date: Fri, 17 Apr 2015 15:40:48 +0900 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <20150416163518.GA18285@redhat.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> <20150416163518.GA18285@redhat.com> Message-ID: <004e01d078d9$7239c790$56ad56b0$@gmail.com> It works! My repos needed fixing, after which I have: - a Centos 7 base installation - epel 7.5 - openstack-kilo - delorean With that, my second problem (glance client refusing to cooperate) is gone. And permissive mode is indeed a workaround for problem 1. On the way to success, I noticed that openstack-glance doesn't have the right set of dependencies in the delorean repo. I installed openstack-glance-api, ...-registry, ...-doc and ...-common manually. Is this considered a bug or are such wrinkles normal at this stage? Thanks! I will be back with further problems. Perhaps.
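For the record, spelling out the two workarounds (the glance package names are the ones I abbreviated above; the selinux module is a sketch that assumes auditd has logged the AVC denials):

    yum install openstack-glance-api openstack-glance-registry \
        openstack-glance-common openstack-glance-doc

    # instead of leaving selinux permissive for the memcached token driver:
    grep keystone /var/log/audit/audit.log | audit2allow -M keystone_memcache
    semodule -i keystone_memcache.pp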
Bernd -----Original Message----- From: Lars Kellogg-Stedman [mailto:lars at redhat.com] Sent: Friday, April 17, 2015 1:35 AM To: Bernd Bausch Cc: 'rdo-list' Subject: Re: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 > ---------------------------------------------------------- > PROBLEM 1: I am unable to use memcached as token backend > ---------------------------------------------------------- > > In keystone.conf: > > [token] > driver=keystone.token.persistence.backends.memcache.Token > > An attempt to request a token hangs, either ``openstack token issue`` > or ``curl -i -X POST http://kilocontrol:35357/v2.0/tokens ...``. On > the server side, the hang is in the method() call inside This appears to be an selinux issue: # audit2allow -a #============= keystone_t ============== allow keystone_t memcache_port_t:tcp_socket name_connect; If I put selinux in permissive mode, I am able to successfully use the memcache driver. > ---------------------------------------------- > Problem 2: glance image-create doesn't work > ---------------------------------------------- > > Using API version 2: > > # glance image-create --name "cirros-0.3.3-x86_64" --file > /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 > --container-format bare --visibility public --progress > usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT] > ... > glance: error: unrecognized arguments: --name --disk-format qcow2 > --container-format bare --visibility public I am not able to reproduce your problem. By default, the glance client operates with API version 1, so you would use the '--is-public' parameter: glance image-create --name cirros-public \ --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ --container-format bare --is-public True If you use version 2 of the API, '--is-public' is replaced with '--visibility': glance --os-image-api-version=2 image-create --name cirros-public \ --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 \ --container-format bare --visibility public Both of these commands worked successfully for me. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ From christian at berendt.io Fri Apr 17 06:44:52 2015 From: christian at berendt.io (Christian Berendt) Date: Fri, 17 Apr 2015 08:44:52 +0200 Subject: [Rdo-list] Fwd: [OpenStack-docs] [install-guide] (not that much) progress with Kilo install on RHEL/Centos 7 In-Reply-To: <004e01d078d9$7239c790$56ad56b0$@gmail.com> References: <001001d0758c$0fbb78c0$2f326a40$@gmail.com> <1164177642.14449963.1428940891724.JavaMail.zimbra@redhat.com> <20150413185356.GB8526@redhat.com> <004e01d077ff$778fe830$66afb890$@gmail.com> <20150416163518.GA18285@redhat.com> <004e01d078d9$7239c790$56ad56b0$@gmail.com> Message-ID: <5530ABE4.8070909@berendt.io> On 04/17/2015 08:40 AM, Bernd Bausch wrote: > openstack-glance-api, ...-registry, ...-doc and ...-common manually. Is this > considered a bug or are such wrinkles normal at this stage? I proposed a review request to fix this in our docs: https://review.openstack.org/#/c/174603/ Christian. From phaurep at gmail.com Fri Apr 17 07:48:49 2015 From: phaurep at gmail.com (pauline phaure) Date: Fri, 17 Apr 2015 09:48:49 +0200 Subject: [Rdo-list] Problem with floating IP Message-ID: Hello everyone, I have some troubles making the floating IP work. 
When I associate a floating IP to my instance, the instance can reach the neutron-router and ping but cannot ping the external gateway. Any ideas where to look? [image: inline images 3] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 15425 bytes Desc: not available URL: From hguemar at fedoraproject.org Fri Apr 17 08:37:51 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Fri, 17 Apr 2015 10:37:51 +0200 Subject: [Rdo-list] [CI] swift fails to start in rdo kilo In-Reply-To: <1429238146.3235.3.camel@redhat.com> References: <1429238146.3235.3.camel@redhat.com> Message-ID: 2015-04-17 4:35 GMT+02:00 whayutin : > https://bugzilla.redhat.com/show_bug.cgi?id=1212670 > Pete already submitted that new dependency, it wasn't on my radar. Reviewing it. https://bugzilla.redhat.com/show_bug.cgi?id=1212148 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mangelajo at redhat.com Fri Apr 17 08:42:11 2015 From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo) Date: Fri, 17 Apr 2015 10:42:11 +0200 Subject: [Rdo-list] Problem with floating IP In-Reply-To: References: Message-ID: <075CDAEE-E429-4143-9CD5-2EA43FA2E9B7@redhat.com> To troubleshoot this I'd recommend: 1) doing a tcpdump on the controller node, on the external interface attached to br-ex, and finding out what's going on, tcpdump -e -n -v -v -v -i ethX note: as per your schema you may use an "external flat network" (no segmentation) from your network/controller node, so the packets going out from the router should not be tagged in your tcpdump. If you set the external network as vlan tagged, you may have to change it into flat. (such an operation may require removing the floating ips from instances, removing legs from the router (external and internal), and then removing the router, then the external network/subnet). In a separate terminal, it may help to .. 2) look for the router netns: # ip netns qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 note: this is the "virtual router", it lives in a network namespace which is another isolated instance of the linux networking stack; you will find the interfaces and IPs attached with the following command: # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 ip a (here look for the external leg of the router, it will have the external router IP and the floating ip attached) it should look like qg-xxxxxxxx-xx # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 tcpdump -e -n -v -v -v -i qg-xxxxxxx-xx Please tell us how it is going. > On 17/4/2015, at 9:48, pauline phaure wrote: > > Hello everyone, > I have some troubles making the floating IP work. When I associate a floating IP to my instance, the instance can reach the neutron-router and ping but cannot ping the external gateway. Any ideas where to look?
> > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com Miguel Angel Ajo From phaurep at gmail.com Fri Apr 17 09:52:43 2015 From: phaurep at gmail.com (pauline phaure) Date: Fri, 17 Apr 2015 11:52:43 +0200 Subject: [Rdo-list] Problem with floating IP In-Reply-To: <075CDAEE-E429-4143-9CD5-2EA43FA2E9B7@redhat.com> References: <075CDAEE-E429-4143-9CD5-2EA43FA2E9B7@redhat.com> Message-ID: hey Miguel, thank you for your response, plz found below the output of the commands: *ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 ip a* 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 12: qr-207805ae-39: mtu 1500 qdisc noqueue state UNKNOWN link/ether fa:16:3e:1c:62:a8 brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-207805ae-39 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe1c:62a8/64 scope link valid_lft forever preferred_lft forever 13: qg-52b4d686-58: mtu 1500 qdisc noqueue state UNKNOWN link/ether fa:16:3e:34:d5:6e brd ff:ff:ff:ff:ff:ff inet 192.168.2.70/24 brd 192.168.2.255 scope global qg-52b4d686-58 valid_lft forever preferred_lft forever inet *192.168.2.72/32 * brd 192.168.2.72 scope global *qg-52b4d686-58* valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe34:d56e/64 scope link valid_lft forever preferred_lft forever *ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 tcpdump -e -n -v -v -v -i qg-52b4d686-58* equest who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:19.705378 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:20.707292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:22.706910 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:23.707412 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:24.709292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:26.710264 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:27.711297 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 11:49:28.002005 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.42 (Broadcast) tell 192.168.2.1, length 46 11:49:28.002064 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 58298, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.2.72 > 192.168.2.1: ICMP echo request, id 19201, seq 494, length 64 11:49:28.002079 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 58299, offset 0, flags 
[DF], proto ICMP (1), length 84) 192.168.2.72 > 192.168.2.1: ICMP echo request, id 19201, seq 495, length 64 11:49:28.040439 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.5 (Broadcast) tell 192.168.2.1, length 46 11:49:28.079105 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.20 (Broadcast) tell 192.168.2.1, length 46 11:49:28.115671 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.34 (Broadcast) tell 192.168.2.1, length 46 11:49:28.179014 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.22 (Broadcast) tell 192.168.2.1, length 46 11:49:28.223391 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.240 (Broadcast) tell 192.168.2.1, length 46 *tcpdump -e -n -v -v -v -i eth0 * 11:41:44.953118 00:0c:29:56:d9:09 > 74:46:a0:9e:ff:a5, ethertype IPv4 (0x0800), length 166: (tos 0x10, ttl 64, id 10881, offset 0, flags [DF], proto TCP (6), length 152) 192.168.2.19.ssh > 192.168.2.99.53021: Flags [P.], cksum 0x8651 (incorrect -> 0x9f53), seq 2550993953:2550994065, ack 2916435463, win 146, length 112 11:41:44.953804 74:46:a0:9e:ff:a5 > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 60: (tos 0x0, ttl 128, id 31471, offset 0, flags [DF], proto TCP (6), length 40) 192.168.2.99.53021 > 192.168.2.19.ssh: Flags [.], cksum 0x7b65 (correct), seq 1, ack 112, win 16121, length 0 11:41:45.017729 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 99: (tos 0x0, ttl 64, id 17044, offset 0, flags [DF], proto TCP (6), length 85) 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [P.], cksum 0x7339 (correct), seq 2968653045:2968653078, ack 1461763310, win 123, options [nop,nop,TS val 222978 ecr 218783], length 33 11:41:45.018242 00:0c:29:56:d9:09 > 00:0c:29:91:4c:ea, ethertype IPv4 (0x0800), length 78: (tos 0x0, ttl 64, id 47485, offset 0, flags [DF], proto TCP (6), length 64) 192.168.2.19.amqp > 192.168.2.22.45167: Flags [P.], cksum 0x85ac (incorrect -> 0x4c5d), seq 1:13, ack 33, win 330, options [nop,nop,TS val 223746 ecr 222978], length 12 11:41:45.018453 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 66: (tos 0x0, ttl 64, id 17045, offset 0, flags [DF], proto TCP (6), length 52) 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [.], cksum 0x8701 (correct), seq 33, ack 13, win 123, options [nop,nop,TS val 222979 ecr 223746], length 0 2015-04-17 10:42 GMT+02:00 Miguel Angel Ajo Pelayo : > To troubleshoot this I?d recommend you > > 1) doing a tcpdump in the controller node, on the external interface > attached to br-ex, > and find what?s going on, > > tcpdump -e -n -v -v -v -i ethX > > note: as per your schema you may use an ?external flat network? > (no segmentation) from your network/controller node, so the packets going > out from the router > should not be tagged in your tcpdump. > > If you set the external network as vlan tagged, you may have to change it > into flat. (such operation > may require removing the floating ips from instances, removing legs from > router (External, and internal), > and then removing the router, then the external network/subnet). > > > In a separate terminal, it may help to .. 
> 2) look for the router netns: > > # ip netns > qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 > > note : this is the ?virtual router?, it lives in a network namespace which > is another isolated > instance of the linux networking stack., you will find the interfaces and > IPs attached with > the following command: > > # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 ip a > > (here look for the external leg of the router, it will have the external > router IP and the floating ip attached) > it should look like qg-xxxxxxxx-xx > > > # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 tcpdump -e -n > -v -v -v -i qg-xxxxxxx-xx > > > Please tell us how is it going . > > > > > On 17/4/2015, at 9:48, pauline phaure wrote: > > > > Hello everyone, > > I have some troubles making the floating IP work. When I associate a > floating IP to my instance, the instance can reach the neutron-router and > ping but cannot ping the external gateway. any ideas where to look? > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > Miguel Angel Ajo > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mangelajo at redhat.com Fri Apr 17 12:23:03 2015 From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo) Date: Fri, 17 Apr 2015 14:23:03 +0200 Subject: [Rdo-list] Problem with floating IP In-Reply-To: References: <075CDAEE-E429-4143-9CD5-2EA43FA2E9B7@redhat.com> Message-ID: <2BBB7E5E-DC92-49CE-AD45-63D50394F4E6@redhat.com> The traffic shows that neutron is doing the right thing, Check that your ESX is not applying any MAC anti spoof on the vmware vswitch, it looks like the ARP requests could be blocked at switch level since every qrouter is going to have it?s own MAC address (separate from your own VM one). Otherwise connect other machine to the physical switch on vlan30 and check if the ARP requests (it?s broadcast traffic) are arriving to confirm my above theory. 
> On 17/4/2015, at 13:51, pauline phaure wrote: > > i found these lines on the input file of tcpdump -e -n -v -v -v -i eth0 > > 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 > 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 > 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 > 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 > 11:41:46.661008 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:41:47.663307 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:41:48.665301 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > > > 2015-04-17 11:52 GMT+02:00 pauline phaure >: > hey Miguel, thank you for your response, plz found below the output of the commands: > > > ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 12: qr-207805ae-39: mtu 1500 qdisc noqueue state UNKNOWN > link/ether fa:16:3e:1c:62:a8 brd ff:ff:ff:ff:ff:ff > inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-207805ae-39 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe1c:62a8/64 scope link > valid_lft forever preferred_lft forever > 13: qg-52b4d686-58: mtu 1500 qdisc noqueue state UNKNOWN > link/ether fa:16:3e:34:d5:6e brd ff:ff:ff:ff:ff:ff > inet 192.168.2.70/24 brd 192.168.2.255 scope global qg-52b4d686-58 > valid_lft forever preferred_lft forever > inet 192.168.2.72/32 brd 192.168.2.72 scope global qg-52b4d686-58 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe34:d56e/64 scope link > valid_lft forever preferred_lft forever > > > ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 tcpdump -e -n -v -v -v -i qg-52b4d686-58 > > equest who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:19.705378 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:20.707292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:22.706910 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:23.707412 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:24.709292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:26.710264 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:27.711297 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 > 11:49:28.002005 00:23:48:9e:85:7c > Broadcast, 
ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.42 (Broadcast) tell 192.168.2.1, length 46 > 11:49:28.002064 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 58298, offset 0, flags [DF], proto ICMP (1), length 84) > 192.168.2.72 > 192.168.2.1 : ICMP echo request, id 19201, seq 494, length 64 > 11:49:28.002079 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 58299, offset 0, flags [DF], proto ICMP (1), length 84) > 192.168.2.72 > 192.168.2.1 : ICMP echo request, id 19201, seq 495, length 64 > 11:49:28.040439 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.5 (Broadcast) tell 192.168.2.1, length 46 > 11:49:28.079105 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.20 (Broadcast) tell 192.168.2.1, length 46 > 11:49:28.115671 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.34 (Broadcast) tell 192.168.2.1, length 46 > 11:49:28.179014 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.22 (Broadcast) tell 192.168.2.1, length 46 > 11:49:28.223391 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.240 (Broadcast) tell 192.168.2.1, length 46 > > > tcpdump -e -n -v -v -v -i eth0 > > 11:41:44.953118 00:0c:29:56:d9:09 > 74:46:a0:9e:ff:a5, ethertype IPv4 (0x0800), length 166: (tos 0x10, ttl 64, id 10881, offset 0, flags [DF], proto TCP (6), length 152) > 192.168.2.19.ssh > 192.168.2.99.53021: Flags [P.], cksum 0x8651 (incorrect -> 0x9f53), seq 2550993953:2550994065, ack 2916435463, win 146, length 112 > 11:41:44.953804 74:46:a0:9e:ff:a5 > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 60: (tos 0x0, ttl 128, id 31471, offset 0, flags [DF], proto TCP (6), length 40) > 192.168.2.99.53021 > 192.168.2.19.ssh: Flags [.], cksum 0x7b65 (correct), seq 1, ack 112, win 16121, length 0 > 11:41:45.017729 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 99: (tos 0x0, ttl 64, id 17044, offset 0, flags [DF], proto TCP (6), length 85) > 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [P.], cksum 0x7339 (correct), seq 2968653045:2968653078, ack 1461763310, win 123, options [nop,nop,TS val 222978 ecr 218783], length 33 > 11:41:45.018242 00:0c:29:56:d9:09 > 00:0c:29:91:4c:ea, ethertype IPv4 (0x0800), length 78: (tos 0x0, ttl 64, id 47485, offset 0, flags [DF], proto TCP (6), length 64) > 192.168.2.19.amqp > 192.168.2.22.45167: Flags [P.], cksum 0x85ac (incorrect -> 0x4c5d), seq 1:13, ack 33, win 330, options [nop,nop,TS val 223746 ecr 222978], length 12 > 11:41:45.018453 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 66: (tos 0x0, ttl 64, id 17045, offset 0, flags [DF], proto TCP (6), length 52) > 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [.], cksum 0x8701 (correct), seq 33, ack 13, win 123, options [nop,nop,TS val 222979 ecr 223746], length 0 > > > > 2015-04-17 10:42 GMT+02:00 Miguel Angel Ajo Pelayo >: > To troubleshoot this I?d recommend you > > 1) doing a tcpdump in the controller node, on the external interface attached to br-ex, > and find what?s going on, > > tcpdump -e -n -v -v -v -i ethX > > note: as per your schema you may use an ?external flat network? 
> (no segmentation) from your network/controller node, so the packets going out from the router > should not be tagged in your tcpdump. > > If you set the external network as vlan tagged, you may have to change it into flat. (such operation > may require removing the floating ips from instances, removing legs from router (External, and internal), > and then removing the router, then the external network/subnet). > > > In a separate terminal, it may help to .. > 2) look for the router netns: > > # ip netns > qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 > > note : this is the ?virtual router?, it lives in a network namespace which is another isolated > instance of the linux networking stack., you will find the interfaces and IPs attached with > the following command: > > # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 ip a > > (here look for the external leg of the router, it will have the external router IP and the floating ip attached) > it should look like qg-xxxxxxxx-xx > > > # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 tcpdump -e -n -v -v -v -i qg-xxxxxxx-xx > > > Please tell us how is it going . > > > > > On 17/4/2015, at 9:48, pauline phaure > wrote: > > > > Hello everyone, > > I have some troubles making the floating IP work. When I associate a floating IP to my instance, the instance can reach the neutron-router and ping but cannot ping the external gateway. any ideas where to look? > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > Miguel Angel Ajo > > > > > Miguel Angel Ajo -------------- next part -------------- An HTML attachment was scrubbed... URL: From phaurep at gmail.com Fri Apr 17 13:55:08 2015 From: phaurep at gmail.com (pauline phaure) Date: Fri, 17 Apr 2015 15:55:08 +0200 Subject: [Rdo-list] Problem with floating IP In-Reply-To: <2BBB7E5E-DC92-49CE-AD45-63D50394F4E6@redhat.com> References: <075CDAEE-E429-4143-9CD5-2EA43FA2E9B7@redhat.com> <2BBB7E5E-DC92-49CE-AD45-63D50394F4E6@redhat.com> Message-ID: Thank you Miguel, my openstack is working fine on ESXi. But when I try to do the same things with my openstack installation on real servers it doesn't work. I'm still stuck with br-ex problem and the vlans in which my interfaces are. br-ex can't reach the outside because eth0 is in a vlan. any idea 2015-04-17 14:23 GMT+02:00 Miguel Angel Ajo Pelayo : > > The traffic shows that neutron is doing the right thing, > > Check that your ESX is not applying any MAC anti spoof on the > vmware vswitch, it looks like the ARP requests could be blocked at switch > level > since every qrouter is going to have it?s own MAC address (separate from > your own > VM one). > > Otherwise connect other machine to the physical switch on vlan30 and check > if > the ARP requests (it?s broadcast traffic) are arriving to confirm my above > theory. 
> On 17/4/2015, at 13:51, pauline phaure <phaurep at gmail.com> wrote:
> >
> > i found these lines in the output of
> >
> > *tcpdump -e -n -v -v -v -i eth0 *
> > 192.168.2.72 > 10.0.0.4: ICMP host 192.168.2.1 unreachable, length 92
> > 192.168.2.72 > 10.0.0.4: ICMP host 192.168.2.1 unreachable, length 92
> > 192.168.2.72 > 10.0.0.4: ICMP host 192.168.2.1 unreachable, length 92
> > 192.168.2.72 > 10.0.0.4: ICMP host 192.168.2.1 unreachable, length 92
> > 11:41:46.661008 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> > 11:41:47.663307 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> > 11:41:48.665301 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >
> >
> > 2015-04-17 11:52 GMT+02:00 pauline phaure <phaurep at gmail.com>:
> >
> >> hey Miguel, thank you for your response, please find below the output of the commands:
> >>
> >> *ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 ip a*
> >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN
> >>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >>     inet 127.0.0.1/8 scope host lo
> >>        valid_lft forever preferred_lft forever
> >>     inet6 ::1/128 scope host
> >>        valid_lft forever preferred_lft forever
> >> 12: qr-207805ae-39: mtu 1500 qdisc noqueue state UNKNOWN
> >>     link/ether fa:16:3e:1c:62:a8 brd ff:ff:ff:ff:ff:ff
> >>     inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-207805ae-39
> >>        valid_lft forever preferred_lft forever
> >>     inet6 fe80::f816:3eff:fe1c:62a8/64 scope link
> >>        valid_lft forever preferred_lft forever
> >> 13: qg-52b4d686-58: mtu 1500 qdisc noqueue state UNKNOWN
> >>     link/ether fa:16:3e:34:d5:6e brd ff:ff:ff:ff:ff:ff
> >>     inet 192.168.2.70/24 brd 192.168.2.255 scope global qg-52b4d686-58
> >>        valid_lft forever preferred_lft forever
> >>     inet *192.168.2.72/32* brd 192.168.2.72 scope global *qg-52b4d686-58*
> >>        valid_lft forever preferred_lft forever
> >>     inet6 fe80::f816:3eff:fe34:d56e/64 scope link
> >>        valid_lft forever preferred_lft forever
> >>
> >>
> >> *ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 tcpdump -e -n -v -v -v -i qg-52b4d686-58*
> >>
> >> equest who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> 11:49:19.705378 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> 11:49:20.707292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> 11:49:22.706910 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> 11:49:23.707412 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> 11:49:24.709292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> 11:49:26.710264 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> 11:49:27.711297 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28
> >> [remainder of this capture and the eth0 capture are identical to the ones quoted earlier in the thread; trimmed]
> >>
> >> 2015-04-17 10:42 GMT+02:00 Miguel Angel Ajo Pelayo <mangelajo at redhat.com>:
> >> [Miguel's troubleshooting steps, quoted in full earlier in the thread; trimmed]
>
> Miguel Angel Ajo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lars at redhat.com Fri Apr 17 14:00:11 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Fri, 17 Apr 2015 10:00:11 -0400
Subject: [Rdo-list] Help getting started with rdo-manager
In-Reply-To: <1888917977.1666212.1429218518242.JavaMail.zimbra@redhat.com>
References: <20150416164012.GB18285@redhat.com>
	<20150416172958.GH29586@teletran-1.redhat.com>
	<20150416205813.GD18285@redhat.com>
	<1888917977.1666212.1429218518242.JavaMail.zimbra@redhat.com>
Message-ID: <20150417140011.GF18285@redhat.com>

> I think you should check that in /etc/edeploy/state you have
> --> : [('control', 1), ('compute', '*')]

Omri,

Thanks, that did get me one step closer.

The deploy is still failing, but now it's due to the following resource:

| ControllerNodesPostDeployment | 9a24f414-4e35-4d27-b550-77d47651f56a | OS::TripleO::ControllerPostDeployment | CREATE_FAILED | 2015-04-17T01:28:32Z |

--
Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:
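A natural next step for a CREATE_FAILED resource like this one is to ask Heat for the recorded failure reason. A rough sketch with the Kilo-era heat CLI (the stack name overcloud is assumed, as in rdo-manager deployments):

    # list the stack's resources and spot the failed ones
    heat resource-list overcloud

    # show the status reason Heat recorded for the failed resource
    heat resource-show overcloud ControllerNodesPostDeployment

    # the event log usually carries the first real error message
    heat event-list overcloud

Post-deployment resources are typically nested stacks, so the drill often has to be repeated one level down with the nested stack's id.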
From phaurep at gmail.com Fri Apr 17 14:07:13 2015
From: phaurep at gmail.com (pauline phaure)
Date: Fri, 17 Apr 2015 16:07:13 +0200
Subject: [Rdo-list] Rdo-list Digest, Vol 25, Issue 28
In-Reply-To:
References:
Message-ID:

This is my architecture; I don't know how to connect br-ex to the external network and ping the router. Any ideas?

[image: inline image 2]

2015-04-17 16:00 GMT+02:00 :
> [Rdo-list Digest, Vol 25, Issue 28 quoted in full -- identical to the two messages above; trimmed]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 11855 bytes
Desc: not available
URL:
From pgsousa at gmail.com Fri Apr 17 14:21:02 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 17 Apr 2015 15:21:02 +0100
Subject: Re: [Rdo-list] Rdo-list Digest, Vol 25, Issue 28
In-Reply-To:
References:
Message-ID:

Hi Pauline, can you show how you set up the bridges?

# ovs-vsctl show

Regards,
Pedro Sousa

On Fri, Apr 17, 2015 at 3:07 PM, pauline phaure <phaurep at gmail.com> wrote:
> this is my architecture, i don't know how to connect br-ex to the external
> network and ping the router. any ideas?
> [quoted digest trimmed -- identical to the messages above]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 11855 bytes
Desc: not available
URL:
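For reference, on a setup where the l3 agent plugs the router gateway straight into br-ex and the uplink VLAN subinterface has been wired into that bridge, ovs-vsctl output has a recognizable shape. A rough sketch (interface and port names are illustrative, reusing the naming from earlier in this thread, not taken from Pauline's actual output):

    # one-time wiring of the VLAN subinterface into the external bridge
    ovs-vsctl add-port br-ex eth0.30

    # 'ovs-vsctl show' should then list something along these lines:
    #   Bridge br-ex
    #       Port br-ex
    #           Interface br-ex
    #               type: internal
    #       Port "eth0.30"
    #           Interface "eth0.30"
    #       Port "qg-52b4d686-58"
    #           Interface "qg-52b4d686-58"
    #               type: internal

If neither eth0 nor eth0.30 shows up as a port on br-ex, the router's ARP requests never leave the host, which would match the captures above.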
From christian at berendt.io Fri Apr 17 14:38:00 2015
From: christian at berendt.io (Christian Berendt)
Date: Fri, 17 Apr 2015 16:38:00 +0200
Subject: [Rdo-list] [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
Message-ID: <55311AC8.6040508@berendt.io>

Only the file /usr/share/cinder/cinder-dist.conf is included in the package openstack-cinder. The file /etc/cinder/cinder.conf is not included in the package, and /usr/share/cinder/cinder-dist.conf is not really complete.

Can you please add the file generated with tox -egenconfig (https://github.com/openstack/cinder/blob/master/etc/cinder/README-cinder.conf.sample) to openstack-cinder?

Christian.
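Until that lands in the package, the sample can be generated locally from the cinder source tree. A sketch of the workflow the README above describes (the output path is the one that README points at):

    git clone https://github.com/openstack/cinder.git
    cd cinder
    tox -egenconfig

    # the generated sample lands under etc/cinder/ and can be copied into place
    cp etc/cinder/cinder.conf.sample /etc/cinder/cinder.conf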
From phaurep at gmail.com Fri Apr 17 14:41:03 2015
From: phaurep at gmail.com (pauline phaure)
Date: Fri, 17 Apr 2015 16:41:03 +0200
Subject: Re: [Rdo-list] Rdo-list Digest, Vol 25, Issue 28
In-Reply-To:
References:
Message-ID:

Please find attached the output of ovs-vsctl show.

2015-04-17 16:21 GMT+02:00 Pedro Sousa <pgsousa at gmail.com>:
> Hi Pauline, can you show how you set up the bridges?
>
> # ovs-vsctl show
>
> Regards,
> Pedro Sousa
>
> [earlier thread quoted in full below this point; trimmed]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 11855 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 20150417_163705.jpg
Type: image/jpeg
Size: 2295404 bytes
Desc: not available
URL:
From apevec at gmail.com Fri Apr 17 14:47:20 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 17 Apr 2015 16:47:20 +0200
Subject: Re: [Rdo-list] [CI] swift fails to start in rdo kilo
In-Reply-To:
References: <1429238146.3235.3.camel@redhat.com>
Message-ID:

>> https://bugzilla.redhat.com/show_bug.cgi?id=1212670
>
> Pete already submitted that new dependency, it wasn't on my radar.
> Reviewing it.
> https://bugzilla.redhat.com/show_bug.cgi?id=1212148

There are a few issues with this review + deps, so in the meantime I've uploaded draft packages to RDO Kilo and updated the deps in openstack-swift:
https://bugzilla.redhat.com/show_bug.cgi?id=1212670#c4

CI passed and I've updated the symlink latest-RDO-trunk-CI -> ad/66/ad66801915c0b87f3ba3b6648d473d601deac1e9_af64a80a

Cheers,
Alan
From pgsousa at gmail.com Fri Apr 17 14:54:17 2015
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 17 Apr 2015 15:54:17 +0100
Subject: Re: [Rdo-list] Rdo-list Digest, Vol 25, Issue 28
In-Reply-To:
References:
Message-ID:

I don't understand why you use two bridges, br-ex and br-ex.30. Considering that you have the IP 192.168.2.19 configured on ifcfg-eth0.30, you should use the br-ex:eth0.30 mapping.

Can you show your interfaces configuration?

# ifconfig

And bridge_mappings in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini?

Thanks

On Fri, Apr 17, 2015 at 3:07 PM, pauline phaure <phaurep at gmail.com> wrote:
> this is my architecture, i don't know how to connect br-ex to the external
> network and ping the router. any ideas?
> [quoted digest trimmed -- identical to the messages above]
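To make that concrete, a minimal sketch of what such a single-NIC, VLAN-30 layout could look like follows. The physnet label extnet is illustrative; the ifcfg keywords are the ones the openvswitch network-scripts understand, but whether VLAN=yes is honoured for an OVSPort may depend on the initscripts version (adding the port by hand with ovs-vsctl add-port br-ex eth0.30, as above, is the fallback):

    # /etc/sysconfig/network-scripts/ifcfg-eth0.30
    # the VLAN subinterface carries no IP; it is just a port on br-ex
    DEVICE=eth0.30
    VLAN=yes
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-br-ex
    # the host IP moves from eth0.30 onto the bridge itself
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.168.2.19
    NETMASK=255.255.255.0
    GATEWAY=192.168.2.1
    ONBOOT=yes

    # /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
    [ovs]
    bridge_mappings = extnet:br-ex

The external network then has to be created as flat on that same physnet, since the switch already adds and strips the VLAN 30 tag for eth0.30, e.g.:

    neutron net-create external --router:external=True --provider:network_type flat --provider:physical_network extnet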
>> > >> > >> > >> > On 17/4/2015, at 13:51, pauline phaure wrote: >> > >> > i found these lines on the input file of >> > >> > *tcpdump -e -n -v -v -v -i eth0 *192.168.2.72 > 10.0.0.4: ICMP host >> > 192.168.2.1 unreachable, length 92 >> > 192.168.2.72 > 10.0.0.4: ICMP host 192.168.2.1 unreachable, length >> 92 >> > 192.168.2.72 > 10.0.0.4: ICMP host 192.168.2.1 unreachable, length >> 92 >> > 192.168.2.72 > 10.0.0.4: ICMP host 192.168.2.1 unreachable, length >> 92 >> > 11:41:46.661008 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> > 192.168.2.72, length 28 >> > 11:41:47.663307 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> > 192.168.2.72, length 28 >> > 11:41:48.665301 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> > 192.168.2.72, length 28 >> > >> > >> > 2015-04-17 11:52 GMT+02:00 pauline phaure : >> > >> >> hey Miguel, thank you for your response, plz found below the output of >> >> the commands: >> >> >> >> >> >> *ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 ip a* >> >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> >> inet 127.0.0.1/8 scope host lo >> >> valid_lft forever preferred_lft forever >> >> inet6 ::1/128 scope host >> >> valid_lft forever preferred_lft forever >> >> 12: qr-207805ae-39: mtu 1500 qdisc >> >> noqueue state UNKNOWN >> >> link/ether fa:16:3e:1c:62:a8 brd ff:ff:ff:ff:ff:ff >> >> inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-207805ae-39 >> >> valid_lft forever preferred_lft forever >> >> inet6 fe80::f816:3eff:fe1c:62a8/64 scope link >> >> valid_lft forever preferred_lft forever >> >> 13: qg-52b4d686-58: mtu 1500 qdisc >> >> noqueue state UNKNOWN >> >> link/ether fa:16:3e:34:d5:6e brd ff:ff:ff:ff:ff:ff >> >> inet 192.168.2.70/24 brd 192.168.2.255 scope global qg-52b4d686-58 >> >> valid_lft forever preferred_lft forever >> >> inet *192.168.2.72/32 * brd 192.168.2.72 >> >> scope global *qg-52b4d686-58* >> >> valid_lft forever preferred_lft forever >> >> inet6 fe80::f816:3eff:fe34:d56e/64 scope link >> >> valid_lft forever preferred_lft forever >> >> >> >> >> >> *ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 tcpdump -e >> -n >> >> -v -v -v -i qg-52b4d686-58* >> >> >> >> equest who-has 192.168.2.1 tell 192.168.2.72, length 28 >> >> 11:49:19.705378 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> >> 192.168.2.72, length 28 >> >> 11:49:20.707292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> >> 192.168.2.72, length 28 >> >> 11:49:22.706910 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> >> 192.168.2.72, length 28 >> >> 11:49:23.707412 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> >> 192.168.2.72, length 28 >> >> 11:49:24.709292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> >> 192.168.2.72, length 28 >> >> 11:49:26.710264 
fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> >> 192.168.2.72, length 28 >> >> 11:49:27.711297 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), >> >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 >> tell >> >> 192.168.2.72, length 28 >> >> 11:49:28.002005 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), >> >> length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.42 >> >> (Broadcast) tell 192.168.2.1, length 46 >> >> 11:49:28.002064 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 >> >> (0x0800), length 98: (tos 0x0, ttl 63, id 58298, offset 0, flags [DF], >> >> proto ICMP (1), length 84) >> >> 192.168.2.72 > 192.168.2.1: ICMP echo request, id 19201, seq 494, >> >> length 64 >> >> 11:49:28.002079 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 >> >> (0x0800), length 98: (tos 0x0, ttl 63, id 58299, offset 0, flags [DF], >> >> proto ICMP (1), length 84) >> >> 192.168.2.72 > 192.168.2.1: ICMP echo request, id 19201, seq 495, >> >> length 64 >> >> 11:49:28.040439 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), >> >> length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.5 >> >> (Broadcast) tell 192.168.2.1, length 46 >> >> 11:49:28.079105 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), >> >> length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.20 >> >> (Broadcast) tell 192.168.2.1, length 46 >> >> 11:49:28.115671 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), >> >> length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.34 >> >> (Broadcast) tell 192.168.2.1, length 46 >> >> 11:49:28.179014 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), >> >> length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.22 >> >> (Broadcast) tell 192.168.2.1, length 46 >> >> 11:49:28.223391 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), >> >> length 60: Ethernet (len 6), IPv4 (len 4), Request who-has >> 192.168.2.240 >> >> (Broadcast) tell 192.168.2.1, length 46 >> >> >> >> >> >> *tcpdump -e -n -v -v -v -i eth0 * >> >> >> >> 11:41:44.953118 00:0c:29:56:d9:09 > 74:46:a0:9e:ff:a5, ethertype IPv4 >> >> (0x0800), length 166: (tos 0x10, ttl 64, id 10881, offset 0, flags >> [DF], >> >> proto TCP (6), length 152) >> >> 192.168.2.19.ssh > 192.168.2.99.53021: Flags [P.], cksum 0x8651 >> >> (incorrect -> 0x9f53), seq 2550993953:2550994065, ack 2916435463, win >> 146, >> >> length 112 >> >> 11:41:44.953804 74:46:a0:9e:ff:a5 > 00:0c:29:56:d9:09, ethertype IPv4 >> >> (0x0800), length 60: (tos 0x0, ttl 128, id 31471, offset 0, flags [DF], >> >> proto TCP (6), length 40) >> >> 192.168.2.99.53021 > 192.168.2.19.ssh: Flags [.], cksum 0x7b65 >> >> (correct), seq 1, ack 112, win 16121, length 0 >> >> 11:41:45.017729 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4 >> >> (0x0800), length 99: (tos 0x0, ttl 64, id 17044, offset 0, flags [DF], >> >> proto TCP (6), length 85) >> >> 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [P.], cksum 0x7339 >> >> (correct), seq 2968653045:2968653078, ack 1461763310, win 123, options >> >> [nop,nop,TS val 222978 ecr 218783], length 33 >> >> 11:41:45.018242 00:0c:29:56:d9:09 > 00:0c:29:91:4c:ea, ethertype IPv4 >> >> (0x0800), length 78: (tos 0x0, ttl 64, id 47485, offset 0, flags [DF], >> >> proto TCP (6), length 64) >> >> 192.168.2.19.amqp > 192.168.2.22.45167: Flags [P.], cksum 0x85ac >> >> (incorrect -> 0x4c5d), seq 1:13, ack 33, win 330, options 
[nop,nop,TS val
>> >> 223746 ecr 222978], length 12
>> >> 11:41:45.018453 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4
>> >> (0x0800), length 66: (tos 0x0, ttl 64, id 17045, offset 0, flags [DF],
>> >> proto TCP (6), length 52)
>> >> 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [.], cksum 0x8701
>> >> (correct), seq 33, ack 13, win 123, options [nop,nop,TS val 222979 ecr
>> >> 223746], length 0
>> >>
>> >>
>> >> 2015-04-17 10:42 GMT+02:00 Miguel Angel Ajo Pelayo <
>> mangelajo at redhat.com>
>> >> :
>> >>
>> >>> To troubleshoot this I'd recommend you
>> >>>
>> >>> 1) doing a tcpdump in the controller node, on the external interface
>> >>> attached to br-ex,
>> >>> and find what's going on,
>> >>>
>> >>> tcpdump -e -n -v -v -v -i ethX
>> >>>
>> >>> note: as per your schema you may use an "external flat network"
>> >>> (no segmentation) from your network/controller node, so the packets
>> >>> going out from the router
>> >>> should not be tagged in your tcpdump.
>> >>>
>> >>> If you set the external network as vlan tagged, you may have to change
>> >>> it into flat. (such operation
>> >>> may require removing the floating ips from instances, removing legs from
>> >>> router (External, and internal),
>> >>> and then removing the router, then the external network/subnet).
>> >>>
>> >>>
>> >>> In a separate terminal, it may help to:
>> >>>
>> >>> 2) look for the router netns:
>> >>>
>> >>> # ip netns
>> >>> qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935
>> >>>
>> >>> note: this is the "virtual router"; it lives in a network namespace,
>> >>> which is another isolated
>> >>> instance of the linux networking stack. You will find the interfaces
>> >>> and IPs attached with
>> >>> the following command:
>> >>>
>> >>> # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 ip a
>> >>>
>> >>> (here look for the external leg of the router, it will have the external
>> >>> router IP and the floating ip attached)
>> >>> it should look like qg-xxxxxxxx-xx
>> >>>
>> >>>
>> >>> # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 tcpdump -e
>> >>> -n -v -v -v -i qg-xxxxxxx-xx
>> >>>
>> >>>
>> >>> Please tell us how it is going.
>> >>>
>> >>>
>> >>> > On 17/4/2015, at 9:48, pauline phaure wrote:
>> >>> >
>> >>> > Hello everyone,
>> >>> > I have some troubles making the floating IP work. When I associate a
>> >>> floating IP to my instance, the instance can reach the neutron-router and
>> >>> ping but cannot ping the external gateway. any ideas where to look?
>> >>> >
>> >>> >
>> >>> > _______________________________________________
>> >>> > Rdo-list mailing list
>> >>> > Rdo-list at redhat.com
>> >>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >>> >
>> >>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>> >>>
>> >>> Miguel Angel Ajo
>> >>>
>> >>
>> >
>> > Miguel Angel Ajo
>> >
>> >
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL: <
>> https://www.redhat.com/archives/rdo-list/attachments/20150417/2f2d9cfd/attachment.html
>> >
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Fri, 17 Apr 2015 10:00:11 -0400
>> From: Lars Kellogg-Stedman
>> To: Omri Hochman
>> Cc: rdo-list at redhat.com
>> Subject: Re: [Rdo-list] Help getting started with rdo-manager
>> Message-ID: <20150417140011.GF18285 at redhat.com>
>> Content-Type: text/plain; charset="us-ascii"
>>
>> > I think you should have checked that in /etc/edeploy/state you have
>> > --> : [('control', 1), ('compute', '*')]
>>
>> Omri,
>>
>> Thanks, that did get me one step closer.
>>
>> The deploy is still failing, but now it's due to the following
>> resource:
>>
>> | ControllerNodesPostDeployment | 9a24f414-4e35-4d27-b550-77d47651f56a
>> | OS::TripleO::ControllerPostDeployment | CREATE_FAILED |
>> 2015-04-17T01:28:32Z |
>>
>> --
>> Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
>> Cloud Engineering / OpenStack | http://blog.oddbit.com/
>>
>> -------------- next part --------------
>> A non-text attachment was scrubbed...
>> Name: signature.asc
>> Type: application/pgp-signature
>> Size: 819 bytes
>> Desc: not available
>> URL: <
>> https://www.redhat.com/archives/rdo-list/attachments/20150417/6e784186/attachment.bin
>> >
>>
>> ------------------------------
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>>
>> End of Rdo-list Digest, Vol 25, Issue 28
>> ****************************************
>>
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 11855 bytes
Desc: not available
URL:

From mangelajo at redhat.com Fri Apr 17 15:00:17 2015
From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo)
Date: Fri, 17 Apr 2015 17:00:17 +0200
Subject: [Rdo-list] Problem with floating IP
In-Reply-To:
References: <075CDAEE-E429-4143-9CD5-2EA43FA2E9B7@redhat.com> <2BBB7E5E-DC92-49CE-AD45-63D50394F4E6@redhat.com>
Message-ID: <18093BF5-5F05-4F3A-8F1A-67694F65CDD3@redhat.com>

So, were the traces taken on the ESXi setup, or on the bare metal node?

Your initial message had a diagram with ESXi, so from your response, now
I'm not sure if we fixed it for ESXi and bare metal still doesn't work,
or if we didn't fix anything.

Best,
Miguel Ángel.

> On 17/4/2015, at 15:55, pauline phaure wrote:
>
> Thank you Miguel, my openstack is working fine on ESXi. But when I try to do the same things with my openstack installation on real servers it doesn't work. I'm still stuck with the br-ex problem and the vlans in which my interfaces are. br-ex can't reach the outside because eth0 is in a vlan. Any idea?
>
> 2015-04-17 14:23 GMT+02:00 Miguel Angel Ajo Pelayo >:
>
> The traffic shows that neutron is doing the right thing,
>
> Check that your ESX is not applying any MAC anti-spoofing on the
> vmware vswitch, it looks like the ARP requests could be blocked at switch level
> since every qrouter is going to have its own MAC address (separate from your own
> VM one).
> > Otherwise connect other machine to the physical switch on vlan30 and check if > the ARP requests (it?s broadcast traffic) are arriving to confirm my above theory. > > > >> On 17/4/2015, at 13:51, pauline phaure > wrote: >> >> i found these lines on the input file of tcpdump -e -n -v -v -v -i eth0 >> >> 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 >> 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 >> 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 >> 192.168.2.72 > 10.0.0.4 : ICMP host 192.168.2.1 unreachable, length 92 >> 11:41:46.661008 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:41:47.663307 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:41:48.665301 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> >> >> 2015-04-17 11:52 GMT+02:00 pauline phaure >: >> hey Miguel, thank you for your response, plz found below the output of the commands: >> >> >> ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 ip a >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> inet 127.0.0.1/8 scope host lo >> valid_lft forever preferred_lft forever >> inet6 ::1/128 scope host >> valid_lft forever preferred_lft forever >> 12: qr-207805ae-39: mtu 1500 qdisc noqueue state UNKNOWN >> link/ether fa:16:3e:1c:62:a8 brd ff:ff:ff:ff:ff:ff >> inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-207805ae-39 >> valid_lft forever preferred_lft forever >> inet6 fe80::f816:3eff:fe1c:62a8/64 scope link >> valid_lft forever preferred_lft forever >> 13: qg-52b4d686-58: mtu 1500 qdisc noqueue state UNKNOWN >> link/ether fa:16:3e:34:d5:6e brd ff:ff:ff:ff:ff:ff >> inet 192.168.2.70/24 brd 192.168.2.255 scope global qg-52b4d686-58 >> valid_lft forever preferred_lft forever >> inet 192.168.2.72/32 brd 192.168.2.72 scope global qg-52b4d686-58 >> valid_lft forever preferred_lft forever >> inet6 fe80::f816:3eff:fe34:d56e/64 scope link >> valid_lft forever preferred_lft forever >> >> >> ip netns exec qrouter-f7194985-eb13-41bf-8158-f0e78fc932c4 tcpdump -e -n -v -v -v -i qg-52b4d686-58 >> >> equest who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:49:19.705378 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:49:20.707292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:49:22.706910 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:49:23.707412 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:49:24.709292 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:49:26.710264 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 
11:49:27.711297 fa:16:3e:34:d5:6e > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.1 tell 192.168.2.72, length 28 >> 11:49:28.002005 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.42 (Broadcast) tell 192.168.2.1, length 46 >> 11:49:28.002064 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 58298, offset 0, flags [DF], proto ICMP (1), length 84) >> 192.168.2.72 > 192.168.2.1 : ICMP echo request, id 19201, seq 494, length 64 >> 11:49:28.002079 fa:16:3e:34:d5:6e > 00:23:48:9e:85:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 58299, offset 0, flags [DF], proto ICMP (1), length 84) >> 192.168.2.72 > 192.168.2.1 : ICMP echo request, id 19201, seq 495, length 64 >> 11:49:28.040439 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.5 (Broadcast) tell 192.168.2.1, length 46 >> 11:49:28.079105 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.20 (Broadcast) tell 192.168.2.1, length 46 >> 11:49:28.115671 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.34 (Broadcast) tell 192.168.2.1, length 46 >> 11:49:28.179014 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.22 (Broadcast) tell 192.168.2.1, length 46 >> 11:49:28.223391 00:23:48:9e:85:7c > Broadcast, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.2.240 (Broadcast) tell 192.168.2.1, length 46 >> >> >> tcpdump -e -n -v -v -v -i eth0 >> >> 11:41:44.953118 00:0c:29:56:d9:09 > 74:46:a0:9e:ff:a5, ethertype IPv4 (0x0800), length 166: (tos 0x10, ttl 64, id 10881, offset 0, flags [DF], proto TCP (6), length 152) >> 192.168.2.19.ssh > 192.168.2.99.53021: Flags [P.], cksum 0x8651 (incorrect -> 0x9f53), seq 2550993953:2550994065, ack 2916435463, win 146, length 112 >> 11:41:44.953804 74:46:a0:9e:ff:a5 > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 60: (tos 0x0, ttl 128, id 31471, offset 0, flags [DF], proto TCP (6), length 40) >> 192.168.2.99.53021 > 192.168.2.19.ssh: Flags [.], cksum 0x7b65 (correct), seq 1, ack 112, win 16121, length 0 >> 11:41:45.017729 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 99: (tos 0x0, ttl 64, id 17044, offset 0, flags [DF], proto TCP (6), length 85) >> 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [P.], cksum 0x7339 (correct), seq 2968653045:2968653078, ack 1461763310, win 123, options [nop,nop,TS val 222978 ecr 218783], length 33 >> 11:41:45.018242 00:0c:29:56:d9:09 > 00:0c:29:91:4c:ea, ethertype IPv4 (0x0800), length 78: (tos 0x0, ttl 64, id 47485, offset 0, flags [DF], proto TCP (6), length 64) >> 192.168.2.19.amqp > 192.168.2.22.45167: Flags [P.], cksum 0x85ac (incorrect -> 0x4c5d), seq 1:13, ack 33, win 330, options [nop,nop,TS val 223746 ecr 222978], length 12 >> 11:41:45.018453 00:0c:29:91:4c:ea > 00:0c:29:56:d9:09, ethertype IPv4 (0x0800), length 66: (tos 0x0, ttl 64, id 17045, offset 0, flags [DF], proto TCP (6), length 52) >> 192.168.2.22.45167 > 192.168.2.19.amqp: Flags [.], cksum 0x8701 (correct), seq 33, ack 13, win 123, options [nop,nop,TS val 222979 ecr 223746], length 0 >> >> >> >> 2015-04-17 10:42 GMT+02:00 Miguel Angel Ajo Pelayo >: >> To troubleshoot this 
I'd recommend you
>>
>> 1) doing a tcpdump in the controller node, on the external interface attached to br-ex,
>> and find what's going on,
>>
>> tcpdump -e -n -v -v -v -i ethX
>>
>> note: as per your schema you may use an "external flat network"
>> (no segmentation) from your network/controller node, so the packets going out from the router
>> should not be tagged in your tcpdump.
>>
>> If you set the external network as vlan tagged, you may have to change it into flat. (such operation
>> may require removing the floating ips from instances, removing legs from router (External, and internal),
>> and then removing the router, then the external network/subnet).
>>
>>
>> In a separate terminal, it may help to:
>>
>> 2) look for the router netns:
>>
>> # ip netns
>> qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935
>>
>> note: this is the "virtual router"; it lives in a network namespace, which is another isolated
>> instance of the linux networking stack. You will find the interfaces and IPs attached with
>> the following command:
>>
>> # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 ip a
>>
>> (here look for the external leg of the router, it will have the external router IP and the floating ip attached)
>> it should look like qg-xxxxxxxx-xx
>>
>>
>> # ip netns exec qrouter-8f2f7e69-02c3-4b75-9b25-e23b64757935 tcpdump -e -n -v -v -v -i qg-xxxxxxx-xx
>>
>>
>> Please tell us how it is going.
>>
>>
>> > On 17/4/2015, at 9:48, pauline phaure > wrote:
>> >
>> > Hello everyone,
>> > I have some troubles making the floating IP work. When I associate a floating IP to my instance, the instance can reach the neutron-router and ping but cannot ping the external gateway. any ideas where to look?
>> >
>> >
>> > _______________________________________________
>> > Rdo-list mailing list
>> > Rdo-list at redhat.com
>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >
>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>> Miguel Angel Ajo
>>
> > Miguel Angel Ajo
> >

Miguel Angel Ajo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lars at redhat.com Fri Apr 17 15:08:25 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Fri, 17 Apr 2015 11:08:25 -0400
Subject: [Rdo-list] DNS resolver problems w/ instack-virt-setup
Message-ID: <20150417150825.GG18285@redhat.com>

On the overcloud nodes deployed by instack-virt-setup,
/etc/resolv.conf looks like this:

    search openstacklocal
    nameserver 192.168.122.1

That's not a useful address for either of these nodes, on which
external connectivity -- at least on the controller -- is via
eth0/br-ex on the 192.0.2.0/24 network. Even worse, on the controller
"192.168.122.1" is actually the address of the local "virbr0"
interface configured by libvirt, so DNS requests aren't going to go
anywhere useful.

There are obviously a number of ways to fix this, including setting up
dnsmasq on the undercloud node and using that instead. How is this
supposed to work? Did the deployment scripts screw up, or did I miss
something in the docs?

-------------- next part --------------
A non-text attachment was scrubbed...
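For reference, what I would have *expected* to see there -- assuming the
undercloud is meant to provide DNS for the 192.0.2.0/24 network, which is
my guess about the intent, not something the docs confirm -- is more like:

    search openstacklocal
    nameserver 192.0.2.1

with a dnsmasq on the undercloud forwarding queries upstream.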
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:

From christian at berendt.io Fri Apr 17 15:45:48 2015
From: christian at berendt.io (Christian Berendt)
Date: Fri, 17 Apr 2015 17:45:48 +0200
Subject: [Rdo-list] [packaging] /etc/cinder/cinder.conf missing in openstack-cinder
In-Reply-To: <55311AC8.6040508@berendt.io>
References: <55311AC8.6040508@berendt.io>
Message-ID: <55312AAC.7050509@berendt.io>

On 04/17/2015 04:38 PM, Christian Berendt wrote:
> Only the file /usr/share/cinder/cinder-dist.conf is included in the
> package openstack-cinder. The file /etc/cinder/cinder.conf is not
> included in the package and /usr/share/cinder/cinder-dist.conf is not
> really complete.
>
> Can you please add the file generated with tox -egenconfig
> (https://github.com/openstack/cinder/blob/master/etc/cinder/README-cinder.conf.sample)
> to openstack-cinder?

Opened a bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1212900

Christian.

From jslagle at redhat.com Fri Apr 17 16:06:38 2015
From: jslagle at redhat.com (James Slagle)
Date: Fri, 17 Apr 2015 12:06:38 -0400
Subject: [Rdo-list] DNS resolver problems w/ instack-virt-setup
In-Reply-To: <20150417150825.GG18285@redhat.com>
References: <20150417150825.GG18285@redhat.com>
Message-ID: <20150417160638.GP29586@teletran-1.redhat.com>

On Fri, Apr 17, 2015 at 11:08:25AM -0400, Lars Kellogg-Stedman wrote:
> On the overcloud nodes deployed by instack-virt-setup,
> /etc/resolv.conf looks like this:
>
> search openstacklocal
> nameserver 192.168.122.1
>
> That's not a useful address for either of these nodes, on which
> external connectivity -- at least on the controller -- is via
> eth0/br-ex on the 192.0.2.0/24 network. Even worse, on the controller
> "192.168.122.1" is actually the address of the local "virbr0"
> interface configured by libvirt, so DNS requests aren't going to go
> anywhere useful.
>
> There are obviously a number of ways to fix this, including setting up
> dnsmasq on the undercloud node and using that instead. How is this
> supposed to work? Did the deployment scripts screw up, or did I miss
> something in the docs?

I think we might have a couple of patches posted that are in review that might
help with these issues:

https://review.gerrithub.io/230339
https://review.gerrithub.io/230143

Also make sure ip forwarding is enabled on the host machine:
sudo sysctl net.ipv4.ip_forward=1
(This needs to get added to the documentation).

Thanks for helping to try things out.

--
-- James Slagle
--

From marius at remote-lab.net Fri Apr 17 16:15:37 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Fri, 17 Apr 2015 18:15:37 +0200
Subject: [Rdo-list] DNS resolver problems w/ instack-virt-setup
In-Reply-To: <20150417160638.GP29586@teletran-1.redhat.com>
References: <20150417150825.GG18285@redhat.com> <20150417160638.GP29586@teletran-1.redhat.com>
Message-ID:

Hi Lars,

192.168.122.1 is set on the virbr0 interface on the host (default
libvirt net) where the under/overcloud VMs are running. The undercloud
node (instack VM) has two interfaces: one in the virbr0 bridge
(192.168.122.0/24 subnet) and the other in the brbm ovs bridge
(192.0.2.0/24 subnet). The overcloud nodes have one interface in the
brbm bridge and route the traffic through the undercloud node. You can
check that the default gw on overcloud nodes is 192.0.2.1 (eth1 of the
instack VM).
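For example, roughly (the prompt and interface name are illustrative,
they may differ on your setup):

    $ ip route | grep ^default
    default via 192.0.2.1 dev eth0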
The undercloud node masquerades all traffic coming from 192.0.2.0/24 so
the overcloud nodes can get external connectivity, including to
192.168.122.1 which handles the DNS queries.

On Fri, Apr 17, 2015 at 6:06 PM, James Slagle wrote:
> On Fri, Apr 17, 2015 at 11:08:25AM -0400, Lars Kellogg-Stedman wrote:
>> On the overcloud nodes deployed by instack-virt-setup,
>> /etc/resolv.conf looks like this:
>>
>> search openstacklocal
>> nameserver 192.168.122.1
>>
>> That's not a useful address for either of these nodes, on which
>> external connectivity -- at least on the controller -- is via
>> eth0/br-ex on the 192.0.2.0/24 network. Even worse, on the controller
>> "192.168.122.1" is actually the address of the local "virbr0"
>> interface configured by libvirt, so DNS requests aren't going to go
>> anywhere useful.
>>
>> There are obviously a number of ways to fix this, including setting up
>> dnsmasq on the undercloud node and using that instead. How is this
>> supposed to work? Did the deployment scripts screw up, or did I miss
>> something in the docs?
>
> I think we might have a couple of patches posted that are in review that might
> help with these issues:
>
> https://review.gerrithub.io/230339
> https://review.gerrithub.io/230143
>
> Also make sure ip forwarding is enabled on the host machine:
> sudo sysctl net.ipv4.ip_forward=1
> (This needs to get added to the documentation).
>
> Thanks for helping to try things out.
>
> --
> -- James Slagle
> --
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From lars at redhat.com Fri Apr 17 17:09:02 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Fri, 17 Apr 2015 13:09:02 -0400
Subject: [Rdo-list] DNS resolver problems w/ instack-virt-setup
In-Reply-To:
References: <20150417150825.GG18285@redhat.com> <20150417160638.GP29586@teletran-1.redhat.com>
Message-ID: <20150417170902.GH18285@redhat.com>

On Fri, Apr 17, 2015 at 06:15:37PM +0200, Marius Cornea wrote:
> 192.168.122.1 is set on the virbr0 interface on the host (default
> libvirt net) where under/overcloud VMs are running.

It is also the address of the virbr0 interface *on the overcloud
nodes*.

> (192.0.2.0/24 subnet). The overcloud nodes have one interface in the
> brbm bridge and route the traffic through the undercloud node. You can
> check that default gw on overcloud nodes is 192.0.2.1 (eth1 of instack
> VM).

That confirms what I said in my previous email:

> That's not a useful address for either of these nodes, on which
> external connectivity -- at least on the controller -- is via
> eth0/br-ex on the 192.0.2.0/24 network.

> The undercloud node masquerades all traffic coming from
> 192.0.2.0/24 so the overcloud nodes can get external connectivity,
> including to 192.168.122.1 which handles the DNS queries.

It doesn't. First, because 192.168.122.1 is set on the virbr0
interface on the overcloud controller node, traffic to this address
never leaves the host. While the undercloud node does have masquerade
rules in place:

# iptables -t nat -S | grep -i masquerade
-A POSTROUTING -s 192.0.2.0/24 -o eth0 -j MASQUERADE
-A BOOTSTACK_MASQ -s 192.0.2.0/24 ! -d 192.0.2.0/24 -j MASQUERADE

It doesn't have ip forwarding enabled:

# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 0

No forwarding, so no masquerading.
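(Assuming that's the root cause, I'd expect something along these lines on
the undercloud to make the masquerade rules actually take effect -- a
guess on my part, I haven't re-run the deploy to verify:

    sudo sysctl -w net.ipv4.ip_forward=1
    # persist across reboots:
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/98-ip-forward.conf

That still wouldn't explain why resolv.conf points at 192.168.122.1 in
the first place, though.)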
--
Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:

From lars at redhat.com Fri Apr 17 17:11:30 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Fri, 17 Apr 2015 13:11:30 -0400
Subject: [Rdo-list] DNS resolver problems w/ instack-virt-setup
In-Reply-To: <20150417160638.GP29586@teletran-1.redhat.com>
References: <20150417150825.GG18285@redhat.com> <20150417160638.GP29586@teletran-1.redhat.com>
Message-ID: <20150417171130.GI18285@redhat.com>

On Fri, Apr 17, 2015 at 12:06:38PM -0400, James Slagle wrote:
> Also make sure ip forwarding is enabled on the host machine:
> sudo sysctl net.ipv4.ip_forward=1
> (This needs to get added to the documentation).

ip_forward is enabled on the host (did instack-virt-setup do that?
Because I didn't set that up explicitly). But it was *not* enabled on
the undercloud node, which is half of the problem I encountered with
DNS lookups.

--
Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:

From whayutin at redhat.com Fri Apr 17 17:56:26 2015
From: whayutin at redhat.com (whayutin)
Date: Fri, 17 Apr 2015 13:56:26 -0400
Subject: [Rdo-list] [CI] rdo kilo passing CI
Message-ID: <1429293386.2724.9.camel@redhat.com>

FYI

delorean repo used
baseurl=http://trunk.rdoproject.org/centos70/fd/8b/fd8bf5dd60106a0cb64dd0424d519193e1a82162_e01c0ee0

There are still some selinux errors, so to be successful at this time,
use permissive. Bugs can be found in rdo trunk.

Big thanks to Alan and his team :)

From hamidnoroozitux at gmail.com Sat Apr 18 08:33:36 2015
From: hamidnoroozitux at gmail.com (Hamid Noroozi)
Date: Sat, 18 Apr 2015 10:33:36 +0200
Subject: [Rdo-list] RDO Juno + orchestration
Message-ID: <553216E0.6090603@gmail.com>

Hi all,

I'm trying RDO Juno on a single blade running RHEL 7.1. After a fresh
install using packstack, I don't see orchestration services like
heat installed. Am I doing it wrong? Do I need to enable something
to include all openstack services/projects in my installation?

--
Regards,
Hamid

From ak at cloudssky.com Sat Apr 18 13:27:20 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Sat, 18 Apr 2015 15:27:20 +0200
Subject: [Rdo-list] RDO Juno + orchestration
In-Reply-To: <553216E0.6090603@gmail.com>
References: <553216E0.6090603@gmail.com>
Message-ID:

Hi Hamid,

was heat enabled in your packstack answer file?
Do you have an /etc/heat/heat.conf file?

Does pgrep -l heat say something like this?

[root at xxx ~]# pgrep -l heat

2965 heat-api

2997 heat-api-cfn

3016 heat-engine

And have a look here:

http://www.server-world.info/en/note?os=CentOS_7&p=openstack_juno2&f=6

Best,
Arash

On Sat, Apr 18, 2015 at 10:33 AM, Hamid Noroozi wrote:

> Hi all,
>
> I'm trying RDO Juno on a single blade running RHEL 7.1. After a fresh
> install using packstack, I don't see orchestration services like
> heat installed. Am I doing it wrong? Do I need to enable something
> to include all openstack services/projects in my installation?
>
> --
> Regards,
> Hamid
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mohammed.arafa at gmail.com Sat Apr 18 13:51:06 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Sat, 18 Apr 2015 09:51:06 -0400
Subject: [Rdo-list] RDO Juno + orchestration
In-Reply-To:
References: <553216E0.6090603@gmail.com>
Message-ID:

Hamid,

you can edit your packstack answer file to enable Heat and then rerun
the packstack installation.

On Sat, Apr 18, 2015 at 9:27 AM, Arash Kaffamanesh wrote:

> Hi Hamid,
>
> was heat enabled in your packstack answer file?
> Do you have an /etc/heat/heat.conf file?
>
> Does pgrep -l heat say something like this?
>
> [root at xxx ~]# pgrep -l heat
>
> 2965 heat-api
>
> 2997 heat-api-cfn
>
> 3016 heat-engine
>
> And have a look here:
>
> http://www.server-world.info/en/note?os=CentOS_7&p=openstack_juno2&f=6
>
> Best,
> Arash
>
>
> On Sat, Apr 18, 2015 at 10:33 AM, Hamid Noroozi
> wrote:
>
>> Hi all,
>>
>> I'm trying RDO Juno on a single blade running RHEL 7.1. After a fresh
>> install using packstack, I don't see orchestration services like
>> heat installed. Am I doing it wrong? Do I need to enable something
>> to include all openstack services/projects in my installation?
>>
>> --
>> Regards,
>> Hamid
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

--
*805010942448935*
*GR750055912MA*
*Link to me on LinkedIn *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ak at cloudssky.com Sat Apr 18 16:45:51 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Sat, 18 Apr 2015 18:45:51 +0200
Subject: [Rdo-list] RDO Test Days for OpenStack Kilo release, status Murano on RDO
Message-ID:

Hello everyone,

Is there any known date for RDO Test Days for OpenStack Kilo release?

And is Murano going to be provided on RDO Kilo release (through) packstack?

Thanks!
Arash
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ak at cloudssky.com Sat Apr 18 19:57:02 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Sat, 18 Apr 2015 21:57:02 +0200
Subject: [Rdo-list] [CI] rdo kilo passing CI
In-Reply-To: <1429293386.2724.9.camel@redhat.com>
References: <1429293386.2724.9.camel@redhat.com>
Message-ID:

Hi,

I'm trying to get delorean running and have set selinux to permissive,
but I'm still getting:

10.0.0.16_prescript.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]

ERROR : Error appeared during Puppet run: 10.0.0.16_prescript.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-selinux'
returned 1: Error: No matching Packages to list

Any advice is much appreciated.

Thx,
Arash

On Fri, Apr 17, 2015 at 7:56 PM, whayutin wrote:

> FYI
>
> delorean repo used
> baseurl=
> http://trunk.rdoproject.org/centos70/fd/8b/fd8bf5dd60106a0cb64dd0424d519193e1a82162_e01c0ee0
>
> There are still some selinux errors, so to be successful at this time,
> use permissive. Bugs can be found in rdo trunk.
>
> Big thanks to Alan and his team :)
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mohammed.arafa at gmail.com Sun Apr 19 03:34:42 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Sat, 18 Apr 2015 23:34:42 -0400
Subject: [Rdo-list] security q: horizon and ssl
Message-ID:

hi all

wondering when this security bug is slated to be resolved; it's been open
since November 2014:

*Bug 1161915* - horizon console uses http when horizon is set to use ssl

thanks

--
*805010942448935*
*GR750055912MA*
*Link to me on LinkedIn *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vedsarkushwaha at gmail.com Sun Apr 19 10:09:23 2015
From: vedsarkushwaha at gmail.com (Vedsar Kushwaha)
Date: Sun, 19 Apr 2015 15:39:23 +0530
Subject: [Rdo-list] openvswitch problem
Message-ID:

I installed openstack juno on centos7. Then I updated centos7, using yum
update. Now when I restart centos7, I'm not able to ping any other
computer on the same network. (Earlier I was able to ping.) Packets are
not going outside my computer. The openvswitch status is "running".

But when I restart openvswitch, everything starts working. The
openvswitch status still shows "running" after the restart.

Can someone explain to me what the problem is?

--
Vedsar Kushwaha
M.Tech-Computational Science
Indian Institute of Science
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ak at cloudssky.com Sun Apr 19 11:14:01 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Sun, 19 Apr 2015 13:14:01 +0200
Subject: [Rdo-list] RDO Test Days for OpenStack Kilo release, status Murano on RDO
In-Reply-To:
References:
Message-ID:

Hi,

I found the following page to install Murano:

http://murano.readthedocs.org/en/latest/install/index.html

And I didn't find any packages in the delorean repo for murano.
So the answer to my second question is most probably "No".

Thx,
Arash

On Sat, Apr 18, 2015 at 6:45 PM, Arash Kaffamanesh wrote:

> Hello everyone,
>
> Is there any known date for RDO Test Days for OpenStack Kilo release?
>
> And is Murano going to be provided on RDO Kilo release (through) packstack?
>
> Thanks!
> Arash
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jslagle at redhat.com Sun Apr 19 19:10:35 2015 From: jslagle at redhat.com (James Slagle) Date: Sun, 19 Apr 2015 15:10:35 -0400 Subject: [Rdo-list] rdo-manager installs failing at horizon's manage.py step Message-ID: <20150419191035.GQ29586@teletran-1.redhat.com> FYI, all rdo-manager installs are currently failing during the Undercloud installation with: 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: CommandError: An error occured during rendering /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/index.html: Error evaluating expression: 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: $brand-danger 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: From /usr/share/openstack-dashboard/static/dashboard/scss/_variables.scss:1 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: ...imported from :0 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: Traceback: 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: File "/usr/lib64/python2.7/site-packages/scss/expression.py", line 130, in evaluate_expression 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: return ast.evaluate(self, divide=divide) 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: File "/usr/lib64/python2.7/site-packages/scss/expression.py", line 359, in evaluate 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: raise SyntaxError("Undefined variable: '%s'." % self.name) 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: SyntaxError: Undefined variable: '$brand-danger'. 
16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: Found 'compress' tags in: 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/horizon/templates/horizon/_conf.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/index.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/undeploy_confirmation.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/scale_out.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/deploy_confirmation.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/_workflow_base.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_boxes/templates/tuskar_boxes/overview/index.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/register.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/post_deploy_init.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/_fullscreen_workflow_base.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/index.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/base.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/base_detail.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/detail.html 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: Compressing... 
16:26:11 Error: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: Failed to call refresh: /usr/share/openstack-dashboard/manage.py compress returned 1 instead of one of [0]
16:26:11 Error: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: /usr/share/openstack-dashboard/manage.py compress returned 1 instead of one of [0]

Initial investigation indicates it's probably related to this Horizon packaging
change from Friday:

https://review.gerrithub.io/#/c/230349/

It looks like that packaging change was included in the updated:
http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/

--
-- James Slagle
--

From mrunge at redhat.com Mon Apr 20 06:14:12 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Mon, 20 Apr 2015 08:14:12 +0200
Subject: [Rdo-list] security q: horizon and ssl
In-Reply-To:
References:
Message-ID: <55349934.8030903@redhat.com>

On 19/04/15 05:34, Mohammed Arafa wrote:
> hi all
>
> wondering when this security bug is slated to be resolved; it's been open
> since November 2014:
> *Bug 1161915*
> - horizon console uses http when horizon is set to use ssl
>

As far as I understand here, it's a local configuration issue on your
side. The horizon console URL is returned from nova; it is the
nova-novnc URL.

Please change it in your /etc/nova/nova.conf to something like

novncproxy_base_url=https://192.168.36.10:6080/vnc_auto.html

Matthias

From mrunge at redhat.com Mon Apr 20 06:23:30 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Mon, 20 Apr 2015 08:23:30 +0200
Subject: [Rdo-list] rdo-manager installs failing at horizon's manage.py step
In-Reply-To: <20150419191035.GQ29586@teletran-1.redhat.com>
References: <20150419191035.GQ29586@teletran-1.redhat.com>
Message-ID: <55349B62.1060806@redhat.com>

On 19/04/15 21:10, James Slagle wrote:
> FYI, all rdo-manager installs are currently failing during the Undercloud
> installation with:
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/horizon/templates/horizon/_conf.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/index.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/undeploy_confirmation.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/scale_out.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/deploy_confirmation.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/_workflow_base.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_boxes/templates/tuskar_boxes/overview/index.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/register.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns:
/usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/post_deploy_init.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/_fullscreen_workflow_base.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/index.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/base.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/base_detail.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/detail.html
> 16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: Compressing...
> 16:26:11 Error: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: Failed to call refresh: /usr/share/openstack-dashboard/manage.py compress returned 1 instead of one of [0]
> 16:26:11 Error: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: /usr/share/openstack-dashboard/manage.py compress returned 1 instead of one of [0]
>
> Initial investigation indicates it's probably related to this Horizon packaging
> change from Friday:
>
> https://review.gerrithub.io/#/c/230349/

The issue is produced in tuskar-ui.

The config change you mentioned was a change in compression, to remove
the compression step from the package build. That should make it easier
for anyone to package add-ons to horizon.

It was briefly discussed on #rdo and on this list last week
https://www.redhat.com/archives/rdo-list/2015-April/msg00046.html
(and a few other mentions)

For horizon itself, I saw some changes in static files required,
because their location changed during the kilo cycle. That's the pain
we're going through when packaging stuff depending on Horizon upstream.

Matthias

From mrunge at redhat.com Mon Apr 20 07:16:42 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Mon, 20 Apr 2015 09:16:42 +0200
Subject: [Rdo-list] Heads up on Horizon changes in RDO for Kilo release
Message-ID: <5534A7DA.9070209@redhat.com>

Hello,

tl;dr: About packaging changes in Horizon. If you did not change
horizon and you're not packaging plugins for Horizon, you don't need
to change anything.

Kilo is nearing its release and I did some changes on horizon's
packaging.

* Static file location was on /static before, which is not consistent
with /dashboard. Since Kilo, Horizon now uses a config option to
address all files belonging to Horizon; in our case, everything is now
under /dashboard, and static files under /dashboard/static. For
reference, it used to be /dashboard and /static, which doesn't match
that well when sharing a web server.

* Before last week, static files were compressed at package build
time, which required addons to do ugly hacks at install time.
Furthermore, if there was an issue with static files, one would have
needed to run some steps manually or to rebuild horizon as a whole.
Luckily, that never happened. Now I have implemented a small hook into
the httpd systemd unit to rebuild compressed files at httpd start
time. No hacks needed any more: after an update of static files, one
just needs to restart the web server (which would be required anyway)
and you're fine.
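For example, roughly (the update command is just an illustration, any
change to the dashboard's static files works the same way):

  yum update openstack-dashboard   # or install/update a dashboard plugin
  systemctl restart httpd          # the unit hook re-runs the compress step on start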
Matthias

From phaurep at gmail.com Mon Apr 20 08:15:33 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 10:15:33 +0200
Subject: [Rdo-list] no valid host was found: nova problem
Message-ID:

Hello guys,

When I try to launch an instance, I have this error: no valid host was
found. I found these errors in the log files of nova and rabbitmq but
don't know how to proceed:

*grep error nova-scheduler.log*
2015-04-17 15:41:25.119 1976 TRACE oslo.messaging._drivers.impl_rabbit error: [Errno 110] Connection timed out
2015-04-17 16:34:37.163 1959 TRACE oslo.messaging._drivers.impl_rabbit error: [Errno 110] Connection timed out

*grep error nova-api.log*
2015-04-20 01:13:52.005 4748 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: 868d85f2de3d49878af5b6f79d80e8de", "code": 404, "title": "Not Found"}}
2015-04-20 02:13:52.022 4741 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: b309783b561b44ef904b3c4a6ab474bd", "code": 404, "title": "Not Found"}}
2015-04-20 03:13:52.004 4737 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: 24eed0bb37b54445bd753359b234a0c4", "code": 404, "title": "Not Found"}}
2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 04:23:52.044 4742 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: 6d3e77f2cf75495297c34133bc765bd8", "code": 404, "title": "Not Found"}}
2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 05:33:52.024 4731 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: ccaee1e396614df5a753a331345e3e24", "code": 404, "title": "Not Found"}}
2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 06:43:51.887 4737 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: 6ad03759c8f446d09d4babde1aa7f63d", "code": 404, "title": "Not Found"}}
2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack [req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 07:53:52.037 4742 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: 0bcaad841985487bbfe4bce038b49c9e", "code": 404, "title": "Not Found"}}

*grep error /var/log/rabbitmq/rabbit\@localhost.log*

AMQP connection <0.865.0> (running), channel 0 - error: {amqp_error,connection_forced,
AMQP connection <0.1312.0> (running), channel 0 - error: {amqp_error,connection_forced,
AMQP connection <0.499.0> (running), channel 0 - error: {amqp_error,connection_forced,
AMQP connection <0.1657.0> (running), channel 0
- error: {amqp_error,connection_forced, AMQP connection <0.9349.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.761.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.2801.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.4447.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1065.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.712.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.618.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.664.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.9428.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1222.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.9530.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.9496.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.6746.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.9479.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.9280.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.9295.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1203.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1420.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.9513.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1048.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.741.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1734.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1390.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1400.0> (running), channel 0 - error: {amqp_error,connection_forced, AMQP connection <0.1430.0> (running), channel 0 - error: {amqp_error,connection_forced, -------------- next part -------------- An HTML attachment was scrubbed... URL: From phaurep at gmail.com Mon Apr 20 08:23:41 2015 From: phaurep at gmail.com (pauline phaure) Date: Mon, 20 Apr 2015 10:23:41 +0200 Subject: [Rdo-list] no valid host was found: nova problem In-Reply-To: References: Message-ID: grep ERROR nova-scheduler.log 2015-04-17 15:41:25.119 1976 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to consume message from queue: [Errno 110] Connection timed out 2015-04-17 16:34:37.163 1959 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to consume message from queue: [Errno 110] Connection timed out 2015-04-20 09:40:21.192 9683 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to consume message from queue: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown' 2015-04-20 09:40:22.217 9683 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 192.168.2.34:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds. 
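The nova-api errors below all say "auth_url was not provided to the
Neutron client", so I'm wondering if the [neutron] section of my
/etc/nova/nova.conf is incomplete. If I understand the Juno docs right,
it should contain something along these lines (addresses are just from
my controller, password redacted -- please correct me if this is the
wrong place to look):

[neutron]
url = http://192.168.2.34:9696
admin_username = neutron
admin_password = ...
admin_tenant_name = services
admin_auth_url = http://192.168.2.34:5000/v2.0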
grep ERROR nova-api.log
2015-04-19 02:23:52.043 4750 ERROR nova.api.openstack [req-6cdb1bcf-18e3-4366-bc33-8afad6a0be9e None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 04:33:52.051 4745 ERROR nova.api.openstack [req-a682ff4c-86ee-4d7e-97fa-df0741c4c5ef None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 05:43:52.048 4749 ERROR nova.api.openstack [req-afec3fef-95a9-4b62-b531-a8c686648dc7 None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 07:53:52.060 4743 ERROR nova.api.openstack [req-55074cb4-4dd9-442a-9e77-a6db54c41124 None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 11:03:52.072 4733 ERROR nova.api.openstack [req-e7ec8959-17b7-4c20-b81f-eb4457e8d23e None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 12:13:52.046 4747 ERROR nova.api.openstack [req-02fcdd32-18da-4e57-90fa-a269ac6446bb None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 15:23:52.078 4741 ERROR nova.api.openstack [req-581682f3-4677-49ba-b6cf-4e7cd253b47f None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 17:33:52.102 4750 ERROR nova.api.openstack [req-2f22a04c-5d84-4279-97e7-48ad99d99f17 None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 20:43:52.252 4741 ERROR nova.api.openstack [req-48dc8185-d826-48eb-a629-e442d7e56181 None] Caught error: auth_url was not provided to the Neutron client
2015-04-19 21:53:52.107 4745 ERROR nova.api.openstack [req-c3b66cf9-eb12-4ddc-b929-c108cbce8575 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 01:03:52.125 4737 ERROR nova.api.openstack [req-31034151-4a94-4b5a-a925-e975912ec93a None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: auth_url was not provided to the Neutron client
2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack [req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: auth_url was not provided to the Neutron client

2015-04-20 10:15 GMT+02:00 pauline phaure :
> hello guys,
> When I try to launch an instance, I get this error: "no valid host was found".
> I found these errors in the log files of nova and rabbitmq but don't know how to proceed:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From phaurep at gmail.com Mon Apr 20 08:50:11 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 10:50:11 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

I think it's an authentication issue, as my platform was working fine on Friday. Maybe the tokens given to nova, neutron... have expired? How can I fix it if so? Please help.

2015-04-20 10:23 GMT+02:00 pauline phaure :
> grep ERROR nova-scheduler.log
> [...]
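For what it's worth, the recurring "auth_url was not provided to the Neutron client" entries in the nova-api.log output above usually point at missing neutron credentials in nova.conf rather than at expired tokens. A typical Juno-era layout is sketched below; the addresses and names are placeholders, not values taken from this thread:

    [neutron]
    url = http://192.168.2.34:9696
    auth_strategy = keystone
    admin_auth_url = http://192.168.2.34:35357/v2.0
    admin_tenant_name = services
    admin_username = neutron
    admin_password = ...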
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From apevec at gmail.com Mon Apr 20 10:12:56 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 20 Apr 2015 12:12:56 +0200
Subject: [Rdo-list] Glance packaging and RDO Kilo
Message-ID:

Moving discussion to rdo-list

> I have spent some time during the weekend thinking about the options here.
> Looking at the requirements from all parties, I see the following:
>
> a) From the packaging side, we want to split Glance into several packages (glance, -common, -api and -registry).
> b) From the deployment side, we want the glance package to behave as it did before, i.e. pull in -api and -registry.
> c) From the puppet-glance side, if we have separate -api and -registry packages, we want to reflect that change in the Puppet modules and be able to configure -api and -registry independently. Also, this package split is already happening in Debian/Ubuntu, so removing distro-specific code is always welcome.
>
> With that in mind, I think the following options are the easiest ones to implement:
>
> 1- Split packages, with the following deps:
>
> * -api and -registry depend on -common
> * glance depends on -api and -registry
>
> This would require moving the existing content in glance (/usr/bin/glance-manage and /usr/bin/glance-control) into -common, so glance becomes a meta-package. With this, we would get b) and c), and most of a). The only drawback is that glance-manage and glance-control may not be a good fit for the -common package (Haikel, can you comment on this?). FWIW, this is how it is being packaged today in Debian.
> 2- Keep the old situation (no Glance package split)
>
> This obviously negates a), and keeps distro-specific code in c), but still works and does not break any existing code.
>
> Any thoughts?
>
> Regards,
> Javier

Thanks for the summary Javier, 1) is the right thing to do.

For the record, the history of this change was:

* https://review.gerrithub.io/229724 - Split openstack-glance into new subpackages
* https://review.gerrithub.io/229980 - Backward compatibility with previous all-in-one main package (a quickfix after I'd seen Packstack failures; in retrospect, that was where I should've introduced -common with glance-manage, as you propose)
** in the meantime, puppet-glance was adjusted to take advantage of the subpackages: https://review.openstack.org/172440 - Separate api and registry packages for Red Hat
* https://review.gerrithub.io/230356 - Revert dependencies between services and the main packages (a follow-up, because after the puppet-glance change glance-manage was not getting installed)
* https://review.gerrithub.io/230453 - Revert "Split openstack-glance into new subpackage" (merged to unblock tripleo)
** https://review.openstack.org/174872 - Revert "Separate api and registry packages for Red Hat" in puppet-glance

So the plan is to re-propose "Split openstack-glance into new subpackages", merge it only after it has been verified by all interested teams, and then re-propose "Separate api and registry packages for Red Hat" in puppet-glance.

Cheers,
Alan
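To illustrate what option 1 means at the spec level, the dependency arrangement would look roughly like this (a sketch only, not the actual openstack-glance.spec):

    %package common
    Summary: Common files for the OpenStack Image Service
    # glance-manage and glance-control would move into this subpackage

    %package api
    Summary: OpenStack Image Service API server
    Requires: openstack-glance-common = %{version}-%{release}

    %package registry
    Summary: OpenStack Image Service registry server
    Requires: openstack-glance-common = %{version}-%{release}

    # the main openstack-glance package becomes a meta-package:
    Requires: openstack-glance-api = %{version}-%{release}
    Requires: openstack-glance-registry = %{version}-%{release}

With these dependencies "yum install openstack-glance" still pulls in both services (requirement b), while puppet-glance can install and configure -api and -registry independently (requirement c).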
From mohammed.arafa at gmail.com Mon Apr 20 11:16:41 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Mon, 20 Apr 2015 07:16:41 -0400
Subject: [Rdo-list] security q: horizon and ssl
In-Reply-To: <55349934.8030903@redhat.com>
References: <55349934.8030903@redhat.com>
Message-ID:

Nope, didn't work: "connection reset" on both Firefox and Chrome. Konqueror gave me this:

The requested operation could not be completed
Connection to Server Refused

Details of the Request:
URL: https://192.168.0.250:6080/vnc_auto.html?token=51cdd3d1-7044-4f2d-9112-7bc718a1dc0b&title=spacewalk(0de4c005-bb32-4457-befa-a158fb5546e5)
Protocol: https
Date and Time: Monday, April 20, 2015 07:15 AM
Additional Information: 192.168.0.250: SSL negotiation failed
Description: The server 192.168.0.250 refused to allow this computer to make a connection.

Possible Causes:
The server, while currently connected to the Internet, may not be configured to allow requests.
The server, while currently connected to the Internet, may not be running the requested service (https).
A network firewall (a device which restricts Internet requests), either protecting your network or the network of the server, may have intervened, preventing this request.

Possible Solutions:
Try again, either now or at a later time.
Contact the administrator of the server for further assistance.
Contact your appropriate computer support system, whether the system administrator, or technical support group, for further assistance.

On Mon, Apr 20, 2015 at 2:14 AM, Matthias Runge wrote:
> On 19/04/15 05:34, Mohammed Arafa wrote:
>> hi all
>> wondering when this security bug is slated to be resolved, it's been open since November 2014
>> *Bug 1161915* - horizon console uses http when horizon is set to use ssl
>
> As far as I understand here, it's a local configuration issue on your side.
>
> The horizon console url is returned from nova, returning the nova-novnc url.
>
> Please change in your /etc/nova/nova.conf to something like
>
> novncproxy_base_url=https://192.168.36.10:6080/vnc_auto.html
>
> Matthias

--
*805010942448935*
*GR750055912MA*
*Link to me on LinkedIn *

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
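The "SSL negotiation failed" detail above suggests the noVNC proxy itself is not speaking SSL, so pointing novncproxy_base_url at https is only half of it. A rough sketch of the relevant nova.conf pieces (ssl_only/cert/key are standard nova options for the websocket proxies; the paths and address below are placeholders):

    [DEFAULT]
    # URL handed to browsers for console access; must be https when
    # horizon is served over SSL, or clients will refuse the connection
    novncproxy_base_url=https://192.168.0.250:6080/vnc_auto.html
    # make nova-novncproxy terminate SSL itself
    ssl_only=true
    cert=/etc/nova/ssl/nova.crt
    key=/etc/nova/ssl/nova.key

The nova-novncproxy service has to be restarted after the change, and nova-compute as well, since the base URL is read on the compute node when the console URL is generated.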
From phaurep at gmail.com Mon Apr 20 11:32:22 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 13:32:22 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

I tried with cirros, actually, and it didn't work. I don't think it's a problem of resources: I'm on bare metal and I only have 4 VMs spawned.

2015-04-20 13:28 GMT+02:00 Mohammed Arafa :
> Pauline, it could be a number of issues, including too little CPU, RAM, or disk. Try with the cirros image first.
> Tokens in the service project do not expire, as far as I know.
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mohammed.arafa at gmail.com Mon Apr 20 11:28:38 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Mon, 20 Apr 2015 07:28:38 -0400
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

Pauline, it could be a number of issues, including too little CPU, RAM, or disk. Try with the cirros image first.

Tokens in the service project do not expire, as far as I know.

On Apr 20, 2015 4:53 AM, "pauline phaure" wrote:
> I think it's an authentication issue, as my platform was working fine on Friday. Maybe the tokens given to nova, neutron... have expired? How can I fix it if so? Please help.
> [...]
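A quick way to test the resource theory is to ask for the scheduler's view of the hosts (illustrative commands, run with admin credentials on a Juno cloud):

    # what the scheduler thinks is available across all hypervisors
    nova hypervisor-list
    nova hypervisor-stats

    # which scheduler filter eliminated the hosts for a given request
    grep -i filter /var/log/nova/nova-scheduler.log

If hypervisor-stats shows plenty of free vcpus, memory and disk, "no valid host" is more likely the messaging trouble visible in the rabbitmq logs than a capacity problem.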
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marius at remote-lab.net Mon Apr 20 11:56:35 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Mon, 20 Apr 2015 13:56:35 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

Hi Pauline,

I suspect the issues are caused by the rabbitmq connection. Check that it's running and can be reached on 192.168.2.34:5672 (the server is running, listening on that address/port, and no firewall rule is preventing connections).

On Mon, Apr 20, 2015 at 1:32 PM, pauline phaure wrote:
> I tried with cirros, actually, and it didn't work. I don't think it's a problem of resources: I'm on bare metal and I only have 4 VMs spawned.
> [...]
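Marius' checks translate into something like the following, first on the controller and then from the compute node (commands are illustrative; substitute your own rabbit address):

    # on the controller: is rabbitmq alive and listening?
    rabbitmqctl status
    netstat -nltp | grep 5672

    # from the compute node: is the port actually reachable?
    nc -zv 192.168.2.34 5672

    # is a firewall rule dropping AMQP traffic?
    iptables -nL | grep 5672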
From phaurep at gmail.com Mon Apr 20 12:08:45 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 14:08:45 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

[root at localhost ~]# netstat -nltp | grep 5672
tcp        0      0 0.0.0.0:25672      0.0.0.0:*      LISTEN      1927/beam.smp
tcp6       0      0 :::5672            :::*           LISTEN      1927/beam.smp

2015-04-20 13:56 GMT+02:00 Marius Cornea :
> Hi Pauline,
> I suspect the issues are caused by the rabbitmq connection. Check that it's running and can be reached on 192.168.2.34:5672 (the server is running, listening on that address/port, and no firewall rule is preventing connections).
> [...]
From phaurep at gmail.com  Mon Apr 20 12:10:46 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 14:10:46 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

[root at localhost ~]# rabbitmqctl status
Status of node rabbit at localhost ...
[{pid,18863},
 {running_applications,[{rabbit,"RabbitMQ","3.3.5"},
                        {os_mon,"CPO  CXC 138 46","2.2.14"},
                        {mnesia,"MNESIA  CXC 138 12","4.11"},
                        {xmerl,"XML parser","1.3.6"},
                        {sasl,"SASL  CXC 138 11","2.3.4"},
                        {stdlib,"ERTS  CXC 138 10","1.19.4"},
                        {kernel,"ERTS  CXC 138 10","2.16.4"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:16:16] [async-threads:30] [hipe] [kernel-poll:true]\n"},
 {memory,[{total,91191840},
          {connection_procs,2848136},
          {queue_procs,1327752},
          {plugins,0},
          {other_proc,13932712},
          {mnesia,268832},
          {mgmt_db,0},
          {msg_index,87144},
          {other_ets,948952},
          {binary,48239344},
          {code,16698259},
          {atom,602729},
          {other_system,6237980}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,4967161856},
 {disk_free_limit,50000000},
 {disk_free,49428885504},
 {file_descriptors,[{total_limit,924},
                    {total_used,80},
                    {sockets_limit,829},
                    {sockets_used,78}]},
 {processes,[{limit,1048576},{used,921}]},
 {run_queue,0},
 {uptime,124}]
...done.
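One detail worth flagging in that output: {uptime,124} means the broker had
been up for only about two minutes when the command ran, which is consistent
with the earlier CONNECTION_FORCED / "broker forced connection closure with
reason 'shutdown'" errors. A hedged way to confirm a recent restart (the log
path is the one used earlier in the thread; exact message wording varies by
RabbitMQ version):

  # last (re)start time according to systemd
  systemctl status rabbitmq-server | grep Active
  # startup banners recorded in the broker log
  grep -i 'starting rabbitmq' /var/log/rabbitmq/rabbit\@localhost.log | tail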
From phaurep at gmail.com  Mon Apr 20 12:15:58 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 14:15:58 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

is it normal that the port for tcp4 is 25672?

2015-04-20 14:10 GMT+02:00 pauline phaure :

> {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
From marius at remote-lab.net  Mon Apr 20 12:26:01 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Mon, 20 Apr 2015 14:26:01 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

That's the port used by the rabbit nodes for communicating in a cluster.
You should check the docs here[1] for further digging into clustering:

[1] https://www.rabbitmq.com/clustering.html

On Mon, Apr 20, 2015 at 2:15 PM, pauline phaure wrote:
> is it normal that the port for tcp4 is 25672?
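For reference, both listeners show up in the rabbitmqctl status output quoted
earlier; a quick way to see them side by side on the same single-node setup:

  # 5672 is the AMQP listener the OpenStack services use;
  # 25672 is the Erlang distribution listener used for clustering
  rabbitmqctl status | grep listeners
  # expected: {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},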
From phaurep at gmail.com  Mon Apr 20 12:33:53 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 14:33:53 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To:
References:
Message-ID:

ok, then everything is fine with my rabbitmq

[root at localhost ~]# iptables -S |grep 5672
-A INPUT -s 192.168.2.34/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.168.2.34" -j ACCEPT
-A INPUT -s 192.168.2.35/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.168.2.35" -j ACCEPT

do you have any other idea about what may be happening with my installation?

2015-04-20 14:26 GMT+02:00 Marius Cornea :

> That's the port used by the rabbit nodes for communicating in a cluster.
> You should check the docs here[1] for further digging into clustering:
>
> [1] https://www.rabbitmq.com/clustering.html
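A hedged next step, given the ECONNREFUSED and 'shutdown' errors earlier in
the thread: after a broker restart each OpenStack service has to re-establish
its AMQP connection, so restarting the nova services and rechecking overall
state is a cheap test. Both helpers below ship in the RDO openstack-utils
package:

  # one-page summary of OpenStack service health on this node
  openstack-status
  # restart every nova service so it reconnects to the broker
  openstack-service restart nova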
rdo-list-unsubscribe at redhat.com > >>>> > > >>>> > > >>>> > > >>>> > _______________________________________________ > >>>> > Rdo-list mailing list > >>>> > Rdo-list at redhat.com > >>>> > https://www.redhat.com/mailman/listinfo/rdo-list > >>>> > > >>>> > To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > >>> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Mon Apr 20 12:41:42 2015 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 20 Apr 2015 14:41:42 +0200 Subject: [Rdo-list] no valid host was found: nova problem In-Reply-To: References: Message-ID: Maybe it's worth checking that you accept connections on the 25672 port also. On Mon, Apr 20, 2015 at 2:33 PM, pauline phaure wrote: > ok, then evrything is fine with my rabbitmq > > [root at localhost ~]# iptables -S |grep 5672 > -A INPUT -s 192.168.2.34/32 -p tcp -m multiport --dports 5671,5672 -m > comment --comment "001 amqp incoming amqp_192.168.2.34" -j ACCEPT > -A INPUT -s 192.168.2.35/32 -p tcp -m multiport --dports 5671,5672 -m > comment --comment "001 amqp incoming amqp_192.168.2.35" -j ACCEPT > > > do you have any other idea about what it may be happening with my > installation? > > 2015-04-20 14:26 GMT+02:00 Marius Cornea : >> >> That's the port used by rabbit nodes for communicating in a cluster. >> You should check the docs here[1] for further digging into clustering: >> >> https://www.rabbitmq.com/clustering.html >> >> On Mon, Apr 20, 2015 at 2:15 PM, pauline phaure wrote: >> > is it normal the the port for tcp4 is 25672??? >> > >> > 2015-04-20 14:10 GMT+02:00 pauline phaure : >> >> >> >> [root at localhost ~]# rabbitmqctl status >> >> Status of node rabbit at localhost ... >> >> [{pid,18863}, >> >> {running_applications,[{rabbit,"RabbitMQ","3.3.5"}, >> >> {os_mon,"CPO CXC 138 46","2.2.14"}, >> >> {mnesia,"MNESIA CXC 138 12","4.11"}, >> >> {xmerl,"XML parser","1.3.6"}, >> >> {sasl,"SASL CXC 138 11","2.3.4"}, >> >> {stdlib,"ERTS CXC 138 10","1.19.4"}, >> >> {kernel,"ERTS CXC 138 10","2.16.4"}]}, >> >> {os,{unix,linux}}, >> >> {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] >> >> [smp:16:16] [async-threads:30] [hipe] [kernel-poll:true]\n"}, >> >> {memory,[{total,91191840}, >> >> {connection_procs,2848136}, >> >> {queue_procs,1327752}, >> >> {plugins,0}, >> >> {other_proc,13932712}, >> >> {mnesia,268832}, >> >> {mgmt_db,0}, >> >> {msg_index,87144}, >> >> {other_ets,948952}, >> >> {binary,48239344}, >> >> {code,16698259}, >> >> {atom,602729}, >> >> {other_system,6237980}]}, >> >> {alarms,[]}, >> >> {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]}, >> >> {vm_memory_high_watermark,0.4}, >> >> {vm_memory_limit,4967161856}, >> >> {disk_free_limit,50000000}, >> >> {disk_free,49428885504}, >> >> {file_descriptors,[{total_limit,924}, >> >> {total_used,80}, >> >> {sockets_limit,829}, >> >> {sockets_used,78}]}, >> >> {processes,[{limit,1048576},{used,921}]}, >> >> {run_queue,0}, >> >> {uptime,124}] >> >> ...done. >> >> >> >> >> >> 2015-04-20 14:08 GMT+02:00 pauline phaure : >> >>> >> >>> >> >>> >> >>> [root at localhost ~]# netstat -nltp |grep 5672 >> >>> tcp 0 0 0.0.0.0:25672 0.0.0.0:* >> >>> LISTEN 1927/beam.smp >> >>> tcp6 0 0 :::5672 :::* >> >>> LISTEN 1927/beam.smp >> >>> >> >>> >> >>> 2015-04-20 13:56 GMT+02:00 Marius Cornea : >> >>>> >> >>>> Hi Pauline, >> >>>> >> >>>> I suspect the issues are caused by the rabbitmq connection. 
Check >> >>>> that >> >>>> it's running and can be reached on 192.168.2.34:5672 (server is >> >>>> running, listening on that address/port, no firewall rule preventing >> >>>> connections). >> >>>> >> >>>> On Mon, Apr 20, 2015 at 1:32 PM, pauline phaure >> >>>> wrote: >> >>>> > i tried with cirros actually and it didn't work. I don't think it's >> >>>> > a >> >>>> > problem of ressources. I'm on baremetal and I only have 4 VMs >> >>>> > spawned >> >>>> > >> >>>> > 2015-04-20 13:28 GMT+02:00 Mohammed Arafa >> >>>> > : >> >>>> >> >> >>>> >> Pauline it could be a number of issues including too little CPU or >> >>>> >> ram or >> >>>> >> disk. Try with the cirros image first >> >>>> >> >> >>>> >> Tokens in the service project do not expire as far as I know >> >>>> >> >> >>>> >> On Apr 20, 2015 4:53 AM, "pauline phaure" >> >>>> >> wrote: >> >>>> >>> >> >>>> >>> I think it's because of an authentication issue as my plateform >> >>>> >>> was >> >>>> >>> working fine friday. May be the tokens given to nova, neutron... >> >>>> >>> has >> >>>> >>> expired?? how can I fix it if so. plz help >> >>>> >>> >> >>>> >>> 2015-04-20 10:23 GMT+02:00 pauline phaure : >> >>>> >>>> >> >>>> >>>> grep ERROR nova-scheduler.log >> >>>> >>>> >> >>>> >>>> 2015-04-17 15:41:25.119 1976 ERROR >> >>>> >>>> oslo.messaging._drivers.impl_rabbit >> >>>> >>>> [-] Failed to consume message from queue: [Errno 110] Connection >> >>>> >>>> timed out >> >>>> >>>> 2015-04-17 16:34:37.163 1959 ERROR >> >>>> >>>> oslo.messaging._drivers.impl_rabbit >> >>>> >>>> [-] Failed to consume message from queue: [Errno 110] Connection >> >>>> >>>> timed out >> >>>> >>>> 2015-04-20 09:40:21.192 9683 ERROR >> >>>> >>>> oslo.messaging._drivers.impl_rabbit >> >>>> >>>> [-] Failed to consume message from queue: (0, 0): (320) >> >>>> >>>> CONNECTION_FORCED - >> >>>> >>>> broker forced connection closure with reason 'shutdown' >> >>>> >>>> 2015-04-20 09:40:22.217 9683 ERROR >> >>>> >>>> oslo.messaging._drivers.impl_rabbit >> >>>> >>>> [-] AMQP server on 192.168.2.34:5672 is unreachable: [Errno 111] >> >>>> >>>> ECONNREFUSED. Trying again in 1 seconds. 
>> >>>> >>>> >> >>>> >>>> >> >>>> >>>> >> >>>> >>>> grep ERROR nova-api.log >> >>>> >>>> >> >>>> >>>> 2015-04-19 02:23:52.043 4750 ERROR nova.api.openstack >> >>>> >>>> [req-6cdb1bcf-18e3-4366-bc33-8afad6a0be9e None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 04:33:52.051 4745 ERROR nova.api.openstack >> >>>> >>>> [req-a682ff4c-86ee-4d7e-97fa-df0741c4c5ef None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 05:43:52.048 4749 ERROR nova.api.openstack >> >>>> >>>> [req-afec3fef-95a9-4b62-b531-a8c686648dc7 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 07:53:52.060 4743 ERROR nova.api.openstack >> >>>> >>>> [req-55074cb4-4dd9-442a-9e77-a6db54c41124 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 11:03:52.072 4733 ERROR nova.api.openstack >> >>>> >>>> [req-e7ec8959-17b7-4c20-b81f-eb4457e8d23e None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 12:13:52.046 4747 ERROR nova.api.openstack >> >>>> >>>> [req-02fcdd32-18da-4e57-90fa-a269ac6446bb None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 15:23:52.078 4741 ERROR nova.api.openstack >> >>>> >>>> [req-581682f3-4677-49ba-b6cf-4e7cd253b47f None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 17:33:52.102 4750 ERROR nova.api.openstack >> >>>> >>>> [req-2f22a04c-5d84-4279-97e7-48ad99d99f17 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 20:43:52.252 4741 ERROR nova.api.openstack >> >>>> >>>> [req-48dc8185-d826-48eb-a629-e442d7e56181 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-19 21:53:52.107 4745 ERROR nova.api.openstack >> >>>> >>>> [req-c3b66cf9-eb12-4ddc-b929-c108cbce8575 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-20 01:03:52.125 4737 ERROR nova.api.openstack >> >>>> >>>> [req-31034151-4a94-4b5a-a925-e975912ec93a None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack >> >>>> >>>> [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack >> >>>> >>>> [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack >> >>>> >>>> [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> 2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack >> >>>> >>>> [req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: >> >>>> >>>> auth_url was >> >>>> >>>> not provided to the Neutron client >> >>>> >>>> >> >>>> >>>> >> >>>> >>>> >> >>>> >>>> 2015-04-20 10:15 GMT+02:00 pauline phaure : >> >>>> >>>>> >> >>>> >>>>> hello guys , >> >>>> >>>>> When I try to launch an instance, have this error, no valmid >> >>>> >>>>> 
host >> >>>> >>>>> was >> >>>> >>>>> found. I found these errors in the file logs of nova and >> >>>> >>>>> rabbitmq >> >>>> >>>>> but don't >> >>>> >>>>> know how to proceed: >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> grep error nova-scheduler.log >> >>>> >>>>> 2015-04-17 15:41:25.119 1976 TRACE >> >>>> >>>>> oslo.messaging._drivers.impl_rabbit >> >>>> >>>>> error: [Errno 110] Connection timed out >> >>>> >>>>> 2015-04-17 16:34:37.163 1959 TRACE >> >>>> >>>>> oslo.messaging._drivers.impl_rabbit >> >>>> >>>>> error: [Errno 110] Connection timed out >> >>>> >>>>> >> >>>> >>>>> grep error nova-api.log >> >>>> >>>>> >> >>>> >>>>> 2015-04-20 01:13:52.005 4748 WARNING >> >>>> >>>>> keystonemiddleware.auth_token >> >>>> >>>>> [-] >> >>>> >>>>> Identity response: {"error": {"message": "Could not find token: >> >>>> >>>>> 868d85f2de3d49878af5b6f79d80e8de", "code": 404, "title": "Not >> >>>> >>>>> Found"}} >> >>>> >>>>> 2015-04-20 02:13:52.022 4741 WARNING >> >>>> >>>>> keystonemiddleware.auth_token >> >>>> >>>>> [-] >> >>>> >>>>> Identity response: {"error": {"message": "Could not find token: >> >>>> >>>>> b309783b561b44ef904b3c4a6ab474bd", "code": 404, "title": "Not >> >>>> >>>>> Found"}} >> >>>> >>>>> 2015-04-20 03:13:52.004 4737 WARNING >> >>>> >>>>> keystonemiddleware.auth_token >> >>>> >>>>> [-] >> >>>> >>>>> Identity response: {"error": {"message": "Could not find token: >> >>>> >>>>> 24eed0bb37b54445bd753359b234a0c4", "code": 404, "title": "Not >> >>>> >>>>> Found"}} >> >>>> >>>>> 2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack >> >>>> >>>>> [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: >> >>>> >>>>> auth_url was >> >>>> >>>>> not provided to the Neutron client >> >>>> >>>>> 2015-04-20 04:23:52.044 4742 WARNING >> >>>> >>>>> keystonemiddleware.auth_token >> >>>> >>>>> [-] >> >>>> >>>>> Identity response: {"error": {"message": "Could not find token: >> >>>> >>>>> 6d3e77f2cf75495297c34133bc765bd8", "code": 404, "title": "Not >> >>>> >>>>> Found"}} >> >>>> >>>>> 2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack >> >>>> >>>>> [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: >> >>>> >>>>> auth_url was >> >>>> >>>>> not provided to the Neutron client >> >>>> >>>>> 2015-04-20 05:33:52.024 4731 WARNING >> >>>> >>>>> keystonemiddleware.auth_token >> >>>> >>>>> [-] >> >>>> >>>>> Identity response: {"error": {"message": "Could not find token: >> >>>> >>>>> ccaee1e396614df5a753a331345e3e24", "code": 404, "title": "Not >> >>>> >>>>> Found"}} >> >>>> >>>>> 2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack >> >>>> >>>>> [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: >> >>>> >>>>> auth_url was >> >>>> >>>>> not provided to the Neutron client >> >>>> >>>>> 2015-04-20 06:43:51.887 4737 WARNING >> >>>> >>>>> keystonemiddleware.auth_token >> >>>> >>>>> [-] >> >>>> >>>>> Identity response: {"error": {"message": "Could not find token: >> >>>> >>>>> 6ad03759c8f446d09d4babde1aa7f63d", "code": 404, "title": "Not >> >>>> >>>>> Found"}} >> >>>> >>>>> 2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack >> >>>> >>>>> [req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: >> >>>> >>>>> auth_url was >> >>>> >>>>> not provided to the Neutron client >> >>>> >>>>> 2015-04-20 07:53:52.037 4742 WARNING >> >>>> >>>>> keystonemiddleware.auth_token >> >>>> >>>>> [-] >> >>>> >>>>> Identity response: {"error": {"message": "Could not find token: >> >>>> >>>>> 0bcaad841985487bbfe4bce038b49c9e", "code": 404, "title": "Not >> >>>> >>>>> Found"}} >> >>>> 
>>>>> >> >>>> >>>>> >> >>>> >>>>> grep error /var/log/rabbitmq/rabbit\@localhost.log >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> AMQP connection <0.865.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1312.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.499.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1657.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9349.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.761.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.2801.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.4447.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1065.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.712.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.618.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.664.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9428.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1222.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9530.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9496.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.6746.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9479.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9280.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9295.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1203.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1420.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.9513.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1048.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.741.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1734.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1390.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1400.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> AMQP connection <0.1430.0> (running), channel 0 - error: >> >>>> >>>>> {amqp_error,connection_forced, >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> 
>> >>>> >>>>
>> >>>> >>>
>> >>>> >>>
>> >>>> >>> _______________________________________________
>> >>>> >>> Rdo-list mailing list
>> >>>> >>> Rdo-list at redhat.com
>> >>>> >>> https://www.redhat.com/mailman/listinfo/rdo-list
>> >>>> >>>
>> >>>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > _______________________________________________
>> >>>> > Rdo-list mailing list
>> >>>> > Rdo-list at redhat.com
>> >>>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >>>> >
>> >>>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>> >>>
>> >>>
>> >>
>> >
>
>

From phaurep at gmail.com Mon Apr 20 12:57:49 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 20 Apr 2015 14:57:49 +0200
Subject: [Rdo-list] no valid host was found: nova problem
In-Reply-To: 
References: 
Message-ID: 

I deleted one of my old instances and spawned another one, and everything
went fine. My problem is that I'm using 2 baremetal servers, each one has
1500 GB, and when I run df -h I only see this:

[root at localhost ~(keystone_admin)]# df -h
Sys. de fichiers        Taille Utilisé Dispo Uti% Monté sur
/dev/mapper/centos-root    50G    4,0G   47G   8% /
devtmpfs                  5,8G       0  5,8G   0% /dev
tmpfs                     5,8G    4,0K  5,8G   1% /dev/shm
tmpfs                     5,8G     17M  5,8G   1% /run
tmpfs                     5,8G       0  5,8G   0% /sys/fs/cgroup
/dev/loop0                1,9G    6,1M  1,7G   1% /srv/node/swiftloopback
/dev/sda1                 497M    171M  327M  35% /boot
/dev/mapper/centos-home   1,8T     33M  1,8T   1% /home
tmpfs                     5,8G     17M  5,8G   1% /run/netns


2015-04-20 13:28 GMT+02:00 Mohammed Arafa :

> Pauline, it could be a number of issues including too little CPU or ram or
> disk. Try with the cirros image first
>
> Tokens in the service project do not expire as far as I know
> On Apr 20, 2015 4:53 AM, "pauline phaure" wrote:
>
>> I think it's because of an authentication issue as my platform was
>> working fine Friday. Maybe the tokens given to nova, neutron... have
>> expired? How can I fix it if so? Please help.
>>
>> 2015-04-20 10:23 GMT+02:00 pauline phaure :
>>
>>> grep ERROR nova-scheduler.log
>>>
>>> 2015-04-17 15:41:25.119 1976 ERROR oslo.messaging._drivers.impl_rabbit
>>> [-] Failed to consume message from queue: [Errno 110] Connection timed out
>>> 2015-04-17 16:34:37.163 1959 ERROR oslo.messaging._drivers.impl_rabbit
>>> [-] Failed to consume message from queue: [Errno 110] Connection timed out
>>> 2015-04-20 09:40:21.192 9683 ERROR oslo.messaging._drivers.impl_rabbit
>>> [-] Failed to consume message from queue: (0, 0): (320) CONNECTION_FORCED -
>>> broker forced connection closure with reason 'shutdown'
>>> 2015-04-20 09:40:22.217 9683 ERROR oslo.messaging._drivers.impl_rabbit
>>> [-] AMQP server on 192.168.2.34:5672 is unreachable: [Errno 111]
>>> ECONNREFUSED. Trying again in 1 seconds.
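Given that df output (1.8 TB sitting almost unused in /dev/mapper/centos-home while / has only 50 GB), one way to hand the space to the OpenStack services is to fold the home LV into root. A destructive sketch, assuming /home really holds nothing you need (df shows 33M used) and is backed up first:

umount /home
lvremove /dev/centos/home               # destroys the (near-empty) home LV
lvextend -l +100%FREE /dev/centos/root  # give the freed extents to root
xfs_growfs /                            # root is typically XFS on a default CentOS 7 install
# finally, remove the /home entry from /etc/fstab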
>>> >>> >>> >>> grep ERROR nova-api.log >>> >>> 2015-04-19 02:23:52.043 4750 ERROR nova.api.openstack >>> [req-6cdb1bcf-18e3-4366-bc33-8afad6a0be9e None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 04:33:52.051 4745 ERROR nova.api.openstack >>> [req-a682ff4c-86ee-4d7e-97fa-df0741c4c5ef None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 05:43:52.048 4749 ERROR nova.api.openstack >>> [req-afec3fef-95a9-4b62-b531-a8c686648dc7 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 07:53:52.060 4743 ERROR nova.api.openstack >>> [req-55074cb4-4dd9-442a-9e77-a6db54c41124 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 11:03:52.072 4733 ERROR nova.api.openstack >>> [req-e7ec8959-17b7-4c20-b81f-eb4457e8d23e None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 12:13:52.046 4747 ERROR nova.api.openstack >>> [req-02fcdd32-18da-4e57-90fa-a269ac6446bb None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 15:23:52.078 4741 ERROR nova.api.openstack >>> [req-581682f3-4677-49ba-b6cf-4e7cd253b47f None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 17:33:52.102 4750 ERROR nova.api.openstack >>> [req-2f22a04c-5d84-4279-97e7-48ad99d99f17 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 20:43:52.252 4741 ERROR nova.api.openstack >>> [req-48dc8185-d826-48eb-a629-e442d7e56181 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-19 21:53:52.107 4745 ERROR nova.api.openstack >>> [req-c3b66cf9-eb12-4ddc-b929-c108cbce8575 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-20 01:03:52.125 4737 ERROR nova.api.openstack >>> [req-31034151-4a94-4b5a-a925-e975912ec93a None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack >>> [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack >>> [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack >>> [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: auth_url was >>> not provided to the Neutron client >>> 2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack >>> [req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: auth_url was >>> not provided to the Neutron client >>> >>> >>> >>> 2015-04-20 10:15 GMT+02:00 pauline phaure : >>> >>>> hello guys , >>>> When I try to launch an instance, have this error, no valmid host was >>>> found. 
I found these errors in the file logs of nova and rabbitmq but don't >>>> know how to proceed: >>>> >>>> >>>> >>>> *grep error nova-scheduler.log* >>>> 2015-04-17 15:41:25.119 1976 TRACE oslo.messaging._drivers.impl_rabbit >>>> error: [Errno 110] Connection timed out >>>> 2015-04-17 16:34:37.163 1959 TRACE oslo.messaging._drivers.impl_rabbit >>>> error: [Errno 110] Connection timed out >>>> >>>> >>>> *grep error nova-api.log* >>>> 2015-04-20 01:13:52.005 4748 WARNING keystonemiddleware.auth_token [-] >>>> Identity response: {"error": {"message": "Could not find token: >>>> 868d85f2de3d49878af5b6f79d80e8de", "code": 404, "title": "Not Found"}} >>>> 2015-04-20 02:13:52.022 4741 WARNING keystonemiddleware.auth_token [-] >>>> Identity response: {"error": {"message": "Could not find token: >>>> b309783b561b44ef904b3c4a6ab474bd", "code": 404, "title": "Not Found"}} >>>> 2015-04-20 03:13:52.004 4737 WARNING keystonemiddleware.auth_token [-] >>>> Identity response: {"error": {"message": "Could not find token: >>>> 24eed0bb37b54445bd753359b234a0c4", "code": 404, "title": "Not Found"}} >>>> 2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack >>>> [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 04:23:52.044 4742 WARNING keystonemiddleware.auth_token [-] >>>> Identity response: {"error": {"message": "Could not find token: >>>> 6d3e77f2cf75495297c34133bc765bd8", "code": 404, "title": "Not Found"}} >>>> 2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack >>>> [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 05:33:52.024 4731 WARNING keystonemiddleware.auth_token [-] >>>> Identity response: {"error": {"message": "Could not find token: >>>> ccaee1e396614df5a753a331345e3e24", "code": 404, "title": "Not Found"}} >>>> 2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack >>>> [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 06:43:51.887 4737 WARNING keystonemiddleware.auth_token [-] >>>> Identity response: {"error": {"message": "Could not find token: >>>> 6ad03759c8f446d09d4babde1aa7f63d", "code": 404, "title": "Not Found"}} >>>> 2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack >>>> [req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 07:53:52.037 4742 WARNING keystonemiddleware.auth_token [-] >>>> Identity response: {"error": {"message": "Could not find token: >>>> 0bcaad841985487bbfe4bce038b49c9e", "code": 404, "title": "Not Found"}} >>>> >>>> >>>> *grep error /var/log/rabbitmq/rabbit\@localhost.log* >>>> >>>> >>>> AMQP connection <0.865.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1312.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.499.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1657.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9349.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.761.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.2801.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.4447.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, 
>>>> AMQP connection <0.1065.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.712.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.618.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.664.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9428.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1222.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9530.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9496.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.6746.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9479.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9280.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9295.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1203.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1420.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.9513.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1048.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.741.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1734.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1390.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1400.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> AMQP connection <0.1430.0> (running), channel 0 - error: >>>> {amqp_error,connection_forced, >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Mon Apr 20 13:01:00 2015 From: dprince at redhat.com (Dan Prince) Date: Mon, 20 Apr 2015 09:01:00 -0400 Subject: [Rdo-list] Glance packaging and RDO Kilo In-Reply-To: References: Message-ID: <1429534860.23745.19.camel@dovetail.localdomain> On Mon, 2015-04-20 at 12:12 +0200, Alan Pevec wrote: > Moving discussion to rdo-list > > > I have spent some time during the weekend thinking about the options here. > > Looking at the requirements from all parties, I see the following: > > a) From the packaging side, we want to split Glance into several packages (glance,-common,-api and -registry). > > b) From the deployment side, we want the glance package to behave as it did before, i.e. pull -api and -registry. > > c) From the puppet-glance side, if we have separate -api and -registry packages, we want to reflect that change in the Puppet modules and be able to configure -api and -registry independently. Also, this package split is already happening in Debian/Ubuntu, so removing distro-specific code is always welcome. 
> >
> > With that in mind, I think the following options are the easiest ones to implement:
> >
> > 1- Split packages, with the following deps:
> >
> > * -api and -registry depend on -common
> > * glance depends on -api and -registry
> >
> > This would require moving the existing content in glance
> > (/usr/bin/glance-manage and /usr/bin/glance-control) into -common, so
> > glance becomes a meta-package. With this, we would get b) and c), and
> > most of a). The only drawback is that glance-manage and glance-control
> > may not be a good fit for the -common package (Haikel, can you comment
> > on this?). FWIW, this is how it is being packaged today in Debian.
> >
> > 2- Keep the old situation (no Glance package split)
> >
> > This obviously negates a), and keeps distro-specific code in c), but
> > still works and does not break any existing code.
> >
> > Any thoughts?
> >
> > Regards,
> > Javier
>
> Thanks for the summary Javier, 1) is the right thing to do. For the
> record, the history of this change was:
> * https://review.gerrithub.io/229724 - Split openstack-glance into new
> subpackages
> * https://review.gerrithub.io/229980 - Backward compatibility with
> previous all-in-one main package (a quickfix after I'd seen Packstack
> failures; in retrospect that was where I should've introduced -common
> with glance-manage, as you propose)
> ** in the meantime, puppet-glance was adjusted to take advantage of
> the subpackages: https://review.openstack.org/172440 - Separate api
> and registry packages for Red Hat
> * https://review.gerrithub.io/230356 - Revert dependencies between
> services and the main packages (a followup, because after the
> puppet-glance change glance-manage was not getting installed)
> * https://review.gerrithub.io/230453 - Revert "Split openstack-glance
> into new subpackage" (merged to unblock tripleo)
> ** https://review.openstack.org/174872 - Revert "Separate api and
> registry packages for Red Hat" in puppet-glance
>
> So the plan is to re-propose "Split openstack-glance into new
> subpackages" and merge it only after it's verified by all interested
> teams, and then re-propose "Separate api and registry packages for Red
> Hat" in puppet-glance.

If we do the package split correctly there should be no rush to go and
update puppet-glance or any of the configuration tools out there. I would
actually suggest waiting a bit to modify puppet-glance, to prove that the
new packaging split works with the old puppet-glance implementation
(perhaps filing a bug on this so we don't forget when to do it). If we do
it this way then someone who wants to selectively use an older Glance
package could still do so with the upstream Glance modules.

From an end user perspective I don't think there is a lot of value in
this split. I mean, don't get me wrong, I wish we had split out
glance-registry 4 years ago. I just don't see why there is all of a
sudden a rush to split it now, especially considering that
glance-registry would eventually get deprecated anyway.

Regardless, so long as we do the split in such a manner that it doesn't
functionally break users who upgrade (this is the most important thing) I
think it is probably okay. We should just make sure that during this
split we test all the cases where this package may actually get used. I
would like to see the split package working with the (old) puppet-glance
modules and the existing TripleO image elements as well. If it works in
both of those locations it is probably fine. This may involve manual
testing time as we don't yet have RDO trunk CI on all these packaging
changes.
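For reference, the dependency layout that option 1) implies would look roughly like this in the spec file. This is a sketch of the relevant Requires lines only, using the openstack-glance-* names from the thread, not the actual packaging:

%package common
Requires: python-glance
# glance-manage and glance-control would move here

%package api
Requires: openstack-glance-common = %{version}-%{release}

%package registry
Requires: openstack-glance-common = %{version}-%{release}

# the main openstack-glance package becomes a meta-package:
Requires: openstack-glance-api = %{version}-%{release}
Requires: openstack-glance-registry = %{version}-%{release}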
Dan > > > Cheers, > Alan From rcritten at redhat.com Mon Apr 20 13:43:52 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 20 Apr 2015 09:43:52 -0400 Subject: [Rdo-list] security q: horizon and ssl In-Reply-To: References: <55349934.8030903@redhat.com> Message-ID: <55350298.6070104@redhat.com> Mohammed Arafa wrote: > nope didnt work > > "connection reset" on both firefox and chrome. > > konqueror gave me this: > The requested operation could not be completed > > Connection to Server Refused The novnc proxy needs to be started in SSL mode, so provided with a cert and key (and IIRC there is an option to enable SSL). rob From mohammed.arafa at gmail.com Mon Apr 20 13:59:14 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 20 Apr 2015 09:59:14 -0400 Subject: [Rdo-list] security q: horizon and ssl In-Reply-To: <55350298.6070104@redhat.com> References: <55349934.8030903@redhat.com> <55350298.6070104@redhat.com> Message-ID: this is in packstack answer file. there is no mention of vnc CONFIG_HORIZON_SSL=y # PEM encoded certificate to be used for ssl on the https server, # leave blank if one should be generated, this certificate should not # require a passphrase CONFIG_SSL_CERT= # SSL keyfile corresponding to the certificate if one was entered CONFIG_SSL_KEY= # PEM encoded CA certificates from which the certificate chain of the # server certificate can be assembled. CONFIG_SSL_CACHAIN= On Mon, Apr 20, 2015 at 9:43 AM, Rob Crittenden wrote: > Mohammed Arafa wrote: > > nope didnt work > > > > "connection reset" on both firefox and chrome. > > > > konqueror gave me this: > > The requested operation could not be completed > > > > Connection to Server Refused > > The novnc proxy needs to be started in SSL mode, so provided with a cert > and key (and IIRC there is an option to enable SSL). > > rob > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Mon Apr 20 14:10:33 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 20 Apr 2015 10:10:33 -0400 Subject: [Rdo-list] no valid host was found: nova problem In-Reply-To: References: Message-ID: yes all storage services use root partition i strongly recommend getting rid of your centos6 /home partition and lumping it together with root note /var/logs too can take up lots of space if u left it in DEBUG mode On Mon, Apr 20, 2015 at 8:57 AM, pauline phaure wrote: > I deleted one of my old instances and swpaned and other one and everything > went fine. My problem is that i'm using 2 baremetal servers each one have > 1500GB and when I run df-h I only see this . > > [root at localhost ~(keystone_admin)]# df -h > Sys. de fichiers Taille Utilis? Dispo Uti% Mont? sur > /dev/mapper/centos-root 50G 4,0G 47G 8% / > devtmpfs 5,8G 0 5,8G 0% /dev > tmpfs 5,8G 4,0K 5,8G 1% /dev/shm > tmpfs 5,8G 17M 5,8G 1% /run > tmpfs 5,8G 0 5,8G 0% /sys/fs/cgroup > /dev/loop0 1,9G 6,1M 1,7G 1% /srv/node/swiftloopback > /dev/sda1 497M 171M 327M 35% /boot > /dev/mapper/centos-home 1,8T 33M 1,8T 1% /home > tmpfs 5,8G 17M 5,8G 1% /run/netns > > > 2015-04-20 13:28 GMT+02:00 Mohammed Arafa : > >> Pauline it could be a number of issues including too little CPU or ram or >> disk. Try with the cirros image first >> >> Tokens in the service project do not expire as far as I know >> On Apr 20, 2015 4:53 AM, "pauline phaure" wrote: >> >>> I think it's because of an authentication issue as my plateform was >>> working fine friday. 
May be the tokens given to nova, neutron... has >>> expired?? how can I fix it if so. plz help >>> >>> 2015-04-20 10:23 GMT+02:00 pauline phaure : >>> >>>> grep ERROR nova-scheduler.log >>>> >>>> 2015-04-17 15:41:25.119 1976 ERROR oslo.messaging._drivers.impl_rabbit >>>> [-] Failed to consume message from queue: [Errno 110] Connection timed out >>>> 2015-04-17 16:34:37.163 1959 ERROR oslo.messaging._drivers.impl_rabbit >>>> [-] Failed to consume message from queue: [Errno 110] Connection timed out >>>> 2015-04-20 09:40:21.192 9683 ERROR oslo.messaging._drivers.impl_rabbit >>>> [-] Failed to consume message from queue: (0, 0): (320) CONNECTION_FORCED - >>>> broker forced connection closure with reason 'shutdown' >>>> 2015-04-20 09:40:22.217 9683 ERROR oslo.messaging._drivers.impl_rabbit >>>> [-] AMQP server on 192.168.2.34:5672 is unreachable: [Errno 111] >>>> ECONNREFUSED. Trying again in 1 seconds. >>>> >>>> >>>> >>>> grep ERROR nova-api.log >>>> >>>> 2015-04-19 02:23:52.043 4750 ERROR nova.api.openstack >>>> [req-6cdb1bcf-18e3-4366-bc33-8afad6a0be9e None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 04:33:52.051 4745 ERROR nova.api.openstack >>>> [req-a682ff4c-86ee-4d7e-97fa-df0741c4c5ef None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 05:43:52.048 4749 ERROR nova.api.openstack >>>> [req-afec3fef-95a9-4b62-b531-a8c686648dc7 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 07:53:52.060 4743 ERROR nova.api.openstack >>>> [req-55074cb4-4dd9-442a-9e77-a6db54c41124 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 11:03:52.072 4733 ERROR nova.api.openstack >>>> [req-e7ec8959-17b7-4c20-b81f-eb4457e8d23e None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 12:13:52.046 4747 ERROR nova.api.openstack >>>> [req-02fcdd32-18da-4e57-90fa-a269ac6446bb None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 15:23:52.078 4741 ERROR nova.api.openstack >>>> [req-581682f3-4677-49ba-b6cf-4e7cd253b47f None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 17:33:52.102 4750 ERROR nova.api.openstack >>>> [req-2f22a04c-5d84-4279-97e7-48ad99d99f17 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 20:43:52.252 4741 ERROR nova.api.openstack >>>> [req-48dc8185-d826-48eb-a629-e442d7e56181 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-19 21:53:52.107 4745 ERROR nova.api.openstack >>>> [req-c3b66cf9-eb12-4ddc-b929-c108cbce8575 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 01:03:52.125 4737 ERROR nova.api.openstack >>>> [req-31034151-4a94-4b5a-a925-e975912ec93a None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack >>>> [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack >>>> [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack >>>> [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> 2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack >>>> 
[req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: auth_url was >>>> not provided to the Neutron client >>>> >>>> >>>> >>>> 2015-04-20 10:15 GMT+02:00 pauline phaure : >>>> >>>>> hello guys , >>>>> When I try to launch an instance, have this error, no valmid host was >>>>> found. I found these errors in the file logs of nova and rabbitmq but don't >>>>> know how to proceed: >>>>> >>>>> >>>>> >>>>> *grep error nova-scheduler.log* >>>>> 2015-04-17 15:41:25.119 1976 TRACE oslo.messaging._drivers.impl_rabbit >>>>> error: [Errno 110] Connection timed out >>>>> 2015-04-17 16:34:37.163 1959 TRACE oslo.messaging._drivers.impl_rabbit >>>>> error: [Errno 110] Connection timed out >>>>> >>>>> >>>>> *grep error nova-api.log* >>>>> 2015-04-20 01:13:52.005 4748 WARNING keystonemiddleware.auth_token [-] >>>>> Identity response: {"error": {"message": "Could not find token: >>>>> 868d85f2de3d49878af5b6f79d80e8de", "code": 404, "title": "Not Found"}} >>>>> 2015-04-20 02:13:52.022 4741 WARNING keystonemiddleware.auth_token [-] >>>>> Identity response: {"error": {"message": "Could not find token: >>>>> b309783b561b44ef904b3c4a6ab474bd", "code": 404, "title": "Not Found"}} >>>>> 2015-04-20 03:13:52.004 4737 WARNING keystonemiddleware.auth_token [-] >>>>> Identity response: {"error": {"message": "Could not find token: >>>>> 24eed0bb37b54445bd753359b234a0c4", "code": 404, "title": "Not Found"}} >>>>> 2015-04-20 04:13:52.033 4741 ERROR nova.api.openstack >>>>> [req-b521803c-e80c-4e86-a30b-16e5c57da918 None] Caught error: auth_url was >>>>> not provided to the Neutron client >>>>> 2015-04-20 04:23:52.044 4742 WARNING keystonemiddleware.auth_token [-] >>>>> Identity response: {"error": {"message": "Could not find token: >>>>> 6d3e77f2cf75495297c34133bc765bd8", "code": 404, "title": "Not Found"}} >>>>> 2015-04-20 05:23:52.047 4736 ERROR nova.api.openstack >>>>> [req-95702e17-0960-451a-b5c1-828172b27bc0 None] Caught error: auth_url was >>>>> not provided to the Neutron client >>>>> 2015-04-20 05:33:52.024 4731 WARNING keystonemiddleware.auth_token [-] >>>>> Identity response: {"error": {"message": "Could not find token: >>>>> ccaee1e396614df5a753a331345e3e24", "code": 404, "title": "Not Found"}} >>>>> 2015-04-20 06:33:52.013 4740 ERROR nova.api.openstack >>>>> [req-711d560d-e9c3-48f3-aabe-28e138ae06e1 None] Caught error: auth_url was >>>>> not provided to the Neutron client >>>>> 2015-04-20 06:43:51.887 4737 WARNING keystonemiddleware.auth_token [-] >>>>> Identity response: {"error": {"message": "Could not find token: >>>>> 6ad03759c8f446d09d4babde1aa7f63d", "code": 404, "title": "Not Found"}} >>>>> 2015-04-20 07:43:52.178 4747 ERROR nova.api.openstack >>>>> [req-d0caca70-b960-4612-b99f-ca42292cb6b4 None] Caught error: auth_url was >>>>> not provided to the Neutron client >>>>> 2015-04-20 07:53:52.037 4742 WARNING keystonemiddleware.auth_token [-] >>>>> Identity response: {"error": {"message": "Could not find token: >>>>> 0bcaad841985487bbfe4bce038b49c9e", "code": 404, "title": "Not Found"}} >>>>> >>>>> >>>>> *grep error /var/log/rabbitmq/rabbit\@localhost.log* >>>>> >>>>> >>>>> AMQP connection <0.865.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1312.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.499.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1657.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9349.0> (running), 
channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.761.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.2801.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.4447.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1065.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.712.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.618.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.664.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9428.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1222.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9530.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9496.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.6746.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9479.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9280.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9295.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1203.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1420.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.9513.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1048.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.741.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1734.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1390.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1400.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> AMQP connection <0.1430.0> (running), channel 0 - error: >>>>> {amqp_error,connection_forced, >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rcritten at redhat.com Mon Apr 20 14:29:20 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 20 Apr 2015 10:29:20 -0400 Subject: [Rdo-list] security q: horizon and ssl In-Reply-To: References: <55349934.8030903@redhat.com> <55350298.6070104@redhat.com> Message-ID: <55350D40.3040604@redhat.com> Mohammed Arafa wrote: > this is in packstack answer file. 
there is no mention of vnc > > CONFIG_HORIZON_SSL=y > > # PEM encoded certificate to be used for ssl on the https server, > # leave blank if one should be generated, this certificate should not > # require a passphrase > CONFIG_SSL_CERT= > > # SSL keyfile corresponding to the certificate if one was entered > CONFIG_SSL_KEY= > > # PEM encoded CA certificates from which the certificate chain of the > # server certificate can be assembled. > CONFIG_SSL_CACHAIN= Just trying to be helpful. You'd need to configure this post-install. I've never actually tried securing this particular service so I don't know if there be dragons or not. rob From hguemar at fedoraproject.org Mon Apr 20 15:00:02 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 20 Apr 2015 15:00:02 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150420150003.02F5C60A958A@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-04-22 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From jslagle at redhat.com Mon Apr 20 16:28:36 2015 From: jslagle at redhat.com (James Slagle) Date: Mon, 20 Apr 2015 12:28:36 -0400 Subject: [Rdo-list] rdo-manager installs failing at horizon's manage.py step In-Reply-To: <55349B62.1060806@redhat.com> References: <20150419191035.GQ29586@teletran-1.redhat.com> <55349B62.1060806@redhat.com> Message-ID: <20150420162836.GW29586@teletran-1.redhat.com> On Mon, Apr 20, 2015 at 08:23:30AM +0200, Matthias Runge wrote: > On 19/04/15 21:10, James Slagle wrote: > >FYI, all rdo-manager installs are currently failing during the Undercloud > >installation with: > > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/horizon/templates/horizon/_conf.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/index.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/undeploy_confirmation.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/scale_out.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/deploy_confirmation.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/_workflow_base.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_boxes/templates/tuskar_boxes/overview/index.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/register.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: 
/usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/post_deploy_init.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/_fullscreen_workflow_base.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/index.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/base.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/base_detail.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/nodes/detail.html > >16:26:11 Notice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]/returns: Compressing... > >16:26:11 Error: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: Failed to call refresh: /usr/share/openstack-dashboard/manage.py compress returned 1 instead of one of [0] > >16:26:11 Error: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: /usr/share/openstack-dashboard/manage.py compress returned 1 instead of one of [0] > > > >Initial investigation indicates it's probably related to this Horizon packaging > >change from Friday: > > > >https://review.gerrithub.io/#/c/230349/ > > The issue is produced in tuskar-ui. > > Your mentioned config change was a change in compression to remove the > compression step from package build. > That should make it easier for anyone to package add-ons to horizon. > It was briefly discussed on #rdo and on this list last week > > https://www.redhat.com/archives/rdo-list/2015-April/msg00046.html > (and a few other mentions) > > For horizon itself, I saw some changes in static files required, because > location of them changed during kilo cycle. That's the pain, we're going > through, when packaging stuff depending on Horizon upstream. This should be resolved now, http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/ has been rolled back to a previous known good repo. The root cause of the actual issue is that the horizon package was no longer running collectstatic, yet puppet-horizon was relying on that having been done before compressing the static assets. 
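Roughly, the ordering that has to hold is: collect the static files first, then compress them. A sketch of the equivalent manual steps, assuming the stock dashboard path used in the log above:

python /usr/share/openstack-dashboard/manage.py collectstatic --noinput
python /usr/share/openstack-dashboard/manage.py compress --force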
This will be addressed by these 2 patches:

puppet-horizon: https://review.openstack.org/175404
openstack-packages/horizon: https://review.gerrithub.io/#/c/230650/

-- -- James Slagle --

From christian at berendt.io Mon Apr 20 17:10:36 2015 From: christian at berendt.io (Christian Berendt) Date: Mon, 20 Apr 2015 19:10:36 +0200 Subject: Re: [Rdo-list] Glance packaging and RDO Kilo In-Reply-To: References: Message-ID: <5535330C.1040602@berendt.io>

On 04/20/2015 12:12 PM, Alan Pevec wrote:
>> 1- Split packages, with the following deps:
>>
>> * -api and -registry depend on -common
>> * glance depends on -api and -registry

It looks like openstack-glance-api and openstack-glance-registry no longer require python-glance?

openstack-glance-api and openstack-glance-registry only require openstack-glance-common.

openstack-glance-api and openstack-glance-registry are not workable without python-glance. For me it does not make sense to not install python-glance.

Christian.

From mohammed.arafa at gmail.com Mon Apr 20 17:31:51 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 20 Apr 2015 13:31:51 -0400 Subject: [Rdo-list] 10 days of rdo manager Message-ID:

Hello all. I am currently transitioning and have 10 days available to run with testing rdo manager. I am offering my help with testing and documenting as needed.

What do you guys need?

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pgsousa at gmail.com Mon Apr 20 17:49:27 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Mon, 20 Apr 2015 18:49:27 +0100 Subject: Re: [Rdo-list] 10 days of rdo manager In-Reply-To: References: Message-ID:

Hi,

I haven't played with it yet, but I would like to know what happens to overcloud nodes when you reboot, lose, or break your undercloud node.

Do overcloud nodes lose IP connectivity? My understanding is that overcloud nodes get DHCP from Neutron. Or do I need to have some HA for the undercloud in place?

Thanks, Pedro Sousa

On Mon, Apr 20, 2015 at 6:31 PM, Mohammed Arafa wrote:

> Hello all. I am currently transitioning and have 10 days available to run
> with testing rdo manager.
> I am offering my help with testing and documenting as needed.
>
> What do you guys need?
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Arkady_Kanevsky at dell.com Mon Apr 20 18:43:30 2015 From: Arkady_Kanevsky at dell.com (Arkady_Kanevsky at dell.com) Date: Mon, 20 Apr 2015 13:43:30 -0500 Subject: Re: [Rdo-list] 10 days of rdo manager In-Reply-To: References: Message-ID: <336424C1A5A44044B29030055527AA7504B8854C11@AUSX7MCPS301.AMER.DELL.COM>

Dell - Internal Use - Confidential

Mohammed,

Can you also test what happens when a single-node undercloud comes back?

Start with the simple case where the overcloud is fully deployed and functioning.

Then try restarting services in the overcloud while the undercloud is not there, while it is coming up, and finally when it is fully running again.

For the undercloud, start with a simple shutdown-then-boot case.

Then we can dive into what happens with sensu, which monitors the overcloud nodes, then tempest for testing the overcloud, and finally ceph monitoring of the overcloud.
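(A rough sketch of that first shutdown/boot pass on the virt setup; the undercloud VM name "instack" is taken from later mails in this thread, and the commands are only illustrative:

    virsh shutdown instack   # stop the undercloud VM cleanly
    # ...exercise the overcloud while the undercloud is down...
    virsh start instack      # boot the undercloud again
    openstack-status         # on the undercloud, spot-check that services came back

)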
Thanks, Arkady From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Mohammed Arafa Sent: Monday, April 20, 2015 12:32 PM To: rdo-list at redhat.com Subject: [Rdo-list] 10 days of rdo manager Hello all. I am currently transitioning and have 10 days available to run with testing rdo manager. I am offering my help with testing and documenting as needed. What do you guys need? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Mon Apr 20 19:17:44 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 20 Apr 2015 15:17:44 -0400 Subject: [Rdo-list] 10 days of rdo manager In-Reply-To: <336424C1A5A44044B29030055527AA7504B8854C11@AUSX7MCPS301.AMER.DELL.COM> References: <336424C1A5A44044B29030055527AA7504B8854C11@AUSX7MCPS301.AMER.DELL.COM> Message-ID: Hello Currently, I am still setting up my environment. Once it is setup properly, I will get around to your requests. At this very moment, I have this problem on instack-install-undercloud + setup-neutron -n /tmp/tmp.Kuen3PhqAF /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for novaclient.v2). The preferable way to get client class or object you can find in novaclient.client module. warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis for " 2015-04-20 19:00:03 - root - ERROR - Unexpected error during command execution Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/os_cloud_config/cmd/setup_neutron.py", line 77, in main keystone_client=keystone_client) File "/usr/lib/python2.7/site-packages/os_cloud_config/neutron.py", line 46, in initialize_neutron net = _create_net(neutron_client, network_desc, network_type, admin_tenant) File "/usr/lib/python2.7/site-packages/os_cloud_config/neutron.py", line 95, in _create_net return neutron.create_network({'network': network}) File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in with_params ret = self.function(instance, *args, **kwargs) File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 571, in create_network return self.post(self.networks_path, body=body) File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 298, in post headers=headers, params=params) File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 211, in do_request self._handle_fault_response(status_code, replybody) File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 185, in _handle_fault_response exception_handler_v20(status_code, des_error_body) File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 70, in exception_handler_v20 status_code=status_code) Conflict: Unable to create the flat network. Physical network ctlplane is in use. [2015-04-20 19:00:03,079] (os-refresh-config) [ERROR] during post-configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit status 1] [2015-04-20 19:00:03,079] (os-refresh-config) [ERROR] Aborting... I am using the generic instack.answers file. 
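(For context: the Conflict above means Neutron already has a network bound to the "ctlplane" physical network, typically one left over from an earlier install attempt. A quick way to confirm, assuming the stack user's undercloud credentials live in ~/stackrc (the file name is an assumption):

    source ~/stackrc
    neutron net-list    # look for a leftover ctlplane network
    neutron port-list   # and for ports still allocated on its subnet

)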
my network devices: [stack at instack ~]$ ip addr show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:e2:71:1b brd ff:ff:ff:ff:ff:ff inet 192.168.122.243/24 brd 192.168.122.255 scope global dynamic eth0 valid_lft 3318sec preferred_lft 3318sec inet6 fe80::5054:ff:fee2:711b/64 scope link valid_lft forever preferred_lft forever 3: eth1: mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000 link/ether 00:0c:63:21:8e:8c brd ff:ff:ff:ff:ff:ff 4: ovs-system: mtu 1500 qdisc noop state DOWN link/ether ba:52:30:e2:e4:9f brd ff:ff:ff:ff:ff:ff 5: br-ctlplane: mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:63:21:8e:8c brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane valid_lft forever preferred_lft forever inet6 fe80::20c:63ff:fe21:8e8c/64 scope link valid_lft forever preferred_lft forever 6: br-int: mtu 1500 qdisc noop state DOWN link/ether f6:a8:58:76:12:43 brd ff:ff:ff:ff:ff:ff 7: br-tun: mtu 1500 qdisc noop state DOWN link/ether 76:78:4c:9e:71:4d brd ff:ff:ff:ff:ff:ff any help is appreciated On Mon, Apr 20, 2015 at 2:43 PM, wrote: > *Dell - Internal Use - Confidential * > > Mohammed, > > Can you also test what happens when a single node undercloud comes back? > > Start with simple case when overcloud is fully deployed and functions. > > Then try restarting services in overcloud when undercloud is not there, > > When it is coming up, and finally when it is fully running again. > > For undercloud start with simple shutdown, then boot case. > > > > Then we can dive into what happens with sensu which is monitoring > overcloud nodes, then tempest for testing overcloud, and finally ceph > monitoring of overcloud. > > > > Thanks, > > Arkady > > > > *From:* rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] *On > Behalf Of *Mohammed Arafa > *Sent:* Monday, April 20, 2015 12:32 PM > *To:* rdo-list at redhat.com > *Subject:* [Rdo-list] 10 days of rdo manager > > > > Hello all. I am currently transitioning and have 10 days available to run > with testing rdo manager. > I am offering my help with testing and documenting as needed. > > What do you guys need? > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdo-info at redhat.com Mon Apr 20 19:38:33 2015 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 20 Apr 2015 19:38:33 +0000 Subject: [Rdo-list] [RDO] RDO Blog roundup, April 20 2015 Message-ID: <0000014cd856dc4d-88a78299-79f8-4139-8934-f004d8f999f2-000000@email.amazonses.com> rbowen started a discussion. RDO Blog roundup, April 20 2015 --- Follow the link below to check it out: https://www.rdoproject.org/forum/discussion/1012/rdo-blog-roundup-april-20-2015 Have a great day! From rbowen at redhat.com Mon Apr 20 20:35:47 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 20 Apr 2015 16:35:47 -0400 Subject: [Rdo-list] RDO/OpenStack meetups coming up (Monday, April 20, 2015) Message-ID: <55356323.90201@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. 
If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Monday, April 20 in Paris, FR: Meetup#14 Placement intelligent et SLA avanc? avec Nova Scheduler - http://www.meetup.com/OpenStack-France/events/221773375/ * Monday, April 20 in Guadalajara, MX: Tools for fullstack: "Especificacion de requerimientos" - http://www.meetup.com/OpenStack-GDL/events/221741194/ * Monday, April 20 in Cheltenham, E6, GB: Openstack Night - http://www.meetup.com/Cheltenham-GeekNights/events/220929989/ * Tuesday, April 21 in Melbourne, AU: OpenStack Conference, part of the CONNECT Show - http://www.meetup.com/Australian-OpenStack-User-Group/events/220314515/ * Tuesday, April 21 in King of Prussia, PA, US: [Tech Talk] OpenStack 101 - http://www.meetup.com/ValleyForgeTech/events/210471742/ * Tuesday, April 21 in Stuttgart, DE: Zweites spontanes und informelles Treffen zum Erfahrungsaustausch - http://www.meetup.com/OpenStack-Baden-Wuerttemberg/events/221970705/ * Wednesday, April 22 in Amersfoort, NL: Canonical Ubuntu OpenStack Roadshow - http://www.meetup.com/Openstack-Netherlands/events/221727218/ * Wednesday, April 22 in New York, NY, US: Deploying OpenStack with Mirantis FUEL/ Billing and Metering with Talligent - http://www.meetup.com/OpenStack-New-York-Meetup/events/220648431/ * Wednesday, April 22 in Durham, NC, US: Bonus April Meetup: OpenStack Storage Projects & An Overview of Open vStorage - http://www.meetup.com/Triangle-OpenStack-Meetup/events/221194351/ * Wednesday, April 22 in Athens, GR: Data storage in clouds - http://www.meetup.com/Athens-OpenStack-User-Group/events/219017094/ * Wednesday, April 22 in Istanbul, TR: OpenStack?te farkli mimari ornekleri, farklari, artilari, eksileri - http://www.meetup.com/Turkey-OpenStack-Meetup/events/221151225/ * Thursday, April 23 in Philadelphia, PA, US: Deploying OpenStack with Mirantis FUEL/ Billing and Metering with Talligent - http://www.meetup.com/Philly-OpenStack-Meetup-Group/events/220648495/ * Thursday, April 23 in Denver, CO, US: Ceph, A Distributed Object Store and File System - http://www.meetup.com/Distributed-Computing-Denver/events/220642902/ * Thursday, April 23 in Pasadena, CA, US: Highly Available, Performant, VXLAN Service Node. The April OpenStack LA Meetup. - http://www.meetup.com/OpenStack-LA/events/221553823/ * Thursday, April 23 in Berlin, DE: Infracoders Berlin Meetup - CI/CD with the OpenStack Infra Project - http://www.meetup.com/Infracoders-Berlin/events/220873576/ * Saturday, April 25 in Bangalore, IN: OpenStack India Meetup, Bangalore - http://www.meetup.com/Indian-OpenStack-User-Group/events/221391632/ * Saturday, April 25 in Beijing, CN: ?OpenStack??Docker - http://www.meetup.com/China-OpenStack-User-Group/events/221807891/ * Sunday, April 26 in Xian, CN: OpenStack ?? 
Meet Up April - http://www.meetup.com/Xian-OpenStack-Meetup/events/221926606/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From jslagle at redhat.com Mon Apr 20 22:42:56 2015 From: jslagle at redhat.com (James Slagle) Date: Mon, 20 Apr 2015 18:42:56 -0400 Subject: [Rdo-list] 10 days of rdo manager In-Reply-To: References: Message-ID: <20150420224256.GZ29586@teletran-1.redhat.com> On Mon, Apr 20, 2015 at 06:49:27PM +0100, Pedro Sousa wrote: > Hi, > > I haven't played with it yet, but I would like to know what happens to > overcloud nodes when you reboot/loose or break your undercloud node. > > Do overcloud nodes loose ip connectivity? My understanding is that > overcloud nodes get dhcp from Neutron. Or do I need to have some HA for > undercloud in place? With the network architecture we're moving towards, overcloud nodes will only get dhcp from Neutron for the provisioning network. The api, data, storage, etc network will support static IP configuration, or possibly, non-Neutron provided dhcp. Further, after initial provisioning, overcloud nodes will boot off the local disk instead of pxe booting via Neutron on subsequent reboots. localboot support is a relatively new feature in upstream Ironic, and we'll be enabling it soon in rdo-manager. With these changes, when the undercloud is stopped or goes down unexpectedly the overcloud would be unaffected. That being said, we still plan to have an HA undercloud at some point in the future. Also, the current virt-setup that allows testing rdo-manager via deploying the undercloud and overcloud all on vm's still relies on the undercloud vm to continue to run for connectivity to overcloud nodes. That could also be enhanced though to not require the undercloud vm to stay up. > > Thanks, > Pedro Sousa > > On Mon, Apr 20, 2015 at 6:31 PM, Mohammed Arafa > wrote: > > > Hello all. I am currently transitioning and have 10 days available to run > > with testing rdo manager. > > I am offering my help with testing and documenting as needed. > > > > What do you guys need? > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From jslagle at redhat.com Mon Apr 20 22:52:34 2015 From: jslagle at redhat.com (James Slagle) Date: Mon, 20 Apr 2015 18:52:34 -0400 Subject: [Rdo-list] 10 days of rdo manager In-Reply-To: References: <336424C1A5A44044B29030055527AA7504B8854C11@AUSX7MCPS301.AMER.DELL.COM> Message-ID: <20150420225234.GA29586@teletran-1.redhat.com> On Mon, Apr 20, 2015 at 03:17:44PM -0400, Mohammed Arafa wrote: > Hello > > Currently, I am still setting up my environment. Once it is setup properly, > I will get around to your requests. > > At this very moment, I have this problem on instack-install-undercloud > > + setup-neutron -n /tmp/tmp.Kuen3PhqAF > /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: > UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for > novaclient.v2). The preferable way to get client class or object you can > find in novaclient.client module. 
> warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis for > " > 2015-04-20 19:00:03 - root - ERROR - Unexpected error during command > execution > Traceback (most recent call last): > File > "/usr/lib/python2.7/site-packages/os_cloud_config/cmd/setup_neutron.py", > line 77, in main > keystone_client=keystone_client) > File "/usr/lib/python2.7/site-packages/os_cloud_config/neutron.py", line > 46, in initialize_neutron > net = _create_net(neutron_client, network_desc, network_type, > admin_tenant) > File "/usr/lib/python2.7/site-packages/os_cloud_config/neutron.py", line > 95, in _create_net > return neutron.create_network({'network': network}) > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > line 102, in with_params > ret = self.function(instance, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > line 571, in create_network > return self.post(self.networks_path, body=body) > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > line 298, in post > headers=headers, params=params) > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > line 211, in do_request > self._handle_fault_response(status_code, replybody) > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > line 185, in _handle_fault_response > exception_handler_v20(status_code, des_error_body) > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > line 70, in exception_handler_v20 > status_code=status_code) > Conflict: Unable to create the flat network. Physical network ctlplane is > in use. > [2015-04-20 19:00:03,079] (os-refresh-config) [ERROR] during post-configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit > status 1] > > [2015-04-20 19:00:03,079] (os-refresh-config) [ERROR] Aborting... > > > I am using the generic instack.answers file. > > my network devices: > [stack at instack ~]$ ip addr show > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: eth0: mtu 1500 qdisc pfifo_fast state > UP qlen 1000 > link/ether 52:54:00:e2:71:1b brd ff:ff:ff:ff:ff:ff > inet 192.168.122.243/24 brd 192.168.122.255 scope global dynamic eth0 > valid_lft 3318sec preferred_lft 3318sec > inet6 fe80::5054:ff:fee2:711b/64 scope link > valid_lft forever preferred_lft forever > 3: eth1: mtu 1500 qdisc pfifo_fast master > ovs-system state UP qlen 1000 > link/ether 00:0c:63:21:8e:8c brd ff:ff:ff:ff:ff:ff > 4: ovs-system: mtu 1500 qdisc noop state DOWN > link/ether ba:52:30:e2:e4:9f brd ff:ff:ff:ff:ff:ff > 5: br-ctlplane: mtu 1500 qdisc noqueue > state UNKNOWN > link/ether 00:0c:63:21:8e:8c brd ff:ff:ff:ff:ff:ff > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane > valid_lft forever preferred_lft forever > inet6 fe80::20c:63ff:fe21:8e8c/64 scope link > valid_lft forever preferred_lft forever > 6: br-int: mtu 1500 qdisc noop state DOWN > link/ether f6:a8:58:76:12:43 brd ff:ff:ff:ff:ff:ff > 7: br-tun: mtu 1500 qdisc noop state DOWN > link/ether 76:78:4c:9e:71:4d brd ff:ff:ff:ff:ff:ff > > any help is appreciated Hi Mohammed, Thanks for offering some time to try out rdo-manager. I take it you've found the documentation[1] given the progress you've made so far. 
It looks like you made it through the virt-setup based on the network device output. As for the setup-neutron error, the first thing that comes to mind is if this is a clean install on the instack vm? The only reason I can think of right off as to why the ctlplane network would be in use is if neutron ports were allocated on the assigned subnet. So, I'm wondering if this a 2nd run through of instack-install-undercloud perhaps after a failed deployment? [1] https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/ > > On Mon, Apr 20, 2015 at 2:43 PM, wrote: > > > *Dell - Internal Use - Confidential * > > > > Mohammed, > > > > Can you also test what happens when a single node undercloud comes back? > > > > Start with simple case when overcloud is fully deployed and functions. > > > > Then try restarting services in overcloud when undercloud is not there, > > > > When it is coming up, and finally when it is fully running again. > > > > For undercloud start with simple shutdown, then boot case. > > > > > > > > Then we can dive into what happens with sensu which is monitoring > > overcloud nodes, then tempest for testing overcloud, and finally ceph > > monitoring of overcloud. > > > > > > > > Thanks, > > > > Arkady > > > > > > > > *From:* rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] *On > > Behalf Of *Mohammed Arafa > > *Sent:* Monday, April 20, 2015 12:32 PM > > *To:* rdo-list at redhat.com > > *Subject:* [Rdo-list] 10 days of rdo manager > > > > > > > > Hello all. I am currently transitioning and have 10 days available to run > > with testing rdo manager. > > I am offering my help with testing and documenting as needed. > > > > What do you guys need? > > > > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From mohammed.arafa at gmail.com Mon Apr 20 23:16:17 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 20 Apr 2015 19:16:17 -0400 Subject: [Rdo-list] 10 days of rdo manager In-Reply-To: <20150420225234.GA29586@teletran-1.redhat.com> References: <336424C1A5A44044B29030055527AA7504B8854C11@AUSX7MCPS301.AMER.DELL.COM> <20150420225234.GA29586@teletran-1.redhat.com> Message-ID: James I am on my 4th run now. 1st 2 times as you say with a repeated run. then i deleted the instack vm and started fresh for the 3rd and 4th time. i believe it is because rabbitmq wasnt running. so between 3rd and 4th run i restarted rabbitmq. i have also discovered that the instack-install-undercloud doesnt keep logs in /var/log so it is a bit difficult to debug and have started to tee the output to log instead anyways, i am going to undefine the instack vm now and try again On Mon, Apr 20, 2015 at 6:52 PM, James Slagle wrote: > On Mon, Apr 20, 2015 at 03:17:44PM -0400, Mohammed Arafa wrote: > > Hello > > > > Currently, I am still setting up my environment. Once it is setup > properly, > > I will get around to your requests. > > > > At this very moment, I have this problem on instack-install-undercloud > > > > + setup-neutron -n /tmp/tmp.Kuen3PhqAF > > /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: > > UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for > > novaclient.v2). 
The preferable way to get client class or object you can > > find in novaclient.client module. > > warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis > for > > " > > 2015-04-20 19:00:03 - root - ERROR - Unexpected error during command > > execution > > Traceback (most recent call last): > > File > > "/usr/lib/python2.7/site-packages/os_cloud_config/cmd/setup_neutron.py", > > line 77, in main > > keystone_client=keystone_client) > > File "/usr/lib/python2.7/site-packages/os_cloud_config/neutron.py", > line > > 46, in initialize_neutron > > net = _create_net(neutron_client, network_desc, network_type, > > admin_tenant) > > File "/usr/lib/python2.7/site-packages/os_cloud_config/neutron.py", > line > > 95, in _create_net > > return neutron.create_network({'network': network}) > > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > > line 102, in with_params > > ret = self.function(instance, *args, **kwargs) > > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > > line 571, in create_network > > return self.post(self.networks_path, body=body) > > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > > line 298, in post > > headers=headers, params=params) > > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > > line 211, in do_request > > self._handle_fault_response(status_code, replybody) > > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > > line 185, in _handle_fault_response > > exception_handler_v20(status_code, des_error_body) > > File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", > > line 70, in exception_handler_v20 > > status_code=status_code) > > Conflict: Unable to create the flat network. Physical network ctlplane is > > in use. > > [2015-04-20 19:00:03,079] (os-refresh-config) [ERROR] during > post-configure > > phase. [Command '['dib-run-parts', > > '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero > exit > > status 1] > > > > [2015-04-20 19:00:03,079] (os-refresh-config) [ERROR] Aborting... > > > > > > I am using the generic instack.answers file. 
> > > > my network devices: > > [stack at instack ~]$ ip addr show > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > inet 127.0.0.1/8 scope host lo > > valid_lft forever preferred_lft forever > > inet6 ::1/128 scope host > > valid_lft forever preferred_lft forever > > 2: eth0: mtu 1500 qdisc pfifo_fast > state > > UP qlen 1000 > > link/ether 52:54:00:e2:71:1b brd ff:ff:ff:ff:ff:ff > > inet 192.168.122.243/24 brd 192.168.122.255 scope global dynamic > eth0 > > valid_lft 3318sec preferred_lft 3318sec > > inet6 fe80::5054:ff:fee2:711b/64 scope link > > valid_lft forever preferred_lft forever > > 3: eth1: mtu 1500 qdisc pfifo_fast > master > > ovs-system state UP qlen 1000 > > link/ether 00:0c:63:21:8e:8c brd ff:ff:ff:ff:ff:ff > > 4: ovs-system: mtu 1500 qdisc noop state DOWN > > link/ether ba:52:30:e2:e4:9f brd ff:ff:ff:ff:ff:ff > > 5: br-ctlplane: mtu 1500 qdisc noqueue > > state UNKNOWN > > link/ether 00:0c:63:21:8e:8c brd ff:ff:ff:ff:ff:ff > > inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane > > valid_lft forever preferred_lft forever > > inet6 fe80::20c:63ff:fe21:8e8c/64 scope link > > valid_lft forever preferred_lft forever > > 6: br-int: mtu 1500 qdisc noop state DOWN > > link/ether f6:a8:58:76:12:43 brd ff:ff:ff:ff:ff:ff > > 7: br-tun: mtu 1500 qdisc noop state DOWN > > link/ether 76:78:4c:9e:71:4d brd ff:ff:ff:ff:ff:ff > > > > any help is appreciated > > Hi Mohammed, > > Thanks for offering some time to try out rdo-manager. I take it you've > found > the documentation[1] given the progress you've made so far. It looks like > you > made it through the virt-setup based on the network device output. > > As for the setup-neutron error, the first thing that comes to mind is if > this > is a clean install on the instack vm? The only reason I can think of right > off > as to why the ctlplane network would be in use is if neutron ports were > allocated on the assigned subnet. So, I'm wondering if this a 2nd run > through > of instack-install-undercloud perhaps after a failed deployment? > > [1] > https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/ > > > > > On Mon, Apr 20, 2015 at 2:43 PM, wrote: > > > > > *Dell - Internal Use - Confidential * > > > > > > Mohammed, > > > > > > Can you also test what happens when a single node undercloud comes > back? > > > > > > Start with simple case when overcloud is fully deployed and functions. > > > > > > Then try restarting services in overcloud when undercloud is not there, > > > > > > When it is coming up, and finally when it is fully running again. > > > > > > For undercloud start with simple shutdown, then boot case. > > > > > > > > > > > > Then we can dive into what happens with sensu which is monitoring > > > overcloud nodes, then tempest for testing overcloud, and finally ceph > > > monitoring of overcloud. > > > > > > > > > > > > Thanks, > > > > > > Arkady > > > > > > > > > > > > *From:* rdo-list-bounces at redhat.com [mailto: > rdo-list-bounces at redhat.com] *On > > > Behalf Of *Mohammed Arafa > > > *Sent:* Monday, April 20, 2015 12:32 PM > > > *To:* rdo-list at redhat.com > > > *Subject:* [Rdo-list] 10 days of rdo manager > > > > > > > > > > > > Hello all. I am currently transitioning and have 10 days available to > run > > > with testing rdo manager. > > > I am offering my help with testing and documenting as needed. > > > > > > What do you guys need? 
> > > > > > > > > > > -- > > > > > > < > https://candidate.peoplecert.org/ReportsLink.aspx?argType=1&id=13D642E995903C076FA394F816CC136539DBA6A32D7305539E4219F5A650358C02CA2ED9F1F26319&AspxAutoDetectCookieSupport=1 > > > > > > *805010942448935* > > < > https://www.redhat.com/wapps/training/certification/verify.html?certNumber=805010942448935&verify=Verify > > > > > > *GR750055912MA* > > < > https://candidate.peoplecert.org/ReportsLink.aspx?argType=1&id=13D642E995903C076FA394F816CC136539DBA6A32D7305539E4219F5A650358C02CA2ED9F1F26319&AspxAutoDetectCookieSupport=1 > > > > > > *Link to me on LinkedIn * > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > -- James Slagle > -- > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jweber at cofront.net Mon Apr 20 23:45:58 2015 From: jweber at cofront.net (Jeff Weber) Date: Mon, 20 Apr 2015 19:45:58 -0400 Subject: [Rdo-list] RDO Juno incorrect OVS package Message-ID: The openvswitch 2.3.1 package is missing from the RDO Juno EPEL-7 repository found at https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/. It looks like all the other sub-packages got synced, but the main package didn't. Opened bz#1211072 as well for this issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Mon Apr 20 23:58:21 2015 From: jslagle at redhat.com (James Slagle) Date: Mon, 20 Apr 2015 19:58:21 -0400 Subject: [Rdo-list] 10 days of rdo manager In-Reply-To: <55358253.7090105@redhat.com> References: <20150420224256.GZ29586@teletran-1.redhat.com> <55358253.7090105@redhat.com> Message-ID: <20150420235821.GB29586@teletran-1.redhat.com> On Mon, Apr 20, 2015 at 03:48:51PM -0700, Dan Sneddon wrote: > On 04/20/2015 03:42 PM, James Slagle wrote: > > On Mon, Apr 20, 2015 at 06:49:27PM +0100, Pedro Sousa wrote: > >> Hi, > >> > >> I haven't played with it yet, but I would like to know what happens to > >> overcloud nodes when you reboot/loose or break your undercloud node. > >> > >> Do overcloud nodes loose ip connectivity? My understanding is that > >> overcloud nodes get dhcp from Neutron. Or do I need to have some HA for > >> undercloud in place? > > > > With the network architecture we're moving towards, overcloud nodes will only > > get dhcp from Neutron for the provisioning network. The api, data, storage, etc > > network will support static IP configuration, or possibly, non-Neutron provided > > dhcp. > > > > Further, after initial provisioning, overcloud nodes will boot off the local > > disk instead of pxe booting via Neutron on subsequent reboots. localboot > > support is a relatively new feature in upstream Ironic, and we'll be enabling > > it soon in rdo-manager. > > > > With these changes, when the undercloud is stopped or goes down unexpectedly > > the overcloud would be unaffected. That being said, we still plan to have an HA > > undercloud at some point in the future. > > > > Also, the current virt-setup that allows testing rdo-manager via deploying the > > undercloud and overcloud all on vm's still relies on the undercloud vm to > > continue to run for connectivity to overcloud nodes. That could also be > > enhanced though to not require the undercloud vm to stay up. 
> > > > > > When a node is configured for local boot, does it get a static IP > address? Maybe it turns the DHCP address into a static? Or does it > still rely on the undercloud for DHCP? The interface connected to the provisioning network will still get dhcp from neutron on the undercloud if that interface is configured to start on boot, and configured to try dhcp. Since we're currently including the dhcp-all-interfaces[1] element in our image builds, that will indeed be the case. If not, it wouldn't get any IP address. It doesn't seem like you'd want to configure it with a static IP. [1] https://github.com/openstack/diskimage-builder/tree/master/elements/dhcp-all-interfaces Note we've had a few requests to not default to including this element, or at least it make it configurable not to include it. -- -- James Slagle -- From dsneddon at redhat.com Tue Apr 21 00:14:56 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Mon, 20 Apr 2015 17:14:56 -0700 Subject: [Rdo-list] 10 days of rdo manager In-Reply-To: <20150420235821.GB29586@teletran-1.redhat.com> References: <20150420224256.GZ29586@teletran-1.redhat.com> <55358253.7090105@redhat.com> <20150420235821.GB29586@teletran-1.redhat.com> Message-ID: <55359680.2080201@redhat.com> On 04/20/2015 04:58 PM, James Slagle wrote: > On Mon, Apr 20, 2015 at 03:48:51PM -0700, Dan Sneddon wrote: >> On 04/20/2015 03:42 PM, James Slagle wrote: >>> On Mon, Apr 20, 2015 at 06:49:27PM +0100, Pedro Sousa wrote: >>>> Hi, >>>> >>>> I haven't played with it yet, but I would like to know what >>>> happens to overcloud nodes when you reboot/loose or break >>>> your undercloud node. >>>> >>>> Do overcloud nodes loose ip connectivity? My understanding >>>> is that overcloud nodes get dhcp from Neutron. Or do I need >>>> to have some HA for undercloud in place? >>> >>> With the network architecture we're moving towards, overcloud >>> nodes will only get dhcp from Neutron for the provisioning >>> network. The api, data, storage, etc network will support >>> static IP configuration, or possibly, non-Neutron provided >>> dhcp. >>> >>> Further, after initial provisioning, overcloud nodes will boot >>> off the local disk instead of pxe booting via Neutron on >>> subsequent reboots. localboot support is a relatively new >>> feature in upstream Ironic, and we'll be enabling it soon in >>> rdo-manager. >>> >>> With these changes, when the undercloud is stopped or goes >>> down unexpectedly the overcloud would be unaffected. That >>> being said, we still plan to have an HA undercloud at some >>> point in the future. >>> >>> Also, the current virt-setup that allows testing rdo-manager >>> via deploying the undercloud and overcloud all on vm's still >>> relies on the undercloud vm to continue to run for >>> connectivity to overcloud nodes. That could also be enhanced >>> though to not require the undercloud vm to stay up. >>> >>> >> >> When a node is configured for local boot, does it get a static >> IP address? Maybe it turns the DHCP address into a static? Or >> does it still rely on the undercloud for DHCP? > > The interface connected to the provisioning network will still get > dhcp from neutron on the undercloud if that interface is > configured to start on boot, and configured to try dhcp. Since > we're currently including the dhcp-all-interfaces[1] element in > our image builds, that will indeed be the case. > > If not, it wouldn't get any IP address. It doesn't seem like you'd > want to configure it with a static IP. 
> > [1] > https://github.com/openstack/diskimage-builder/tree/master/elements/dhcp-all-interfaces > > > Note we've had a few requests to not default to including this > element, or at least it make it configurable not to include it. > > -- -- James Slagle -- > I was just confirming that the undercloud is still a single point of failure for the provisioning interface, even with localboot. With that in mind, it should be possible to configure static IP addresses on the other interfaces. The OpenStack services would be configured to use the interfaces with static IPs to ensure that the undercloud isn't required for ongoing operation of the overcloud. The pieces to configure static IP addresses are just landing. We will also need the accompanying puppet logic to place the services onto the proper interfaces. That will follow soon. We should definitely remove (or have an option to disable) dhcp-all-interfaces. This could lead to unexpected behavior if an otherwise unconfigured interface is plugged in to the wrong network, and could even be a security risk. Without dhcp-all-interfaces we can still specify multiple DHCP interfaces in the net-config-*.yaml files if that behavior is desired. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From pgsousa at gmail.com Tue Apr 21 00:17:40 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 21 Apr 2015 01:17:40 +0100 Subject: [Rdo-list] 10 days of rdo manager In-Reply-To: <20150420224256.GZ29586@teletran-1.redhat.com> References: <20150420224256.GZ29586@teletran-1.redhat.com> Message-ID: Hi James, very glad to hear that, it was a show stopper for me. Do you think that network architecture will be available in Kilo? Regards, Pedro Sousa On Mon, Apr 20, 2015 at 11:42 PM, James Slagle wrote: > On Mon, Apr 20, 2015 at 06:49:27PM +0100, Pedro Sousa wrote: > > Hi, > > > > I haven't played with it yet, but I would like to know what happens to > > overcloud nodes when you reboot/loose or break your undercloud node. > > > > Do overcloud nodes loose ip connectivity? My understanding is that > > overcloud nodes get dhcp from Neutron. Or do I need to have some HA for > > undercloud in place? > > With the network architecture we're moving towards, overcloud nodes will > only > get dhcp from Neutron for the provisioning network. The api, data, > storage, etc > network will support static IP configuration, or possibly, non-Neutron > provided > dhcp. > > Further, after initial provisioning, overcloud nodes will boot off the > local > disk instead of pxe booting via Neutron on subsequent reboots. localboot > support is a relatively new feature in upstream Ironic, and we'll be > enabling > it soon in rdo-manager. > > With these changes, when the undercloud is stopped or goes down > unexpectedly > the overcloud would be unaffected. That being said, we still plan to have > an HA > undercloud at some point in the future. > > Also, the current virt-setup that allows testing rdo-manager via deploying > the > undercloud and overcloud all on vm's still relies on the undercloud vm to > continue to run for connectivity to overcloud nodes. That could also be > enhanced though to not require the undercloud vm to stay up. > > > > > > Thanks, > > Pedro Sousa > > > > On Mon, Apr 20, 2015 at 6:31 PM, Mohammed Arafa < > mohammed.arafa at gmail.com> > > wrote: > > > > > Hello all. I am currently transitioning and have 10 days available to > run > > > with testing rdo manager. 
> > > I am offering my help with testing and documenting as needed. > > > > > > What do you guys need? > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Tue Apr 21 01:31:27 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 20 Apr 2015 21:31:27 -0400 Subject: [Rdo-list] instack-build-images defaults to fedora Message-ID: hi https://bugzilla.redhat.com/show_bug.cgi?id=1213645 this page https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/build-images.html says The built images will automatically have the same base OS as the running undercloud. See the Note below to choose a different OS: so my instack is centos7.1 but all of a sudden i saw instack-build-images downloading fedora 21 images. + curl -o fedora-user.qcow2 -L http://cloud.fedoraproject.org/fedora-21.x86_64.qcow2 which meant that export NODE_DIST=centos7 is mandatory as fedora is the default Thank you -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lb.gutierrezg at gmail.com Tue Apr 21 04:38:24 2015 From: lb.gutierrezg at gmail.com (Luis Gutierrez) Date: Tue, 21 Apr 2015 01:38:24 -0300 Subject: [Rdo-list] Not connect to MongoDB Message-ID: hi, I need to install OpenStack all in one on centos 7, but an error occurred while configuring the installation MongoDB. the problem is that you can not connect to MongoDB, in the attached image detail. if they have a patch I would like to know how to install it please greetings and thanks [image: Im?genes integradas 1] -- *Luis Guti?rrez* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 26805 bytes Desc: not available URL: From lb.gutierrezg at gmail.com Tue Apr 21 05:50:11 2015 From: lb.gutierrezg at gmail.com (Luis Gutierrez) Date: Tue, 21 Apr 2015 02:50:11 -0300 Subject: [Rdo-list] Not connect to MongoDB In-Reply-To: References: Message-ID: This problem is related to bug 1212174 and solved by modifying a field configuration in the /usr/share/openstack-puppet/modules/mongodb/manifests/params.pp file. On the next page describes where to make the change. http://www.linuxfly.org/post/724/ regards 2015-04-21 1:38 GMT-03:00 Luis Gutierrez : > hi, > > I need to install OpenStack all in one on centos 7, but an error occurred > while configuring the installation MongoDB. > > the problem is that you can not connect to MongoDB, in the attached image > detail. > > if they have a patch I would like to know how to install it please > > greetings and thanks > > [image: Im?genes integradas 1] > -- > *Luis Guti?rrez* > > -- *Luis Basti?n Guti?rrez Guti?rrez* Ingeniero de Ejecuci?n en Inform?tica -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 26805 bytes Desc: not available URL:

From jcoufal at redhat.com Tue Apr 21 07:05:07 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Tue, 21 Apr 2015 09:05:07 +0200 Subject: Re: [Rdo-list] instack-build-images defaults to fedora In-Reply-To: References: Message-ID: <5535F6A3.2020601@redhat.com>

On 21/04/15 03:31, Mohammed Arafa wrote:
> hi
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1213645
>
> this page
> https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/build-images.html
>
> says
> The built images will automatically have the same base OS as the running
> undercloud. See the Note below to choose a different OS:
> so my instack is centos7.1 but all of a sudden i saw
> instack-build-images downloading fedora 21 images.
>
> + curl -o fedora-user.qcow2 -L
> http://cloud.fedoraproject.org/fedora-21.x86_64.qcow2
>
> which meant that
> export NODE_DIST=centos7 is mandatory as fedora is the default
>
> Thank you

Hi Mohammed,

Thanks a lot for trying rdo-manager.

The docs are correct here. It always builds a fedora-user image for testing the overcloud (the test script spins up a VM in the overcloud). The main overcloud images (overcloud-full.*) should be fully CentOS based in your case.

-- Jarda

From javier.pena at redhat.com Tue Apr 21 08:54:39 2015 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 21 Apr 2015 04:54:39 -0400 (EDT) Subject: Re: [Rdo-list] Not connect to MongoDB In-Reply-To: References: Message-ID: <1734336028.4464693.1429606478990.JavaMail.zimbra@redhat.com>

----- Original Message -----
> hi,
> I need to install OpenStack all-in-one on CentOS 7, but an error occurred
> while MongoDB was being configured during the installation.
> The problem is that it cannot connect to MongoDB; the attached image shows
> the details.
> If there is a patch, I would like to know how to install it, please.
> Greetings and thanks

Hi Luis,

If you are facing this issue in RDO Juno, the patch is ready and only waiting for the package to be created ( https://review.gerrithub.io/#/c/230643/ ). You can manually patch it by applying the fix from https://review.openstack.org/174445 .

Regards, Javier

> --
> Luis Gutiérrez
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 26805 bytes Desc: not available URL:

From ihrachys at redhat.com Tue Apr 21 08:55:08 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 21 Apr 2015 10:55:08 +0200 Subject: Re: [Rdo-list] Glance packaging and RDO Kilo In-Reply-To: <5535330C.1040602@berendt.io> References: <5535330C.1040602@berendt.io> Message-ID: <5536106C.6080203@redhat.com>

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

openstack-$service-common should depend on python-$service. That's what we did for Designate.

On 04/20/2015 07:10 PM, Christian Berendt wrote:
> On 04/20/2015 12:12 PM, Alan Pevec wrote:
>>> 1- Split packages, with the following deps:
>>>
>>> * -api and -registry depend on -common * glance depends on -api
>>> and -registry
>
> It looks like openstack-glance-api and openstack-glance-registry no
> longer require python-glance?
>
> openstack-glance-api and openstack-glance-registry only require
> openstack-glance-common.
> > openstack-glance-api and openstack-glance-registry are not
> workable without python-glance. For me it does not make sense to
> not install python-glance.
>
> Christian.
>
> _______________________________________________ Rdo-list mailing
> list Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-----BEGIN PGP SIGNATURE----- Version: GnuPG v2

iQEcBAEBCAAGBQJVNhBsAAoJEC5aWaUY1u57riIIAIKB+EXtvoBF1Xsu/6plqkfx
grsGYr4yPJ4JZoDMmnws55YQmzI3+Rr2LQVLccJ/KVTrqIT56L6OZRxGK58NIUO8
G2KvojBeN6LA2qGLZrjL//j7X3wT+PBenfoaLoHsvwIJBIGItdcsEbcN8jI4vfK0
xCB3nbKcft9QnRKrKNPIh+qWjil+EbC9Oqxb6x8xM/zdcD8jExzJgtPhain6SQOg
9agUYe1L9rCc+tZk+JbVmrKzk2LPFS2Dw6JLlisv/dhY0I0fkxDb/O3qif/mQKcu
EuHT1jLZucsI1D7OT7TamjW9iwEsW+4ucFd/o58yB8YannI+7MAcD7q8ToNDNFI=
=nEpO
-----END PGP SIGNATURE-----

From mohammed.arafa at gmail.com Tue Apr 21 10:46:03 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 21 Apr 2015 06:46:03 -0400 Subject: Re: [Rdo-list] instack-build-images defaults to fedora In-Reply-To: <5535F6A3.2020601@redhat.com> References: <5535F6A3.2020601@redhat.com> Message-ID:

Jaromir

there is no mention of "fedora" on the page. that is what misled me into believing that since my instack is centos7.1, the images generated would also be centos7.1. instead it said "The built images will automatically have the same base OS as the running"

i believe the documentation needs to be updated

thanks

On Tue, Apr 21, 2015 at 3:05 AM, Jaromir Coufal wrote:

> On 21/04/15 03:31, Mohammed Arafa wrote:
>
>> hi
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1213645
>>
>> this page
>> https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/build-images.html
>>
>> says
>> The built images will automatically have the same base OS as the running
>> undercloud. See the Note below to choose a different OS:
>> so my instack is centos7.1 but all of a sudden i saw
>> instack-build-images downloading fedora 21 images.
>>
>> + curl -o fedora-user.qcow2 -L
>> http://cloud.fedoraproject.org/fedora-21.x86_64.qcow2
>>
>> which meant that
>> export NODE_DIST=centos7 is mandatory as fedora is the default
>>
>> Thank you
>>
>
> Hi Mohammed,
>
> Thanks a lot for trying rdo-manager.
>
> The docs are correct here. It always builds a fedora-user image for testing
> the overcloud (the test script spins up a VM in the overcloud).
> The main overcloud images (overcloud-full.*) should be fully CentOS based
> in your case.
>
> -- Jarda
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jcoufal at redhat.com Tue Apr 21 11:40:31 2015 From: jcoufal at redhat.com (Jaromir Coufal) Date: Tue, 21 Apr 2015 13:40:31 +0200 Subject: Re: [Rdo-list] instack-build-images defaults to fedora In-Reply-To: References: <5535F6A3.2020601@redhat.com> Message-ID: <5536372F.9060501@redhat.com>

Yeah, this is a good point. We should move away from this script and replace it with a proper CLI call that builds just the overcloud image (the same OS unless you change it); that should resolve the confusion.
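(In the meantime, the explicit workaround is the NODE_DIST override Mohammed already quoted; nothing here goes beyond what the thread states:

    export NODE_DIST=centos7
    instack-build-images

That pins the overcloud-full images to CentOS 7 instead of relying on any default.)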
Until then we need to update the docs: https://review.gerrithub.io/#/c/230721/

Thanks -- Jarda

On 21/04/15 12:46, Mohammed Arafa wrote:
> Jaromir
>
> there is no mention of "fedora" on the page. that is what misled me
> into believing that since my instack is centos7.1, the images generated
> would also be centos7.1
> instead it said "The built images will automatically have the same base
> OS as the running"
>
> i believe the documentation needs to be updated
>
> thanks
>
> On Tue, Apr 21, 2015 at 3:05 AM, Jaromir Coufal
> wrote:
>
> On 21/04/15 03:31, Mohammed Arafa wrote:
>
> hi
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1213645
>
> this page
> https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/build-images.html
>
> says
> The built images will automatically have the same base OS as the
> running
> undercloud. See the Note below to choose a different OS:
> so my instack is centos7.1 but all of a sudden i saw
> instack-build-images downloading fedora 21 images.
>
> + curl -o fedora-user.qcow2 -L
> http://cloud.fedoraproject.org/fedora-21.x86_64.qcow2
>
> which meant that
> export NODE_DIST=centos7 is mandatory as fedora is the default
>
> Thank you
>
> Hi Mohammed,
>
> Thanks a lot for trying rdo-manager.
>
> The docs are correct here. It always builds a fedora-user image for
> testing the overcloud (the test script spins up a VM
> in the overcloud). The main overcloud images (overcloud-full.*) should
> be fully CentOS based
> in your case.
>
> -- Jarda
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From mohammed.arafa at gmail.com Tue Apr 21 12:28:36 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 21 Apr 2015 08:28:36 -0400 Subject: [Rdo-list] rdo-manager - verify services are running on each node Message-ID:

hi

so, once the rdo-manager setup is complete, how does one verify that it is complete and correct? that is, what services should be running on each server (instack, baremetal 0 and 1, and ceph)? instantiating an instance would not be the optimal method :)

openstack-status is not a good tool for this, of course, as it doesn't support ironic or tuskar (unless it gets updated in some future version)

thanks

-- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pbrady at redhat.com Tue Apr 21 13:00:22 2015 From: pbrady at redhat.com (=?windows-1252?Q?P=E1draig_Brady?=) Date: Tue, 21 Apr 2015 14:00:22 +0100 Subject: Re: [Rdo-list] rdo-manager - verify services are running on each node In-Reply-To: References: Message-ID: <553649E6.80906@redhat.com>

On 21/04/15 13:28, Mohammed Arafa wrote:
> hi
>
> so, once the rdo-manager setup is complete, how does one verify that it is complete and correct? that is, what services should be running on each server (instack, baremetal 0 and 1, and ceph)? instantiating an instance would not be the optimal method :)
>
> openstack-status is not a good tool for this, of course, as it doesn't support ironic or tuskar (unless it gets updated in some future version)

openstack-status should probably get updated at least. It's meant to be a catch-all status reporting tool. Please log bug(s) with the exact services you would like reported.
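(In the meantime, a blunt way to eyeball what is actually running on each node is plain systemd over ssh. This is a sketch only: the node addresses and the heat-admin login are assumptions about a typical rdo-manager virt setup, and the unit-name globs will miss anything named differently:

    for h in 192.0.2.10 192.0.2.11; do
      echo "== $h =="
      ssh heat-admin@$h 'systemctl list-units "openstack-*" "neutron-*" --no-legend'
    done

)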
Note also openstack-status is not currently distributed, and only reports on the current node.

thanks, Pádraig.

From phaurep at gmail.com Tue Apr 21 13:00:54 2015 From: phaurep at gmail.com (pauline phaure) Date: Tue, 21 Apr 2015 15:00:54 +0200 Subject: [Rdo-list] failed to allocate network Message-ID:

hey everyone. Yesterday I discovered that I was running out of disk space because I had mispartitioned my disks, so I decided to start from scratch. Now I have two baremetal servers, both with 1.8TB allocated to root. But I still get the message *no valid host was found* when I try to spawn a big instance. When I try it with a small instance I get the error *Failed to allocate the network(s)*, not rescheduling. Any ideas that might help me? Please find my logs attached.

[root at localhost ~]# df -h
Sys. de fichiers         Taille Utilisé Dispo Uti% Monté sur
/dev/mapper/centos-root  1,8T   2,3G    1,8T   1%  /
devtmpfs                 5,8G   0       5,8G   0%  /dev
tmpfs                    5,8G   0       5,8G   0%  /dev/shm
tmpfs                    5,8G   17M     5,8G   1%  /run
tmpfs                    5,8G   0       5,8G   0%  /sys/fs/cgroup
/dev/sda1                7,0G   133M    6,9G   2%  /boot
/dev/mapper/centos-home  52G    33M     52G    1%  /home
tmpfs                    5,8G   17M     5,8G   1%  /run/netns

[root at localhost ~]# free -m
              total   used   free  shared  buff/cache  available
Mem:          11842   9943    208      32        1690       1386
Swap:         10239      1  10238

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dhcp-agent.log Type: application/octet-stream Size: 5238 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova-api.log Type: application/octet-stream Size: 332653 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova-compute.log Type: application/octet-stream Size: 200296 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova-scheduler.log Type: application/octet-stream Size: 2450 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: openvswitch-agent.log Type: application/octet-stream Size: 22650 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ovsdb-server.log Type: application/octet-stream Size: 345 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: server.log Type: application/octet-stream Size: 1157036 bytes Desc: not available URL:

From lars at redhat.com Tue Apr 21 14:32:55 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 21 Apr 2015 10:32:55 -0400 Subject: Re: [Rdo-list] failed to allocate network In-Reply-To: References: Message-ID: <20150421143255.GD10224@redhat.com>

On Tue, Apr 21, 2015 at 03:00:54PM +0200, pauline phaure wrote:
> When I try it with a small instance I get the error *Failed to allocate
> the network(s)*, not rescheduling. Any ideas that might help me?

Are there any errors that crop up in the service logs when you try to launch an instance? I would start by watching both the nova-scheduler and nova-compute logs when you boot an instance.

-- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From phaurep at gmail.com Tue Apr 21 14:44:12 2015 From: phaurep at gmail.com (pauline phaure) Date: Tue, 21 Apr 2015 16:44:12 +0200 Subject: [Rdo-list] failed to allocate network In-Reply-To: <20150421143255.GD10224@redhat.com> References: <20150421143255.GD10224@redhat.com> Message-ID: the errors I found are these: nova-scheduler.log 2015-04-21 11:18:03.540 30016 AUDIT nova.service [-] Starting scheduler node (version 2014.2.2-1.el7) 2015-04-21 11:18:04.137 30016 INFO oslo.messaging._drivers.impl_rabbit [req-cb56623e-2b4c-4081-a3bd-b82f4550cf50 ] Connecting to AMQP server on 192.168.2.34:5672 2015-04-21 11:18:04.154 30016 INFO oslo.messaging._drivers.impl_rabbit [req-cb56623e-2b4c-4081-a3bd-b82f4550cf50 ] Connected to AMQP server on 192.168.2.34:5672 2015-04-21 11:54:31.839 30016 INFO oslo.messaging._drivers.impl_rabbit [req-1c90a911-f0f2-4516-967d-5250f0dff88a ] Connecting to AMQP server on 192.168.2.34:5672 2015-04-21 11:54:31.856 30016 INFO oslo.messaging._drivers.impl_rabbit [req-1c90a911-f0f2-4516-967d-5250f0dff88a ] Connected to AMQP server on 192.168.2.34:5672 2015-04-21 12:00:51.494 30016 WARNING nova.scheduler.host_manager [req-5e515beb-e6dc-4058-b66e-e49506f75d82 None] Host has more disk space than database expected (1797gb > 1759gb) 2015-04-21 12:03:42.812 30016 WARNING nova.scheduler.host_manager [req-576f5647-e124-463f-856e-a00155c52c4e None] Host has more disk space than database expected (1716gb > 1679gb) 2015-04-21 12:03:42.820 30016 INFO nova.filters [req-576f5647-e124-463f-856e-a00155c52c4e None] Filter RamFilter returned 0 hosts 2015-04-21 12:06:56.411 30016 WARNING nova.scheduler.host_manager [req-67d189e3-6a14-4335-878a-f77cd2976b29 None] Host has more disk space than database expected (1654gb > 1579gb) 2015-04-21 13:42:26.111 30016 WARNING nova.scheduler.host_manager [req-bf732df1-eeea-45be-97fb-27139a7f13ac None] Host has more disk space than database expected (1654gb > 1578gb) 2015-04-21 13:48:48.034 30016 WARNING nova.scheduler.host_manager [req-3d3635cf-2000-49c5-992f-cf24feeb4f4f None] Host has more disk space than database expected (1653gb > 1579gb) 2015-04-21 14:14:48.326 30016 WARNING nova.scheduler.host_manager [req-c04d4b82-ff71-45a0-be26-2cacbcdba633 None] Host has more disk space than database expected (1716gb > 1679gb) 2015-04-21 14:14:48.672 30016 INFO nova.filters [req-c04d4b82-ff71-45a0-be26-2cacbcdba633 None] Filter RetryFilter returned 0 hosts 2015-04-21 14:15:14.627 30016 WARNING nova.scheduler.host_manager [req-dcd3a6ef-eaba-41b0-8bf6-ebb3691c7b41 None] Host has more disk space than database expected (1653gb > 1579gb) 2015-04-21 14:43:58.949 30016 WARNING nova.scheduler.host_manager [req-6c08dee0-2eb2-47d7-a3d1-6c70d9094f58 None] Host has more disk space than database expected (1653gb > 1579gb) nova-compute.log 2015-04-21 14:49:05.018 7743 *ERROR* nova.compute.manager [-] [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] Instance failed to spawn 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] Traceback (most recent call last): 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246, in _build_resources 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] yield resources 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager 
[instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] block_device_info=block_device_info) 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2622, in spawn 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] block_device_info, disk_info=disk_info) 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4446, in _create_domain_and_network 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] raise exception.VirtualInterfaceCreateException() 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] VirtualInterfaceCreateException: Virtual Interface creation failed 2015-04-21 14:49:05.018 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] 2015-04-21 14:49:05.018 7743 AUDIT nova.compute.manager [req-6c08dee0-2eb2-47d7-a3d1-6c70d9094f58 None] [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] Terminating instance 2015-04-21 14:49:05.027 7743 WARNING nova.virt.libvirt.driver [-] [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] During wait destroy, instance disappeared. 2015-04-21 14:49:05.171 7743 INFO nova.virt.libvirt.driver [-] [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] Deletion of /var/lib/nova/instances/6d4a32d5-ac96-4143-8f09-fc16f149db52_del complete 2015-04-21 14:49:05.268 7743 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain') 2015-04-21 14:49:05.268 7743 *ERRO*R nova.compute.manager [-] [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] Failed to allocate network(s) 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] Traceback (most recent call last): 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] block_device_info=block_device_info) 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2622, in spawn 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] block_device_info, disk_info=disk_info) 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4446, in _create_domain_and_network 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] raise exception.VirtualInterfaceCreateException() 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] VirtualInterfaceCreateException: Virtual Interface creation failed 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 
6d4a32d5-ac96-4143-8f09-fc16f149db52] 2015-04-21 14:49:05.273 7743 *ERROR *nova.compute.manager [-] [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] Build of instance 6d4a32d5-ac96-4143-8f09-fc16f149db52 aborted: Failed to allocate the network(s), not rescheduling. 2015-04-21 16:32 GMT+02:00 Lars Kellogg-Stedman : > On Tue, Apr 21, 2015 at 03:00:54PM +0200, pauline phaure wrote: > > I try to do it with a small instance I have the error *Failed to allocate > > the network(s),* not rescheduling. any ideas that might help me?? > > Are there any errors that crop up in the service logs when you try to > launch an instance? I would start by watching both the nova-scheduler > and nova-compute logs when you boot an instance. > > -- > Lars Kellogg-Stedman | larsks @ > {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Tue Apr 21 15:36:04 2015 From: trown at redhat.com (John Trowbridge) Date: Tue, 21 Apr 2015 11:36:04 -0400 Subject: [Rdo-list] [RDO-Manager] [AHC] allow matching without re-sending to ironic-discoverd Message-ID: <55366E64.60709@redhat.com> The current AHC workflow[1] requires us to send the already introspected nodes back to ironic-discoverd, if we change the matching rules after the initial introspection step. This is problematic, because if we want to match on the benchmark data, the benchmarks will need to be re-run. Currently, the edeploy plugin[2] to ironic-discoverd is doing the matching, and it only deals with data posted by the discovery ramdisk. Running the benchmarks can be very time consuming on a typical production server, and we already store the results in the ironic db. The benchmarks should not vary much between runs, so this time is wasted in future runs. One solution would be to add a feature to the benchmark analysis tool, ironic-cardiff,[3] to do the subsequent rounds of matching. This would be straight forward as this tool already gets an ironic client, and already requires the hardware library which has the matching logic. I would like to gather feedback on whether this approach seems reasonable, or if there are any better suggestions to solve this problem. [1] https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/ahc-workflow.html [2] https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/edeploy.py [3] https://github.com/rdo-management/rdo-ramdisk-tools/blob/master/rdo_ramdisk_tools/ironic_cardiff.py From ihrachys at redhat.com Tue Apr 21 16:06:03 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 21 Apr 2015 18:06:03 +0200 Subject: [Rdo-list] failed to allocate network In-Reply-To: References: <20150421143255.GD10224@redhat.com> Message-ID: <5536756B.7050607@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 04/21/2015 04:44 PM, pauline phaure wrote: > Host has more disk space than database expected Are you sure your compute node has enough virtual resources (in that particular case, disk space) to start an instance of your flavour of choice? 
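A quick way to check that, as a sketch: the nova-scheduler log above already shows "Filter RamFilter returned 0 hosts" for one request, so comparing the scheduler's view of free resources with the flavor being requested is a reasonable next step (keystonerc_admin is the packstack default credentials file, an assumption here; the flavor name is illustrative):

    source keystonerc_admin      # packstack admin credentials (assumed path)
    nova hypervisor-stats        # free_ram_mb / free_disk_gb as the scheduler sees them
    nova flavor-show m1.small    # ram / disk that the flavor requests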
/Ihar

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJVNnVrAAoJEC5aWaUY1u57iToH/AgH5zErxo+aclZ/O1DciJ5g
CISVtzBrHGZ/SdVfWi40fuAkyBasTtNKe5+nrH/WSl0LbW86+/7leXsjHIqGt5Dm
WvbpOHnBi3Zd7K86GFABNDc17pyf0O/vzV7QXN/0TxUbh2U4w2Mi+d3gfEj84r1c
d637fEVc9I/Cc6gq66q67IvlNd6RdRLhTPEbXU5x/hFMxWoH9VdDnZPfLTmx6TFS
njsMpt5JeojPRlzenJAAilv84fXR9D1HKAW/+BWcbipV7G3kckEqyyG+TMUjSQaj
ZFcG5hioxn4XawlLNlxhICgg6XzXisKyPIdqg3zf6SqLltrNTRmWgl5Z1QAnTs4=
=k80h
-----END PGP SIGNATURE-----

From roxenham at redhat.com Tue Apr 21 16:40:55 2015
From: roxenham at redhat.com (Rhys Oxenham)
Date: Tue, 21 Apr 2015 17:40:55 +0100
Subject: [Rdo-list] failed to allocate network
In-Reply-To:
References: <20150421143255.GD10224@redhat.com>
Message-ID:

Hi Pauline,

> On 21 Apr 2015, at 15:44, pauline phaure wrote:
>
> 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: 6d4a32d5-ac96-4143-8f09-fc16f149db52] VirtualInterfaceCreateException: Virtual Interface creation failed

Can you please upload sanitised configuration files for Nova and Neutron (with relevant plugin configuration files)?

Many thanks
Rhys

From mohammed.arafa at gmail.com Tue Apr 21 20:20:37 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Tue, 21 Apr 2015 16:20:37 -0400
Subject: [Rdo-list] rdo-manager - verify services are running on each node
In-Reply-To: <553649E6.80906@redhat.com>
References: <553649E6.80906@redhat.com>
Message-ID:

Pádraig

Thanks for your response. Bug opened at: https://bugzilla.redhat.com/show_bug.cgi?id=1214044

On Tue, Apr 21, 2015 at 9:00 AM, Pádraig Brady wrote:
> On 21/04/15 13:28, Mohammed Arafa wrote:
> > hi
> >
> > so, once rdo manager setup is complete, how does one verify it is
> > complete and correct? that is, what services should be running on each
> > server (instack, baremetal 0 and 1, and ceph)? instantiating an
> > instance would not be the optimal method :)
> >
> > openstack-status is not a good tool for this of course, as it doesn't
> > support ironic or tuskar (unless it will be updated in some future version)
>
> openstack-status should probably get updated at least.
> It's meant to be a catch-all status reporting tool.
> Please log bug(s) with the exact services you would like reported.
> Note also openstack-status is not currently distributed, and
> only reports on the current node.
>
> thanks,
> Pádraig.

--
805010942448935
GR750055912MA
Link to me on LinkedIn

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mohammed.arafa at gmail.com Tue Apr 21 20:28:07 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Tue, 21 Apr 2015 16:28:07 -0400
Subject: [Rdo-list] rdo-manager - network
Message-ID:

hi

just want to confirm a couple of things regarding the network on the ironic + tuskar setup. while browsing horizon I kept getting errors on the network side: "no routers", "no networks". I verified from the neutron command line that a network called ctlplane on the 192.0.2.0/24 network exists. I also verified that neither router-list nor gateway-device-list was able to give me anything other than "404 Not Found: The resource could not be found."

is this what we are supposed to see?

--
805010942448935
GR750055912MA
Link to me on LinkedIn

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From lars at redhat.com Tue Apr 21 20:37:26 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 21 Apr 2015 16:37:26 -0400 Subject: [Rdo-list] rdo-manager failures: instack-install-undercloud failing for non-obvious reasons Message-ID: <20150421203726.GH10224@redhat.com> Running "instack-install-undercloud" is failing for me: + echo 'puppet apply exited with exit code 6' puppet apply exited with exit code 6 + '[' 6 '!=' 2 -a 6 '!=' 0 ']' + exit 6 [2015-04-21 20:13:20,426] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 6] Unfortunately, the failure doesn't provide much in the way of useful information. If I scroll up several pages, I find: Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]/ensure: defined content as '{md5}63d7331e825c865a97b7a8d1299841ff' Error: /Stage[main]/Main/Rabbitmq_user[neutron]: Could not evaluate: Command is still failing after 180 seconds expired! Error: /Stage[main]/Main/Rabbitmq_user[heat]: Could not evaluate: Command is still failing after 180 seconds expired! Error: /Stage[main]/Main/Rabbitmq_user[ceilometer]: Could not evaluate: Command is still failing after 180 seconds expired! Error: /Stage[main]/Main/Rabbitmq_user[nova]: Could not evaluate: Command is still failing after 180 seconds expired! Error: /Stage[main]/Main/Rabbitmq_vhost[/]: Could not evaluate: Command is still failing after 180 seconds expired! But again, that doesn't really tell me what is failing either (a command is still failing? Which command?). It looks like rabbitmq is having some problems: [stack at localhost ~]$ sudo rabbitmqctl status Status of node rabbit at localhost ... Error: unable to connect to node rabbit at localhost: nodedown DIAGNOSTICS =========== attempted to contact: [rabbit at localhost] rabbit at localhost: * connected to epmd (port 4369) on localhost * epmd reports node 'rabbit' running on port 25672 * TCP connection succeeded but Erlang distribution failed * suggestion: hostname mismatch? * suggestion: is the cookie set correctly? current node details: - node name: rabbitmqctl20640 at stack - home dir: /var/lib/rabbitmq - cookie hash: 4DA3U2yua3rw7wYLr+PbiQ== If I manually stop and then start rabbitmq: sudo systemctl stop rabbitmq-server sudo systemctl start rabbitmq-server It seems to work: # rabbitmqctl status Status of node rabbit at stack ... [{pid,20946}, {running_applications, [{rabbitmq_management,"RabbitMQ Management Console","3.3.5"}, ... After manually starting rabbit and re-running instack-install-undercloud, the process is able to successfully create the rabbitmq_user resources and completes successfully. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From mohammed.arafa at gmail.com Tue Apr 21 20:45:21 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Tue, 21 Apr 2015 16:45:21 -0400 Subject: [Rdo-list] rdo-manager failures: instack-install-undercloud failing for non-obvious reasons In-Reply-To: <20150421203726.GH10224@redhat.com> References: <20150421203726.GH10224@redhat.com> Message-ID: Lars Yes, I saw issues with RabbitMQ yesterday. it was basically hit and miss whether the undercloud installer would work. 
for me it was the 5th install or 20% of the time? to make matters a bit more difficult, i coudlnt find rabbitmq install logs in /var/log. are they located somewhere else? On Tue, Apr 21, 2015 at 4:37 PM, Lars Kellogg-Stedman wrote: > Running "instack-install-undercloud" is failing for me: > > + echo 'puppet apply exited with exit code 6' > puppet apply exited with exit code 6 > + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > + exit 6 > [2015-04-21 20:13:20,426] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 6] > > Unfortunately, the failure doesn't provide much in the way of useful > information. If I scroll up several pages, I find: > > Notice: > /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]/ensure: > defined content as '{md5}63d7331e825c865a97b7a8d1299841ff' > Error: /Stage[main]/Main/Rabbitmq_user[neutron]: Could not evaluate: > Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_user[heat]: Could not evaluate: > Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_user[ceilometer]: Could not evaluate: > Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_user[nova]: Could not evaluate: > Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_vhost[/]: Could not evaluate: Command > is still failing after 180 seconds expired! > > But again, that doesn't really tell me what is failing either (a > command is still failing? Which command?). > > It looks like rabbitmq is having some problems: > > [stack at localhost ~]$ sudo rabbitmqctl status > Status of node rabbit at localhost ... > Error: unable to connect to node rabbit at localhost: nodedown > > DIAGNOSTICS > =========== > > attempted to contact: [rabbit at localhost] > > rabbit at localhost: > * connected to epmd (port 4369) on localhost > * epmd reports node 'rabbit' running on port 25672 > * TCP connection succeeded but Erlang distribution failed > * suggestion: hostname mismatch? > * suggestion: is the cookie set correctly? > > current node details: > - node name: rabbitmqctl20640 at stack > - home dir: /var/lib/rabbitmq > - cookie hash: 4DA3U2yua3rw7wYLr+PbiQ== > > If I manually stop and then start rabbitmq: > > sudo systemctl stop rabbitmq-server > sudo systemctl start rabbitmq-server > > It seems to work: > > # rabbitmqctl status > Status of node rabbit at stack ... > [{pid,20946}, > {running_applications, > [{rabbitmq_management,"RabbitMQ Management Console","3.3.5"}, > ... > > After manually starting rabbit and re-running > instack-install-undercloud, the process is able to successfully create > the rabbitmq_user resources and completes successfully. > > -- > Lars Kellogg-Stedman | larsks @ > {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jslagle at redhat.com Tue Apr 21 21:00:58 2015 From: jslagle at redhat.com (James Slagle) Date: Tue, 21 Apr 2015 17:00:58 -0400 Subject: [Rdo-list] rdo-manager failures: instack-install-undercloud failing for non-obvious reasons In-Reply-To: <20150421203726.GH10224@redhat.com> References: <20150421203726.GH10224@redhat.com> Message-ID: <20150421210058.GD29586@teletran-1.redhat.com> On Tue, Apr 21, 2015 at 04:37:26PM -0400, Lars Kellogg-Stedman wrote: > Running "instack-install-undercloud" is failing for me: > > + echo 'puppet apply exited with exit code 6' > puppet apply exited with exit code 6 > + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > + exit 6 > [2015-04-21 20:13:20,426] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 6] > > Unfortunately, the failure doesn't provide much in the way of useful > information. If I scroll up several pages, I find: > > Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]/ensure: defined content as '{md5}63d7331e825c865a97b7a8d1299841ff' > Error: /Stage[main]/Main/Rabbitmq_user[neutron]: Could not evaluate: Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_user[heat]: Could not evaluate: Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_user[ceilometer]: Could not evaluate: Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_user[nova]: Could not evaluate: Command is still failing after 180 seconds expired! > Error: /Stage[main]/Main/Rabbitmq_vhost[/]: Could not evaluate: Command is still failing after 180 seconds expired! > > But again, that doesn't really tell me what is failing either (a > command is still failing? Which command?). Unfortunately we're pretty much at the mercy of puppet and all of the external puppet modules here in terms of its helpful output, and the point at which it chooses to stop applying after an error is encountered. Perhaps some people more familiar with puppet might chime in here on how to improve this. > > It looks like rabbitmq is having some problems: > > [stack at localhost ~]$ sudo rabbitmqctl status > Status of node rabbit at localhost ... > Error: unable to connect to node rabbit at localhost: nodedown > > DIAGNOSTICS > =========== > > attempted to contact: [rabbit at localhost] > > rabbit at localhost: > * connected to epmd (port 4369) on localhost > * epmd reports node 'rabbit' running on port 25672 > * TCP connection succeeded but Erlang distribution failed > * suggestion: hostname mismatch? > * suggestion: is the cookie set correctly? > > current node details: > - node name: rabbitmqctl20640 at stack > - home dir: /var/lib/rabbitmq > - cookie hash: 4DA3U2yua3rw7wYLr+PbiQ== > > If I manually stop and then start rabbitmq: > > sudo systemctl stop rabbitmq-server > sudo systemctl start rabbitmq-server > > It seems to work: > > # rabbitmqctl status > Status of node rabbit at stack ... > [{pid,20946}, > {running_applications, > [{rabbitmq_management,"RabbitMQ Management Console","3.3.5"}, > ... > > After manually starting rabbit and re-running > instack-install-undercloud, the process is able to successfully create > the rabbitmq_user resources and completes successfully. Are you on RHEL 7.1 or CentOS 7? I'll try to reproduce locally and see if I can get to the bottom of it. 
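In the meantime, the RabbitMQ side leaves traces that can be inspected directly on the undercloud; a short sketch of standard places to look (systemd journal plus RabbitMQ's own log directory):

    sudo journalctl -u rabbitmq-server --no-pager | tail -n 50
    sudo ls /var/log/rabbitmq/      # rabbit@<shortname>.log and -sasl.log live here
    sudo rabbitmqctl status         # compare the node name against the hostname
    hostname -f; hostname -s        # rabbit derives its node name from the short name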
> > -- > Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From lars at redhat.com Tue Apr 21 23:02:26 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 21 Apr 2015 19:02:26 -0400 Subject: [Rdo-list] rdo-manager failures: instack-install-undercloud failing for non-obvious reasons In-Reply-To: <20150421210058.GD29586@teletran-1.redhat.com> References: <20150421203726.GH10224@redhat.com> <20150421210058.GD29586@teletran-1.redhat.com> Message-ID: <20150421230226.GI10224@redhat.com> On Tue, Apr 21, 2015 at 05:00:58PM -0400, James Slagle wrote: > Are you on RHEL 7.1 or CentOS 7? I'll try to reproduce locally and see if I can > get to the bottom of it. This was on CentOS 7. I sort of suspect a hostname-related problem, only because I've seen exactly this sort of behavior before due to hostname issues (to which rabbitmq seems unusually sensitive). For the record: - the undercloud host boots up with the name "localhost.localdomain". - I run "hostnamectl set-hostname stack.localdomain". - I edit /etc/hosts and add "stack.localdomain" as an entry for 127.0.0.1. - After running instack-install-undercloud and having it explode because something is looking for "stack.localhost", I add that as an alias for 127.0.0.1 as well. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lars at redhat.com Wed Apr 22 01:38:26 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 21 Apr 2015 21:38:26 -0400 Subject: [Rdo-list] rdo-manager failures: heat stack-create timing out In-Reply-To: <20150421203726.GH10224@redhat.com> References: <20150421203726.GH10224@redhat.com> Message-ID: <20150422013825.GJ10224@redhat.com> After running "instack-deploy-overcloud --tuskar", I got: [...] + OVERCLOUD_YAML_PATH=tuskar_templates/plan.yaml + ENVIROMENT_YAML_PATH=tuskar_templates/environment.yaml + heat stack-create -t 240 -f tuskar_templates/plan.yaml -e tuskar_templates/environment.yaml overcloud ERROR: Timed out waiting for a reply to message ID 408a3091486a4924b8812f6aabcb7471 heat-api.log tells me: DEBUG root [-] Calling : create __call__ /usr/lib/python2.7/site-packages/heat/common/wsgi.py:655 INFO heat.openstack.common.policy [-] Can not find policy directory: policy.d DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 408a3091486a4924b8812f6aabcb7471 _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:311 ERROR root [-] Unexpected error occurred serving API: Timed out waiting for a reply to message ID 408a3091486a4924b8812f6aabcb7471 ...which makes me wonder if maybe rabbitmq is still causing problems. I rebooted the system, after which the deployment was able to successfully run 'heat stack-create'. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From apevec at gmail.com Wed Apr 22 02:15:54 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 22 Apr 2015 04:15:54 +0200 Subject: [Rdo-list] RDO Kilo RC snapshot - core packages Message-ID: Hi all, unofficial[*] Kilo RC builds are now available for testing. This snapshot completes packstack --allinone i.e. issue in provision_glance reported on IRC has been fixed. Quick installation HOWTO yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm # Following works out-of-the-box on CentOS7 # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F yum install epel-release cd /etc/yum.repos.d curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo After above steps, regular Quickstart continues: yum install openstack-packstack packstack --allinone NB this snapshot has NOT been tested with rdo-management! If testing rdo-management, please follow their instructions. Cheers, Alan [*] Apr21 evening snapshot built from stable/kilo branches in Delorean Kilo instance, official RDO Kilo builds will come from CentOS CloudSIG CBS From phaurep at gmail.com Wed Apr 22 07:16:54 2015 From: phaurep at gmail.com (pauline phaure) Date: Wed, 22 Apr 2015 09:16:54 +0200 Subject: [Rdo-list] Fwd: failed to allocate network In-Reply-To: References: <20150421143255.GD10224@redhat.com> Message-ID: ---------- Forwarded message ---------- From: pauline phaure Date: 2015-04-22 9:04 GMT+02:00 Subject: Fwd: [Rdo-list] failed to allocate network To: Rhys Oxenham , rdo-list ---------- Forwarded message ---------- From: pauline phaure Date: 2015-04-22 9:02 GMT+02:00 Subject: Re: [Rdo-list] failed to allocate network To: Rhys Oxenham Cc: rdo-list As I can't connect to my controller node ( I lost my connection to it) I'm uploading here the files I found on my compute node. 2015-04-21 18:40 GMT+02:00 Rhys Oxenham : > Hi Pauline, > > > On 21 Apr 2015, at 15:44, pauline phaure wrote: > > > > 2015-04-21 14:49:05.268 7743 TRACE nova.compute.manager [instance: > 6d4a32d5-ac96-4143-8f09-fc16f149db52] VirtualInterfaceCreateException: > Virtual Interface creation failed > > Can you please upload sanitised configuration files for Nova and Neutron > (with relevant plugin configuration files)? > > Many thanks > Rhys -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [ovs] # (StrOpt) Type of network to allocate for tenant networks. The # default value 'local' is useful only for single-box testing and # provides no connectivity between hosts. You MUST either change this # to 'vlan' and configure network_vlan_ranges below or change this to # 'gre' or 'vxlan' and configure tunnel_id_ranges below in order for # tenant networks to provide connectivity between hosts. Set to 'none' # to disable creation of tenant networks. # # tenant_network_type = local # Example: tenant_network_type = gre # Example: tenant_network_type = vxlan # (ListOpt) Comma-separated list of # [::] tuples enumerating ranges # of VLAN IDs on named physical networks that are available for # allocation. All physical networks listed are available for flat and # VLAN provider network creation. Specified ranges of VLAN IDs are # available for tenant network allocation if tenant_network_type is # 'vlan'. If empty, only gre, vxlan and local networks may be created. 
# # network_vlan_ranges = # Example: network_vlan_ranges = physnet1:1000:2999 # (BoolOpt) Set to True in the server and the agents to enable support # for GRE or VXLAN networks. Requires kernel support for OVS patch ports and # GRE or VXLAN tunneling. # # WARNING: This option will be deprecated in the Icehouse release, at which # point setting tunnel_type below will be required to enable # tunneling. # # enable_tunneling = False enable_tunneling = True # (StrOpt) The type of tunnel network, if any, supported by the plugin. If # this is set, it will cause tunneling to be enabled. If this is not set and # the option enable_tunneling is set, this will default to 'gre'. # # tunnel_type = # Example: tunnel_type = gre # Example: tunnel_type = vxlan # (ListOpt) Comma-separated list of : tuples # enumerating ranges of GRE or VXLAN tunnel IDs that are available for # tenant network allocation if tenant_network_type is 'gre' or 'vxlan'. # # tunnel_id_ranges = # Example: tunnel_id_ranges = 1:1000 # Do not change this parameter unless you have a good reason to. # This is the name of the OVS integration bridge. There is one per hypervisor. # The integration bridge acts as a virtual "patch bay". All VM VIFs are # attached to this bridge and then "patched" according to their network # connectivity. # # integration_bridge = br-int integration_bridge = br-int # Only used for the agent if tunnel_id_ranges (above) is not empty for # the server. In most cases, the default value should be fine. # # tunnel_bridge = br-tun tunnel_bridge = br-tun # Peer patch port in integration bridge for tunnel bridge # int_peer_patch_port = patch-tun # Peer patch port in tunnel bridge for integration bridge # tun_peer_patch_port = patch-int # Uncomment this line for the agent if tunnel_id_ranges (above) is not # empty for the server. Set local-ip to be the local IP address of # this hypervisor. # # local_ip = local_ip =192.168.2.35 # (ListOpt) Comma-separated list of : tuples # mapping physical network names to the agent's node-specific OVS # bridge names to be used for flat and VLAN networks. The length of # bridge names should be no more than 11. Each bridge must # exist, and should have a physical network interface configured as a # port. All physical networks listed in network_vlan_ranges on the # server should have mappings to appropriate bridges on each agent. # # bridge_mappings = # Example: bridge_mappings = physnet1:br-eth1 # (BoolOpt) Use veths instead of patch ports to interconnect the integration # bridge to physical networks. Support kernel without ovs patch port support # so long as it is set to True. # use_veth_interconnection = False [agent] # Agent's polling interval in seconds # polling_interval = 2 polling_interval = 2 # Minimize polling by monitoring ovsdb for interface changes # minimize_polling = True # When minimize_polling = True, the number of seconds to wait before # respawning the ovsdb monitor after losing communication with it # ovsdb_monitor_respawn_interval = 30 # (ListOpt) The types of tenant network tunnels supported by the agent. # Setting this will enable tunneling support in the agent. This can be set to # either 'gre' or 'vxlan'. If this is unset, it will default to [] and # disable tunneling support in the agent. When running the agent with the OVS # plugin, this value must be the same as "tunnel_type" in the "[ovs]" section. # When running the agent with ML2, you can specify as many values here as # your compute hosts supports. 
# # tunnel_types = tunnel_types =vxlan # Example: tunnel_types = gre # Example: tunnel_types = vxlan # Example: tunnel_types = vxlan, gre # (IntOpt) The port number to utilize if tunnel_types includes 'vxlan'. By # default, this will make use of the Open vSwitch default value of '4789' if # not specified. # # vxlan_udp_port = vxlan_udp_port =4789 # Example: vxlan_udp_port = 8472 # (IntOpt) This is the MTU size of veth interfaces. # Do not change unless you have a good reason to. # The default MTU size of veth interfaces is 1500. # This option has no effect if use_veth_interconnection is False # veth_mtu = # Example: veth_mtu = 1504 # (BoolOpt) Flag to enable l2-population extension. This option should only be # used in conjunction with ml2 plugin and l2population mechanism driver. It'll # enable plugin to populate remote ports macs and IPs (using fdb_add/remove # RPC calbbacks instead of tunnel_sync/update) on OVS agents in order to # optimize tunnel management. # # l2_population = False l2_population = False # Enable local ARP responder. Requires OVS 2.1. This is only used by the l2 # population ML2 MechanismDriver. # # arp_responder = False arp_responder = False # (BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing IP packet # carrying GRE/VXLAN tunnel. The default value is True. # # dont_fragment = True # (BoolOpt) Set to True on L2 agents to enable support # for distributed virtual routing. # # enable_distributed_routing = False enable_distributed_routing = False [securitygroup] # Firewall driver for realizing neutron security group function. # firewall_driver = neutron.agent.firewall.NoopFirewallDriver firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver # Example: firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver # Controls if neutron security group is enabled or not. # It should be false when you use nova security group. # enable_security_group = True #----------------------------------------------------------------------------- # Sample Configurations. #----------------------------------------------------------------------------- # # 1. With VLANs on eth1. # [ovs] # network_vlan_ranges = default:2000:3999 # tunnel_id_ranges = # integration_bridge = br-int # bridge_mappings = default:br-eth1 # # 2. With GRE tunneling. # [ovs] # network_vlan_ranges = # tunnel_id_ranges = 1:1000 # integration_bridge = br-int # tunnel_bridge = br-tun # local_ip = 10.0.0.3 # # 3. With VXLAN tunneling. # [ovs] # network_vlan_ranges = # tenant_network_type = vxlan # tunnel_type = vxlan # tunnel_id_ranges = 1:1000 # integration_bridge = br-int # tunnel_bridge = br-tun # local_ip = 10.0.0.3 # [agent] # tunnel_types = vxlan -------------- next part -------------- A non-text attachment was scrubbed... Name: neutron.conf Type: application/octet-stream Size: 23671 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova.conf Type: application/octet-stream Size: 103356 bytes Desc: not available URL: From apevec at gmail.com Wed Apr 22 07:43:22 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 22 Apr 2015 09:43:22 +0200 Subject: [Rdo-list] RDO Kilo RC snapshot - core packages In-Reply-To: References: Message-ID: > After above steps, regular Quickstart continues: Forgot one more: setenforce 0 openstack-selinux has not been updated for Kilo yet. 
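Since setenforce 0 does not survive a reboot, a small sketch for keeping SELinux permissive until openstack-selinux catches up (the standard /etc/selinux/config toggle):

    sudo setenforce 0    # immediate, but non-persistent
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    getenforce           # should now report Permissive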
> yum install openstack-packstack
> packstack --allinone

From bderzhavets at hotmail.com Wed Apr 22 11:02:32 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 22 Apr 2015 07:02:32 -0400
Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
In-Reply-To:
References:
Message-ID:

Alan,

# packstack --allinone completes successfully on CentOS 7.1. However, when attaching the interface for the private subnet to the neutron router (as demo or as admin), the port status stays down. I tested it via Horizon and via the Neutron CLI; the result was the same. A launched instance (CirrOS) cannot access the nova metadata server to obtain its instance-id:

Lease of 50.0.0.12 obtained, lease time 86400
cirros-ds 'net' up at 7.14
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 7.47. request failed
failed 2/20: up 12.81. request failed
failed 3/20: up 15.82. request failed
. . . . . . . . .
failed 18/20: up 78.28. request failed
failed 19/20: up 81.27. request failed
failed 20/20: up 86.50. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 89.53. searched: nocloud configdrive ec2
failed to get instance-id of datasource

Thanks.
Boris

> Date: Wed, 22 Apr 2015 04:15:54 +0200
> From: apevec at gmail.com
> To: rdo-list at redhat.com
> Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
>
> Hi all,
>
> unofficial[*] Kilo RC builds are now available for testing. This
> snapshot completes packstack --allinone i.e. issue in provision_glance
> reported on IRC has been fixed.
>
> Quick installation HOWTO
>
> yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
> # Following works out-of-the-box on CentOS7
> # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
> yum install epel-release
> cd /etc/yum.repos.d
> curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo
>
> After above steps, regular Quickstart continues:
> yum install openstack-packstack
> packstack --allinone
>
> NB this snapshot has NOT been tested with rdo-management! If testing
> rdo-management, please follow their instructions.
>
> Cheers,
> Alan
>
> [*] Apr21 evening snapshot built from stable/kilo branches in
> Delorean Kilo instance, official RDO Kilo builds will come from CentOS
> CloudSIG CBS
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lars at redhat.com Wed Apr 22 15:01:08 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Wed, 22 Apr 2015 11:01:08 -0400
Subject: [Rdo-list] rdo-manager failures: failed to deploy Nova instances
Message-ID: <20150422150108.GO10224@redhat.com>

The stack deployment implemented by "instack-deploy-overcloud --tuskar" failed because Heat was unable to create Nova instances:

$ nova list
+----...-+------...-+--------+------------+-------------+---------------------+
| ID ... | Name ... | Status | Task State | Power State | Networks            |
+----...-+------...-+--------+------------+-------------+---------------------+
| 56f... | ov-k2... | ERROR  | -          | NOSTATE     | ctlplane=192.0.2.11 |
| f1f... | ov-oc... | ERROR  | -          | NOSTATE     | ctlplane=192.0.2.10 |
+----...-+------...-+--------+------------+-------------+---------------------+

Running "nova show" reveals the following error:

"message": "No valid host was found.
There are not enough hosts available." Running "instack-ironic-deployment --show-profile" seems to indicate that the appropriate nodes were discovered and were assigned the correct roles: Querying assigned profiles ... f32c7e5c-d06c-4811-b0ba-76da571f91da "profile:control,boot_option:local" 5a04f842-7d6d-4d41-b26e-e71f121259d6 "profile:compute,boot_option:local" DONE. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jistr at redhat.com Wed Apr 22 15:05:26 2015 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Wed, 22 Apr 2015 17:05:26 +0200 Subject: [Rdo-list] rdo-manager failures: instack-install-undercloud failing for non-obvious reasons In-Reply-To: <20150421210058.GD29586@teletran-1.redhat.com> References: <20150421203726.GH10224@redhat.com> <20150421210058.GD29586@teletran-1.redhat.com> Message-ID: <5537B8B6.8000405@redhat.com> On 21.4.2015 23:00, James Slagle wrote: > On Tue, Apr 21, 2015 at 04:37:26PM -0400, Lars Kellogg-Stedman wrote: >> Running "instack-install-undercloud" is failing for me: >> >> + echo 'puppet apply exited with exit code 6' >> puppet apply exited with exit code 6 >> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' >> + exit 6 >> [2015-04-21 20:13:20,426] (os-refresh-config) [ERROR] during configure >> phase. [Command '['dib-run-parts', >> '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit >> status 6] >> >> Unfortunately, the failure doesn't provide much in the way of useful >> information. If I scroll up several pages, I find: >> >> Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]/ensure: defined content as '{md5}63d7331e825c865a97b7a8d1299841ff' >> Error: /Stage[main]/Main/Rabbitmq_user[neutron]: Could not evaluate: Command is still failing after 180 seconds expired! >> Error: /Stage[main]/Main/Rabbitmq_user[heat]: Could not evaluate: Command is still failing after 180 seconds expired! >> Error: /Stage[main]/Main/Rabbitmq_user[ceilometer]: Could not evaluate: Command is still failing after 180 seconds expired! >> Error: /Stage[main]/Main/Rabbitmq_user[nova]: Could not evaluate: Command is still failing after 180 seconds expired! >> Error: /Stage[main]/Main/Rabbitmq_vhost[/]: Could not evaluate: Command is still failing after 180 seconds expired! >> >> But again, that doesn't really tell me what is failing either (a >> command is still failing? Which command?). > > Unfortunately we're pretty much at the mercy of puppet and all of the external > puppet modules here in terms of its helpful output, and the point at which it > chooses to stop applying after an error is encountered. Perhaps some people > more familiar with puppet might chime in here on how to improve this. Changing "puppet apply" to "puppet apply -d" here [1] should give you more output including the commands which are being run at each step. (Hoping i've found the right spot, i'm not very familiar with instack-undercloud.) Perhaps an env variable could be added to switch on the debug output? (It shouldn't be on by default i guess because Puppet prints a lot of stuff then, including potentially sensitive info.) Regarding the problem as a whole, the hostname/domain issue you outlined in another e-mail might be a good clue. 
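A minimal pre-flight check along those lines, as a sketch (facter is available on the undercloud since Puppet is installed there):

    # Fail early if Puppet's view of the FQDN disagrees with the system's.
    if [ "$(facter fqdn)" = "$(hostname -f)" ]; then
        echo "FQDN consistent: $(hostname -f)"
    else
        echo "MISMATCH: facter=$(facter fqdn) hostname=$(hostname -f)"
    fi
    getent hosts "$(hostname -f)"   # should resolve, e.g. via /etc/hosts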
It certainly wouldn't be the first time i've seen problems with Puppet caused by FQDN settings. In general it's a good idea to verify that `facter fqdn` prints the same thing as `hostname -f` before running Puppet. Cheers J. [1] https://github.com/rdo-management/instack-undercloud/blob/6f75c8dc3c37d489763b7310a7b57d00e1e70da2/elements/puppet-stack-config/os-refresh-config/configure.d/50-puppet-stack-config#L7 > >> >> It looks like rabbitmq is having some problems: >> >> [stack at localhost ~]$ sudo rabbitmqctl status >> Status of node rabbit at localhost ... >> Error: unable to connect to node rabbit at localhost: nodedown >> >> DIAGNOSTICS >> =========== >> >> attempted to contact: [rabbit at localhost] >> >> rabbit at localhost: >> * connected to epmd (port 4369) on localhost >> * epmd reports node 'rabbit' running on port 25672 >> * TCP connection succeeded but Erlang distribution failed >> * suggestion: hostname mismatch? >> * suggestion: is the cookie set correctly? >> >> current node details: >> - node name: rabbitmqctl20640 at stack >> - home dir: /var/lib/rabbitmq >> - cookie hash: 4DA3U2yua3rw7wYLr+PbiQ== >> >> If I manually stop and then start rabbitmq: >> >> sudo systemctl stop rabbitmq-server >> sudo systemctl start rabbitmq-server >> >> It seems to work: >> >> # rabbitmqctl status >> Status of node rabbit at stack ... >> [{pid,20946}, >> {running_applications, >> [{rabbitmq_management,"RabbitMQ Management Console","3.3.5"}, >> ... >> >> After manually starting rabbit and re-running >> instack-install-undercloud, the process is able to successfully create >> the rabbitmq_user resources and completes successfully. > > Are you on RHEL 7.1 or CentOS 7? I'll try to reproduce locally and see if I can > get to the bottom of it. > >> >> -- >> Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} >> Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > > >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > -- James Slagle > -- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From jistr at redhat.com Wed Apr 22 15:08:32 2015 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Wed, 22 Apr 2015 17:08:32 +0200 Subject: [Rdo-list] rdo-manager failures: instack-install-undercloud failing for non-obvious reasons In-Reply-To: <5537B8B6.8000405@redhat.com> References: <20150421203726.GH10224@redhat.com> <20150421210058.GD29586@teletran-1.redhat.com> <5537B8B6.8000405@redhat.com> Message-ID: <5537B970.7020309@redhat.com> On 22.4.2015 17:05, Ji?? Str?nsk? wrote: > On 21.4.2015 23:00, James Slagle wrote: >> On Tue, Apr 21, 2015 at 04:37:26PM -0400, Lars Kellogg-Stedman wrote: >>> Running "instack-install-undercloud" is failing for me: >>> >>> + echo 'puppet apply exited with exit code 6' >>> puppet apply exited with exit code 6 >>> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' >>> + exit 6 >>> [2015-04-21 20:13:20,426] (os-refresh-config) [ERROR] during configure >>> phase. [Command '['dib-run-parts', >>> '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit >>> status 6] >>> >>> Unfortunately, the failure doesn't provide much in the way of useful >>> information. 
If I scroll up several pages, I find: >>> >>> Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]/ensure: defined content as '{md5}63d7331e825c865a97b7a8d1299841ff' >>> Error: /Stage[main]/Main/Rabbitmq_user[neutron]: Could not evaluate: Command is still failing after 180 seconds expired! >>> Error: /Stage[main]/Main/Rabbitmq_user[heat]: Could not evaluate: Command is still failing after 180 seconds expired! >>> Error: /Stage[main]/Main/Rabbitmq_user[ceilometer]: Could not evaluate: Command is still failing after 180 seconds expired! >>> Error: /Stage[main]/Main/Rabbitmq_user[nova]: Could not evaluate: Command is still failing after 180 seconds expired! >>> Error: /Stage[main]/Main/Rabbitmq_vhost[/]: Could not evaluate: Command is still failing after 180 seconds expired! >>> >>> But again, that doesn't really tell me what is failing either (a >>> command is still failing? Which command?). >> >> Unfortunately we're pretty much at the mercy of puppet and all of the external >> puppet modules here in terms of its helpful output, and the point at which it >> chooses to stop applying after an error is encountered. Perhaps some people >> more familiar with puppet might chime in here on how to improve this. > > Changing "puppet apply" to "puppet apply -d" here [1] should give you > more output including the commands which are being run at each step. > (Hoping i've found the right spot, i'm not very familiar with > instack-undercloud.) Perhaps an env variable could be added to switch on > the debug output? (It shouldn't be on by default i guess because Puppet > prints a lot of stuff then, including potentially sensitive info.) > > Regarding the problem as a whole, the hostname/domain issue you outlined > in another e-mail might be a good clue. It certainly wouldn't be the > first time i've seen problems with Puppet caused by FQDN settings. In > general it's a good idea to verify that `facter fqdn` prints the same > thing as `hostname -f` before running Puppet. Actually, now i recall that staypuft-installer has a check for this ^^ built in, and if the two values don't match, it refuses to run. Maybe we should do the same with instack-undercloud. J. > > Cheers > > J. > > [1] > https://github.com/rdo-management/instack-undercloud/blob/6f75c8dc3c37d489763b7310a7b57d00e1e70da2/elements/puppet-stack-config/os-refresh-config/configure.d/50-puppet-stack-config#L7 > >> >>> >>> It looks like rabbitmq is having some problems: >>> >>> [stack at localhost ~]$ sudo rabbitmqctl status >>> Status of node rabbit at localhost ... >>> Error: unable to connect to node rabbit at localhost: nodedown >>> >>> DIAGNOSTICS >>> =========== >>> >>> attempted to contact: [rabbit at localhost] >>> >>> rabbit at localhost: >>> * connected to epmd (port 4369) on localhost >>> * epmd reports node 'rabbit' running on port 25672 >>> * TCP connection succeeded but Erlang distribution failed >>> * suggestion: hostname mismatch? >>> * suggestion: is the cookie set correctly? >>> >>> current node details: >>> - node name: rabbitmqctl20640 at stack >>> - home dir: /var/lib/rabbitmq >>> - cookie hash: 4DA3U2yua3rw7wYLr+PbiQ== >>> >>> If I manually stop and then start rabbitmq: >>> >>> sudo systemctl stop rabbitmq-server >>> sudo systemctl start rabbitmq-server >>> >>> It seems to work: >>> >>> # rabbitmqctl status >>> Status of node rabbit at stack ... >>> [{pid,20946}, >>> {running_applications, >>> [{rabbitmq_management,"RabbitMQ Management Console","3.3.5"}, >>> ... 
>>> >>> After manually starting rabbit and re-running >>> instack-install-undercloud, the process is able to successfully create >>> the rabbitmq_user resources and completes successfully. >> >> Are you on RHEL 7.1 or CentOS 7? I'll try to reproduce locally and see if I can >> get to the bottom of it. >> >>> >>> -- >>> Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} >>> Cloud Engineering / OpenStack | http://blog.oddbit.com/ >> >> >> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> -- >> -- James Slagle >> -- >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mohammed.arafa at gmail.com Wed Apr 22 15:46:38 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 22 Apr 2015 11:46:38 -0400 Subject: [Rdo-list] rdo-manager - instack-install-undercloud fail Message-ID: hi just did a reinstall and it failed. rabbitmq again. i am attaching the screen grabs. if someone wants the logs, i can send them. but i expect to get rid of the instack vm later this afternoon if i cannot get instack to run after fiddling with rabbitmq. logs again, pls specify which logs and the location you need my hosts file which worked yesterday [stack at instack ~]$ cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 127.0.0.1 instack.marafa.vm pertinent output of openstack-status == Support services == openvswitch: active dbus: active rabbitmq-server: failed (disabled on boot) memcached: active == Keystone users == /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient. 'python-keystoneclient.', DeprecationWarning) Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0037d194-7b75-4c9e-a48f-c8ac122a99ff) == Glance images == Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-3fed983d-76b2-43cb-b9ff-b0445e470773) == Nova managed services == ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-1cafa2c3-fa56-4ea5-804b-4910c0fde9ba) == Nova networks == ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-3de8f916-c7c6-4e9e-a309-73359261366c) == Nova instance flavors == ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-6f360f3f-8f0f-4d65-b489-e7050df1fd49) == Nova instances == ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-37f006f0-d121-4ba5-9663-717e1457d54d) -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: instack-install-undercloud.log.2015-04-22_15-19-02 Type: application/octet-stream Size: 685976 bytes Desc: not available URL: From mohammed.arafa at gmail.com Wed Apr 22 15:49:56 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 22 Apr 2015 11:49:56 -0400 Subject: [Rdo-list] rdo-manager - instack-install-undercloud fail In-Reply-To: References: Message-ID: this might also be useful [root at instack ~]# netstat -tupane | grep 4369 tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 0 62670 8536/epmd On Wed, Apr 22, 2015 at 11:46 AM, Mohammed Arafa wrote: > hi > > just did a reinstall and it failed. rabbitmq again. > > i am attaching the screen grabs. if someone wants the logs, i can send > them. but i expect to get rid of the instack vm later this afternoon if i > cannot get instack to run after fiddling with rabbitmq. > > logs again, pls specify which logs and the location you need > > my hosts file which worked yesterday > > [stack at instack ~]$ cat /etc/hosts > 127.0.0.1 localhost localhost.localdomain localhost4 > localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > > 127.0.0.1 instack.marafa.vm > > > pertinent output of openstack-status > > == Support services == > openvswitch: active > dbus: active > rabbitmq-server: failed (disabled on boot) > memcached: active > == Keystone users == > /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: > DeprecationWarning: The keystone CLI is deprecated in favor of > python-openstackclient. For a Python library, continue using > python-keystoneclient. > 'python-keystoneclient.', DeprecationWarning) > Could not find user: admin (Disable debug mode to suppress these details.) > (HTTP 401) (Request-ID: req-0037d194-7b75-4c9e-a48f-c8ac122a99ff) > == Glance images == > Could not find user: admin (Disable debug mode to suppress these details.) > (HTTP 401) (Request-ID: req-3fed983d-76b2-43cb-b9ff-b0445e470773) > == Nova managed services == > ERROR (Unauthorized): Could not find user: admin (Disable debug mode to > suppress these details.) (HTTP 401) (Request-ID: > req-1cafa2c3-fa56-4ea5-804b-4910c0fde9ba) > == Nova networks == > ERROR (Unauthorized): Could not find user: admin (Disable debug mode to > suppress these details.) (HTTP 401) (Request-ID: > req-3de8f916-c7c6-4e9e-a309-73359261366c) > == Nova instance flavors == > ERROR (Unauthorized): Could not find user: admin (Disable debug mode to > suppress these details.) (HTTP 401) (Request-ID: > req-6f360f3f-8f0f-4d65-b489-e7050df1fd49) > == Nova instances == > ERROR (Unauthorized): Could not find user: admin (Disable debug mode to > suppress these details.) (HTTP 401) (Request-ID: > req-37f006f0-d121-4ba5-9663-717e1457d54d) > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From mohammed.arafa at gmail.com Wed Apr 22 15:53:15 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 22 Apr 2015 11:53:15 -0400 Subject: [Rdo-list] rdo-manager - instack-install-undercloud fail In-Reply-To: References: Message-ID: 

apologies i was looking up epmd and the port because of the rabbitmq status

[stack at instack ~]$ sudo service rabbitmq-server status
Redirecting to /bin/systemctl status rabbitmq-server.service
rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; disabled)
  Drop-In: /etc/systemd/system/rabbitmq-server.service.d
           └─limits.conf
   Active: failed (Result: exit-code) since Wed 2015-04-22 15:31:53 UTC; 15min ago
 Main PID: 10749 (code=exited, status=1/FAILURE)
   CGroup: /system.slice/rabbitmq-server.service

Apr 22 15:31:50 instack.marafa.vm rabbitmqctl[10778]: Error: unable to connect to node rabbit at instack: nodedown
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: DIAGNOSTICS
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: ===========
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: attempted to contact: [rabbit at instack]
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: rabbit at instack:
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: * unable to connect to epmd (port 4369) on instack: address (cannot connect to host/port)
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: current node details:
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: - node name: rabbitmqctl10778 at instack
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: - home dir: /var/lib/rabbitmq
Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: - cookie hash: Nbo8xxB/ssykTxV/kUOjdQ==
Apr 22 15:31:53 instack.marafa.vm systemd[1]: rabbitmq-server.service: control process exited, code=exited status=2
Apr 22 15:31:53 instack.marafa.vm systemd[1]: Failed to start RabbitMQ broker.
Apr 22 15:31:53 instack.marafa.vm systemd[1]: Unit rabbitmq-server.service entered failed state.

On Wed, Apr 22, 2015 at 11:49 AM, Mohammed Arafa wrote: > this might also be useful > > [root at instack ~]# netstat -tupane | grep 4369 > tcp 0 0 0.0.0.0:4369 0.0.0.0:* > LISTEN 0 62670 8536/epmd > > On Wed, Apr 22, 2015 at 11:46 AM, Mohammed Arafa > wrote: > >> hi >> >> just did a reinstall and it failed. rabbitmq again. >> >> i am attaching the screen grabs. if someone wants the logs, i can send >> them. but i expect to get rid of the instack vm later this afternoon if i >> cannot get instack to run after fiddling with rabbitmq. >> >> logs again, pls specify which logs and the location you need >> >> my hosts file which worked yesterday >> >> [stack at instack ~]$ cat /etc/hosts >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 >> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 >> >> 127.0.0.1 instack.marafa.vm >> >> pertinent output of openstack-status >> >> == Support services == >> openvswitch: active >> dbus: active >> rabbitmq-server: failed (disabled on boot) >> memcached: active >> == Keystone users == >> /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: >> DeprecationWarning: The keystone CLI is deprecated in favor of >> python-openstackclient. For a Python library, continue using >> python-keystoneclient. >> 'python-keystoneclient.', DeprecationWarning) >> Could not find user: admin (Disable debug mode to suppress these >> details.)
(HTTP 401) (Request-ID: req-0037d194-7b75-4c9e-a48f-c8ac122a99ff) >> == Glance images == >> Could not find user: admin (Disable debug mode to suppress these >> details.) (HTTP 401) (Request-ID: req-3fed983d-76b2-43cb-b9ff-b0445e470773) >> == Nova managed services == >> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >> suppress these details.) (HTTP 401) (Request-ID: >> req-1cafa2c3-fa56-4ea5-804b-4910c0fde9ba) >> == Nova networks == >> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >> suppress these details.) (HTTP 401) (Request-ID: >> req-3de8f916-c7c6-4e9e-a309-73359261366c) >> == Nova instance flavors == >> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >> suppress these details.) (HTTP 401) (Request-ID: >> req-6f360f3f-8f0f-4d65-b489-e7050df1fd49) >> == Nova instances == >> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >> suppress these details.) (HTTP 401) (Request-ID: >> req-37f006f0-d121-4ba5-9663-717e1457d54d) >> >> -- >> *805010942448935* >> *GR750055912MA* >> *Link to me on LinkedIn * > -- > *805010942448935* > *GR750055912MA* > *Link to me on LinkedIn * -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * 

From bderzhavets at hotmail.com Wed Apr 22 16:29:45 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 22 Apr 2015 12:29:45 -0400 Subject: [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: , , Message-ID: 

I made one more attempt at a `packstack --allinone` install on CentOS 7.1 KVM running on an F22 host. This time, when the new "demo_net" created after the install completed came up with its interface in the "down" state, I dropped the "private" subnet from the same tenant "demo" (the one created by the installer), which switched the interface of "demo_net" to "Active" status and let me launch a completely functional CirrOS VM via Horizon.

Then I reproduced the same procedure in a freshly created environment on CentOS 7.1 KVM running on an Ubuntu 15.04 host and got the same results: as soon as I dropped the "private" network created by the installer for the demo tenant, the interface for "demo_net" (created manually as a post-installation step) switched to "Active" status.
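For reference, the workaround above corresponds roughly to the following CLI steps. This is only a sketch: "router1", "private" and "private_subnet" are the names packstack's demo provisioning usually creates, so verify them first with `neutron router-list` and `neutron net-list`.

source ~/keystonerc_demo
# detach the installer-created private subnet from the demo router, then remove it
neutron router-interface-delete router1 private_subnet
neutron subnet-delete private_subnet
neutron net-delete private
# the router port attached to demo_net should now report status ACTIVE
neutron port-list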
I still have an issue with openstack-nova-novncproxy.service :-

[root at centos71 nova(keystone_admin)]# systemctl status openstack-nova-novncproxy.service -l
openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)
   Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; 18min ago
  Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 25663 (code=exited, status=1/FAILURE)

Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd.novncproxy import main
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd import baseproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.console import websocketproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: websockify.ProxyRequestHandler):
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'
Apr 22 18:41:51 centos71.localdomain systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE
Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit openstack-nova-novncproxy.service entered failed state.

Boris

From: bderzhavets at hotmail.com To: apevec at gmail.com; rdo-list at redhat.com Date: Wed, 22 Apr 2015 07:02:32 -0400 Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages 

Alan, 

# packstack --allinone 

completes successfully on CentOS 7.1. However, when attaching an interface for the private subnet to the neutron router (as demo or as admin), the port status is down. I tested it via Horizon and via the Neutron CLI; the result was the same. A launched (CirrOS) instance cannot access the nova metadata server to obtain its instance-id: 

Lease of 50.0.0.12 obtained, lease time 86400 
cirros-ds 'net' up at 7.14 
checking http://169.254.169.254/2009-04-04/instance-id 
failed 1/20: up 7.47. request failed 
failed 2/20: up 12.81. request failed 
failed 3/20: up 15.82. request failed 
. . . . . . . . . 
failed 18/20: up 78.28. request failed 
failed 19/20: up 81.27. request failed 
failed 20/20: up 86.50. request failed 
failed to read iid from metadata. tried 20 
no results found for mode=net. up 89.53. searched: nocloud configdrive ec2 
failed to get instance-id of datasource 

Thanks. 
Boris 

> Date: Wed, 22 Apr 2015 04:15:54 +0200 > From: apevec at gmail.com > To: rdo-list at redhat.com > Subject: [Rdo-list] RDO Kilo RC snapshot - core packages > > Hi all, > > unofficial[*] Kilo RC builds are now available for testing. This > snapshot completes packstack --allinone i.e. issue in provision_glance > reported on IRC has been fixed.
> > Quick installation HOWTO > > yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm > # Following works out-of-the-box on CentOS7 > # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F > yum install epel-release > cd /etc/yum.repos.d > curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo > > After above steps, regular Quickstart continues: > yum install openstack-packstack > packstack --allinone > > NB this snapshot has NOT been tested with rdo-management! If testing > rdo-management, please follow their instructions. > > > Cheers, > Alan > > [*] Apr21 evening snapshot built from stable/kilo branches in > Delorean Kilo instance, official RDO Kilo builds will come from CentOS > CloudSIG CBS > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From trown at redhat.com Wed Apr 22 16:49:25 2015 From: trown at redhat.com (John Trowbridge) Date: Wed, 22 Apr 2015 12:49:25 -0400 Subject: [Rdo-list] [RDO-Manager] [AHC] allow matching without re-sending to ironic-discoverd In-Reply-To: <55366E64.60709@redhat.com> References: <55366E64.60709@redhat.com> Message-ID: <5537D115.1090706@redhat.com> On 04/21/2015 11:36 AM, John Trowbridge wrote: > > I would like to gather feedback on whether this approach seems > reasonable, or if there are any better suggestions to solve this problem. > I put up a POC patch for this here: https://review.gerrithub.io/#/c/230849/ From mohammed.arafa at gmail.com Wed Apr 22 18:04:16 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 22 Apr 2015 14:04:16 -0400 Subject: [Rdo-list] rdo-manager python? Message-ID: so .. i edited instack's host file to show 127.0.0.1 instack.domain.tld instack #shortname added and on this pass rabbitmq worked then i got to neutron setup and it hung at + setup-neutron -n /tmp/tmp.miEe7xK1qL /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for novaclient.v2). The preferable way to get client class or object you can find in novaclient.client module. 
warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis for " neutron logs were full of this: 2015-04-22 17:14:37.666 10981 DEBUG oslo_messaging._drivers.impl_rabbit [-] Received recoverable error from kombu: on_error /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:789 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit Traceback (most recent call last): 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/kombu/utils/__init__.py", line 217, in retry_over_time 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit return fun(*args, **kwargs) 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 246, in connect 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit return self.connection 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 761, in connection 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit self._connection = self._establish_connection() 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 720, in _establish_connection 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit conn = self.transport.establish_connection() 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 115, in establish_connection 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit conn = self.Connection(**opts) 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 180, in __init__ 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit (10, 30), # tune 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 67, in wait 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit self.channel_id, allowed_methods) 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 240, in _wait_method 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit self.method_reader.read_method() 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 189, in read_method 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit raise m 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit IOError: Socket closed 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit 2015-04-22 17:14:37.667 10981 ERROR oslo_messaging._drivers.impl_rabbit [-] AMQP server 192.0.2.1:5672 closed the connection. Check login credentials: Socket closed -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Wed Apr 22 19:10:59 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 22 Apr 2015 15:10:59 -0400 Subject: [Rdo-list] rdo-manager python? In-Reply-To: References: Message-ID: my second run is on a fresh vm and it has the same issue. 
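A rough way to sanity-check the hostname setup that rabbitmq/epmd depend on, pulling together the hostnamectl/facter suggestions made elsewhere in this thread (a sketch only; the FQDN below is just an example):

sudo hostnamectl set-hostname instack.domain.tld   # preferred over hand-editing /etc/hosts
hostname -f     # should print the FQDN
facter fqdn     # should print the same value as hostname -f
getent hosts instack instack.domain.tld   # both short and full names should resolve
sudo rabbitmqctl status   # the node should come up as rabbit@<shortname>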
i am stuck thanks On Wed, Apr 22, 2015 at 2:04 PM, Mohammed Arafa wrote: > so .. i edited instack's host file to show > 127.0.0.1 instack.domain.tld instack #shortname added > > and on this pass rabbitmq worked then i got to neutron setup and it hung at > > + setup-neutron -n /tmp/tmp.miEe7xK1qL > /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: > UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for > novaclient.v2). The preferable way to get client class or object you can > find in novaclient.client module. > warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis > for " > > > neutron logs were full of this: > > > 2015-04-22 17:14:37.666 10981 DEBUG oslo_messaging._drivers.impl_rabbit > [-] Received recoverable error from kombu: on_error > /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:789 > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > Traceback (most recent call last): > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/kombu/utils/__init__.py", line 217, > in retry_over_time > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit return fun(*args, **kwargs) > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 246, in > connect > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit return self.connection > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 761, in > connection > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit self._connection = > self._establish_connection() > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 720, in > _establish_connection > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit conn = > self.transport.establish_connection() > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line > 115, in establish_connection > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit conn = self.Connection(**opts) > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 180, in > __init__ > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit (10, 30), # tune > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 67, > in wait > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit self.channel_id, allowed_methods) > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 240, in > _wait_method > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit self.method_reader.read_method() > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 189, > in read_method > 2015-04-22 17:14:37.666 10981 TRACE > oslo_messaging._drivers.impl_rabbit raise m > 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > IOError: Socket closed > 2015-04-22 
17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit > 2015-04-22 17:14:37.667 10981 ERROR oslo_messaging._drivers.impl_rabbit > [-] AMQP server 192.0.2.1:5672 closed the connection. Check login > credentials: Socket closed > > -- > *805010942448935* > *GR750055912MA* > *Link to me on LinkedIn * -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * 

From rbowen at redhat.com Wed Apr 22 21:13:22 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 22 Apr 2015 17:13:22 -0400 Subject: [Rdo-list] rdoproject.org plans Message-ID: <55380EF2.4090008@redhat.com> 

This has taken some doing, but I wanted to follow up on some earlier conversations about the rdoproject.org wiki and possible better tools. 

Shortly after the OpenStack Summit, we'll be migrating the website over to use Middleman, rather than the current Vanilla + Mediawiki toolset. Existing wiki documents, and the blog content in Vanilla, will be migrated over to markdown files that will then be editable via GitHub. (Exact details to be determined.) We're not the guinea pigs in this process - the ovirt.org site is undergoing a similar migration, so hopefully by the time they're done we'll know all the things that can go wrong and avoid them. 

This solves two immediate problems. One is that the Google-based auth on the website is broken, and at this time there's no working plugin that fixes the problem that we're having there. The other is simply bringing the site content management more into line with the tools that we're using for everything else, and putting it in revision control along the way. 

I will have more details, and dates, in a few weeks, but I wanted to let you know that I haven't dropped this - it's just taken a while to work out the process of importing and converting all of the content into the new format. Thanks again for your patience. 

-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ 

From jslagle at redhat.com Wed Apr 22 21:34:39 2015 From: jslagle at redhat.com (James Slagle) Date: Wed, 22 Apr 2015 17:34:39 -0400 Subject: [Rdo-list] rdo-manager failures: instack-install-undercloud failing for non-obvious reasons In-Reply-To: <5537B970.7020309@redhat.com> References: <20150421203726.GH10224@redhat.com> <20150421210058.GD29586@teletran-1.redhat.com> <5537B8B6.8000405@redhat.com> <5537B970.7020309@redhat.com> Message-ID: <20150422213439.GG29586@teletran-1.redhat.com> On Wed, Apr 22, 2015 at 05:08:32PM +0200, Jiří Stránský wrote: > On 22.4.2015 17:05, Jiří Stránský wrote: > >On 21.4.2015 23:00, James Slagle wrote: > >>On Tue, Apr 21, 2015 at 04:37:26PM -0400, Lars Kellogg-Stedman wrote: > >>>Running "instack-install-undercloud" is failing for me: > >>> > >>> + echo 'puppet apply exited with exit code 6' > >>> puppet apply exited with exit code 6 > >>> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > >>> + exit 6 > >>> [2015-04-21 20:13:20,426] (os-refresh-config) [ERROR] during configure > >>> phase. [Command '['dib-run-parts', > >>> '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > >>> status 6] > >>> > >>>Unfortunately, the failure doesn't provide much in the way of useful > >>>information. 
If I scroll up several pages, I find: > >>> > >>> Notice: /Stage[main]/Rabbitmq::Install::Rabbitmqadmin/File[/usr/local/bin/rabbitmqadmin]/ensure: defined content as '{md5}63d7331e825c865a97b7a8d1299841ff' > >>> Error: /Stage[main]/Main/Rabbitmq_user[neutron]: Could not evaluate: Command is still failing after 180 seconds expired! > >>> Error: /Stage[main]/Main/Rabbitmq_user[heat]: Could not evaluate: Command is still failing after 180 seconds expired! > >>> Error: /Stage[main]/Main/Rabbitmq_user[ceilometer]: Could not evaluate: Command is still failing after 180 seconds expired! > >>> Error: /Stage[main]/Main/Rabbitmq_user[nova]: Could not evaluate: Command is still failing after 180 seconds expired! > >>> Error: /Stage[main]/Main/Rabbitmq_vhost[/]: Could not evaluate: Command is still failing after 180 seconds expired! > >>> > >>>But again, that doesn't really tell me what is failing either (a > >>>command is still failing? Which command?). > >> > >>Unfortunately we're pretty much at the mercy of puppet and all of the external > >>puppet modules here in terms of its helpful output, and the point at which it > >>chooses to stop applying after an error is encountered. Perhaps some people > >>more familiar with puppet might chime in here on how to improve this. > > > >Changing "puppet apply" to "puppet apply -d" here [1] should give you > >more output including the commands which are being run at each step. > >(Hoping i've found the right spot, i'm not very familiar with > >instack-undercloud.) Perhaps an env variable could be added to switch on > >the debug output? (It shouldn't be on by default i guess because Puppet > >prints a lot of stuff then, including potentially sensitive info.) > > > >Regarding the problem as a whole, the hostname/domain issue you outlined > >in another e-mail might be a good clue. It certainly wouldn't be the > >first time i've seen problems with Puppet caused by FQDN settings. In > >general it's a good idea to verify that `facter fqdn` prints the same > >thing as `hostname -f` before running Puppet. > > Actually, now i recall that staypuft-installer has a check for this ^^ built > in, and if the two values don't match, it refuses to run. Maybe we should do > the same with instack-undercloud. That's something we could add. And also document that you should use 'hostnamectl set-hostname' to set the hostname. FWIW, I haven't been able to reproduce this issue testing on CentOS 7. I briefly tried to get 'facter fqdn' and 'hostnamectl' or 'hostname' to report something *different* to see if that would cause the problem, but despite my efforts to break it...they all always report the same thing. > > J. > > > > >Cheers > > > >J. > > > >[1] > >https://github.com/rdo-management/instack-undercloud/blob/6f75c8dc3c37d489763b7310a7b57d00e1e70da2/elements/puppet-stack-config/os-refresh-config/configure.d/50-puppet-stack-config#L7 > > > >> > >>> > >>>It looks like rabbitmq is having some problems: > >>> > >>> [stack at localhost ~]$ sudo rabbitmqctl status > >>> Status of node rabbit at localhost ... > >>> Error: unable to connect to node rabbit at localhost: nodedown > >>> > >>> DIAGNOSTICS > >>> =========== > >>> > >>> attempted to contact: [rabbit at localhost] > >>> > >>> rabbit at localhost: > >>> * connected to epmd (port 4369) on localhost > >>> * epmd reports node 'rabbit' running on port 25672 > >>> * TCP connection succeeded but Erlang distribution failed > >>> * suggestion: hostname mismatch? > >>> * suggestion: is the cookie set correctly? 
> >>> > >>> current node details: > >>> - node name: rabbitmqctl20640 at stack > >>> - home dir: /var/lib/rabbitmq > >>> - cookie hash: 4DA3U2yua3rw7wYLr+PbiQ== > >>> > >>>If I manually stop and then start rabbitmq: > >>> > >>> sudo systemctl stop rabbitmq-server > >>> sudo systemctl start rabbitmq-server > >>> > >>>It seems to work: > >>> > >>> # rabbitmqctl status > >>> Status of node rabbit at stack ... > >>> [{pid,20946}, > >>> {running_applications, > >>> [{rabbitmq_management,"RabbitMQ Management Console","3.3.5"}, > >>> ... > >>> > >>>After manually starting rabbit and re-running > >>>instack-install-undercloud, the process is able to successfully create > >>>the rabbitmq_user resources and completes successfully. > >> > >>Are you on RHEL 7.1 or CentOS 7? I'll try to reproduce locally and see if I can > >>get to the bottom of it. > >> > >>> > >>>-- > >>>Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} > >>>Cloud Engineering / OpenStack | http://blog.oddbit.com/ > >> > >> > >> > >>>_______________________________________________ > >>>Rdo-list mailing list > >>>Rdo-list at redhat.com > >>>https://www.redhat.com/mailman/listinfo/rdo-list > >>> > >>>To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >>-- > >>-- James Slagle > >>-- > >> > >>_______________________________________________ > >>Rdo-list mailing list > >>Rdo-list at redhat.com > >>https://www.redhat.com/mailman/listinfo/rdo-list > >> > >>To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > >_______________________________________________ > >Rdo-list mailing list > >Rdo-list at redhat.com > >https://www.redhat.com/mailman/listinfo/rdo-list > > > >To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- -- James Slagle -- From ak at cloudssky.com Wed Apr 22 22:21:03 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Thu, 23 Apr 2015 00:21:03 +0200 Subject: [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: Message-ID: Hi, I'm running CentOS Linux release 7.1.1503 (Core) VM on OpenStack and followed the steps and I'm getting: 10.0.0.16_prescript.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 10.0.0.16_prescript.pp Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match. Force execution using --nocheck, but the results are unpredictable. Thanks, Arash On Wed, Apr 22, 2015 at 6:29 PM, Boris Derzhavets wrote: > I made one more attempt of `packstack --allinone` install on CentOS > 7.1 KVM running on F22 Host. > Finally, when new "demo_net" created after install completed with > interface in "down" state, I've dropped "private" subnet from the same > tenant "demo" (the one created by installer) , what resulted switching > interface of "demo_net" to "Active" status and allowed to launch CirrOS VM > via Horizon completely functional. > > Then I reproduced same procedure in first time environment been created > on CentOS 7.1 KVM running on Ubuntu 15.04 Host and got same results . As > soon as I dropped "private" network created by installer for demo tenant , > interface for "demo_net" ( created manually as post installation step) > switched to "Active" status. 
> > Still have issue with openstack-nova-novncproxy.service :- > > [root at centos71 nova(keystone_admin)]# systemctl status > openstack-nova-novncproxy.service -l > openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server > Loaded: loaded > (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled) > Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; > 18min ago > Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web > /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE) > Main PID: 25663 (code=exited, status=1/FAILURE) > > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from > nova.cmd.novncproxy import main > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File > "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in > > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd > import baseproxy > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File > "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in > > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from > nova.console import websocketproxy > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File > "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line > 154, in > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: > websockify.ProxyRequestHandler): > Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: > AttributeError: 'module' object has no attribute 'ProxyRequestHandler' > Apr 22 18:41:51 centos71.localdomain systemd[1]: > openstack-nova-novncproxy.service: main process exited, code=exited, > status=1/FAILURE > Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit > openstack-nova-novncproxy.service entered failed state. > > Boris > > ------------------------------ > From: bderzhavets at hotmail.com > To: apevec at gmail.com; rdo-list at redhat.com > Date: Wed, 22 Apr 2015 07:02:32 -0400 > Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages > > Alan, > > # packstack --allinone > > completes successfully on CentOS 7.1 > > However, when attaching interface to private subnet to neutron router > (as demo or as admin ) port status is down . I tested it via Horizon and > via Neutron CLI result was the same. Instance (cirros) been launched > cannot access nova meta-data server and obtain instance-id > > Lease of 50.0.0.12 obtained, lease time 86400 > cirros-ds 'net' up at 7.14 > checking http://169.254.169.254/2009-04-04/instance-id > failed 1/20: up 7.47. request failed > failed 2/20: up 12.81. request failed > failed 3/20: up 15.82. request failed > . . . . . . . . . > failed 18/20: up 78.28. request failed > failed 19/20: up 81.27. request failed > failed 20/20: up 86.50. request failed > failed to read iid from metadata. tried 20 > no results found for mode=net. up 89.53. searched: nocloud configdrive ec2 > failed to get instance-id of datasource > > > Thanks. > Boris > > > Date: Wed, 22 Apr 2015 04:15:54 +0200 > > From: apevec at gmail.com > > To: rdo-list at redhat.com > > Subject: [Rdo-list] RDO Kilo RC snapshot - core packages > > > > Hi all, > > > > unofficial[*] Kilo RC builds are now available for testing. This > > snapshot completes packstack --allinone i.e. issue in provision_glance > > reported on IRC has been fixed. 
> > > > Quick installation HOWTO > > > > yum install > http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm > > # Following works out-of-the-box on CentOS7 > > # For RHEL see > http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F > > yum install epel-release > > cd /etc/yum.repos.d > > curl -O > https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo > > > > After above steps, regular Quickstart continues: > > yum install openstack-packstack > > packstack --allinone > > > > NB this snapshot has NOT been tested with rdo-management! If testing > > rdo-management, please follow their instructions. > > > > Cheers, > > Alan > > > > [*] Apr21 evening snapshot built from stable/kilo branches in > > Delorean Kilo instance, official RDO Kilo builds will come from CentOS > > CloudSIG CBS > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ Rdo-list mailing list > Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To > unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com 

From phaurep at gmail.com Thu Apr 23 06:41:14 2015 From: phaurep at gmail.com (pauline phaure) Date: Thu, 23 Apr 2015 08:41:14 +0200 Subject: [Rdo-list] no valid host was found Message-ID: 

hey everyone, I hope this time somebody can help me. I am still getting the same error ("No valid host was found") when I try to spawn a VM, even though I have already spawned two successful VMs (one small and one xlarge). I'm sure it's not a problem of resources: I have 16 cores on each of my servers, 12 GB of RAM and 1.7 TB of disk space available. But still my OpenStack is acting as if it lacks resources, because when I delete one of the existing VMs, nova responds correctly and I can then launch my instance. Please find attached my logs.
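A rough way to pin down why the scheduler reports "No valid host was found" is to ask which scheduler filter rejected the hosts. A sketch only, assuming packstack's default log locations; if the grep turns up nothing, enable debug in nova.conf first:

source ~/keystonerc_admin
nova hypervisor-stats                    # the capacity the scheduler believes it has
grep "returned 0 hosts" /var/log/nova/nova-scheduler.log | tail
grep -i "NoValidHost" /var/log/nova/nova-conductor.log | tail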
[root at localhost ~]# lscpu 
Architecture : x86_64 Mode(s) opératoire(s) des processeurs : 32-bit, 64-bit Boutisme : Little Endian Processeur(s) : 16 Liste de processeur(s) en ligne : 0-15 Thread(s) par cœur : 2 Cœur(s) par socket : 4 Socket(s) : 2 Nœud(s) NUMA : 2 Identifiant constructeur : GenuineIntel Famille de processeur : 6 Modèle : 26 Nom de modèle : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz Révision : 5 Vitesse du processeur en MHz : 2533.000 BogoMIPS : 5065.94 Virtualisation : VT-x Cache L1d : 32K Cache L1i : 32K Cache L2 : 256K Cache L3 : 8192K Nœud NUMA 0 de processeur(s) : 0,2,4,6,8,10,12,14 Nœud NUMA 1 de processeur(s) : 1,3,5,7,9,11,13,15 

[root at localhost ~]# lscpu 
Architecture : x86_64 Mode(s) opératoire(s) des processeurs : 32-bit, 64-bit Boutisme : Little Endian Processeur(s) : 16 Liste de processeur(s) en ligne : 0-15 Thread(s) par cœur : 2 Cœur(s) par socket : 4 Socket(s) : 2 Nœud(s) NUMA : 2 Identifiant constructeur : GenuineIntel Famille de processeur : 6 Modèle : 26 Nom de modèle : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz Révision : 5 Vitesse du processeur en MHz : 1600.000 BogoMIPS : 5065.93 Virtualisation : VT-x Cache L1d : 32K Cache L1i : 32K Cache L2 : 256K Cache L3 : 8192K Nœud NUMA 0 de processeur(s) : 0,2,4,6,8,10,12,14 Nœud NUMA 1 de processeur(s) : 1,3,5,7,9,11,13,15 

[root at localhost ~]# df -h 
Sys. de fichiers Taille Utilisé Dispo Uti% Monté sur /dev/mapper/centos-root 1,8T 2,8G 1,8T 1% / devtmpfs 5,8G 0 5,8G 0% /dev tmpfs 5,8G 0 5,8G 0% /dev/shm tmpfs 5,8G 8,9M 5,8G 1% /run tmpfs 5,8G 0 5,8G 0% /sys/fs/cgroup /dev/sda1 497M 126M 372M 26% /boot /dev/mapper/centos-home 50G 33M 50G 1% /home 

[root at localhost ~]# df -h 
Sys. de fichiers Taille Utilisé Dispo Uti% Monté sur /dev/mapper/centos-root 1,5T 3,3G 1,5T 1% / devtmpfs 5,8G 0 5,8G 0% /dev tmpfs 5,8G 4,0K 5,8G 1% /dev/shm tmpfs 5,8G 8,9M 5,8G 1% /run tmpfs 5,8G 0 5,8G 0% /sys/fs/cgroup /dev/loop0 1,9G 6,1M 1,7G 1% /srv/node/swiftloopback /dev/sda1 497M 171M 327M 35% /boot /dev/mapper/centos-home 50G 33M 50G 1% /home tmpfs 5,8G 8,9M 5,8G 1% /run/netns 

MemTotal: 12126860 kB MemFree: 11487076 kB MemAvailable: 11457400 kB Buffers: 764 kB Cached: 154492 kB SwapCached: 0 kB Active: 203440 kB Inactive: 111920 kB Active(anon): 169672 kB Inactive(anon): 8260 kB Active(file): 33768 kB Inactive(file): 103660 kB Unevictable: 84820 kB Mlocked: 84820 kB SwapTotal: 6160380 kB SwapFree: 6160380 kB Dirty: 8 kB Writeback: 0 kB AnonPages: 244968 kB Mapped: 33992 kB Shmem: 8940 kB Slab: 84552 kB SReclaimable: 28472 kB SUnreclaim: 56080 kB KernelStack: 5984 kB PageTables: 6404 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 12223808 kB Committed_AS: 658040 kB VmallocTotal: 34359738367 kB VmallocUsed: 109428 kB VmallocChunk: 34355304448 kB HardwareCorrupted: 0 kB AnonHugePages: 77824 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 110716 kB DirectMap2M: 12462080 kB 

cat /proc/meminfo 
MemTotal: 12126860 kB MemFree: 7088560 kB MemAvailable: 7148508 kB Buffers: 1156 kB Cached: 280384 kB SwapCached: 0 kB Active: 4320672 kB Inactive: 202656 kB Active(anon): 4251044 kB Inactive(anon): 8704 kB Active(file): 69628 kB Inactive(file): 193952 kB Unevictable: 84884 kB Mlocked: 84884 kB SwapTotal: 6160380 kB SwapFree: 6160380 kB Dirty: 40 kB Writeback: 0 kB AnonPages: 4326216 kB Mapped: 49868 kB Shmem: 9072 kB Slab: 135476 kB SReclaimable: 43228 kB SUnreclaim: 92248 kB KernelStack: 12976 kB PageTables: 118868 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 12223808 kB Committed_AS: 13439552 kB VmallocTotal: 34359738367 kB VmallocUsed: 110196 kB VmallocChunk: 34355295000 kB HardwareCorrupted: 0 kB AnonHugePages: 317440 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 114812 kB DirectMap2M: 12457984 kB 

[image: Images intégrées 1] 
-------------- next part -------------- Non-text attachments (application/octet-stream log files) were scrubbed: dhcp-agent.log, l3-agent.log, metadata-agent.log, neutron-ns-metadata-proxy-9b1ab6d1-7431-403e-a14d-ae1afa5e5fa9.log, neutron-ns-metadata-proxy-71144798-614b-48ae-ac2e-94481e0c65de.log, neutron-ns-metadata-proxy-ad2b1c5c-6cdf-4088-9e70-3eb0437b5c14.log, openvswitch-agent.log, ovs-cleanup.log, server.log, nova-api.log, nova-cert.log, nova-conductor.log, nova-consoleauth.log, nova-manage.log, nova-novncproxy.log, nova-scheduler.log,
nova-compute.log 

From apevec at gmail.com Thu Apr 23 08:09:51 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 23 Apr 2015 10:09:51 +0200 Subject: [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: Message-ID: 

> Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match. > Force execution using --nocheck, but the results are unpredictable. 

Not sure how that could happen - nmcli is part of the NetworkManager RPM. Does a yum update before running packstack help? 

Cheers, Alan 

From outbackdingo at gmail.com Thu Apr 23 08:59:55 2015 From: outbackdingo at gmail.com (Outback Dingo) Date: Thu, 23 Apr 2015 18:59:55 +1000 Subject: [Rdo-list] Objective - Feasible ? Message-ID: 

Hi, we are looking to deploy a new lab. For the feature set, we would like the following: RDO CentOS 7.1 Kilo with XenServer 6.5, RiakCS, and an OpenDaylight controller. Basically we prefer XenServer to KVM, and we wish to roll a RiakCS storage cluster, with OpenDaylight managing the network pieces. I've figured out how to deploy the pieces: XenServer, no brainer..... RiakCS in a 3-node cluster, ok.... check. OpenDaylight.... on a node...... check... Is it possible to use RDO to "wrap" them all together into a viable working solution? I'm not afraid of some manual intervention. Any insight into the pieces is welcome, but it might be out of RDO's capabilities. Thanks 

From ihrachys at redhat.com Thu Apr 23 09:03:53 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 23 Apr 2015 11:03:53 +0200 Subject: [Rdo-list] no valid host was found In-Reply-To: References: Message-ID: <5538B579.10004@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 

Please don't create new threads for the same issue. If there are no responses, it means there are no answers. 

On 04/23/2015 08:41 AM, pauline phaure wrote: > hey everyone, I hope this time somebody can help me. I am still > getting the same error ("No valid host was found") when I try to > spawn a VM, even though I have already spawned two successful VMs > (one small and one xlarge). I'm sure it's not a problem of > resources: I have 16 cores on each of my servers, 12 GB of RAM and > 1.7 TB of disk space available. But still my OpenStack is acting as > if it lacks resources, because when I delete one of the existing > VMs, nova responds correctly and I can then launch my instance. > Please find attached my logs. 
> [root at localhost ~]# lscpu 
> Architecture : x86_64 Mode(s) opératoire(s) des processeurs : 32-bit, 64-bit Boutisme : Little Endian Processeur(s) : 16 Liste de processeur(s) en ligne : 0-15 Thread(s) par cœur : 2 Cœur(s) par socket : 4 Socket(s) : 2 Nœud(s) NUMA : 2 Identifiant constructeur : GenuineIntel Famille de processeur : 6 Modèle : 26 Nom de modèle : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz Révision : 5 Vitesse du processeur en MHz : 2533.000 BogoMIPS : 5065.94 Virtualisation : VT-x Cache L1d : 32K Cache L1i : 32K Cache L2 : 256K Cache L3 : 8192K Nœud NUMA 0 de processeur(s) : 0,2,4,6,8,10,12,14 Nœud NUMA 1 de processeur(s) : 1,3,5,7,9,11,13,15 
> 
> [root at localhost ~]# lscpu 
> Architecture : x86_64 Mode(s) opératoire(s) des processeurs : 32-bit, 64-bit Boutisme : Little Endian Processeur(s) : 16 Liste de processeur(s) en ligne : 0-15 Thread(s) par cœur : 2 Cœur(s) par socket : 4 Socket(s) : 2 Nœud(s) NUMA : 2 Identifiant constructeur : GenuineIntel Famille de processeur : 6 Modèle : 26 Nom de modèle : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz Révision : 5 Vitesse du processeur en MHz : 1600.000 BogoMIPS : 5065.93 Virtualisation : VT-x Cache L1d : 32K Cache L1i : 32K Cache L2 : 256K Cache L3 : 8192K Nœud NUMA 0 de processeur(s) : 0,2,4,6,8,10,12,14 Nœud NUMA 1 de processeur(s) : 1,3,5,7,9,11,13,15 
> 
> [root at localhost ~]# df -h 
> Sys. de fichiers Taille Utilisé Dispo Uti% Monté sur /dev/mapper/centos-root 1,8T 2,8G 1,8T 1% / devtmpfs 5,8G 0 5,8G 0% /dev tmpfs 5,8G 0 5,8G 0% /dev/shm tmpfs 5,8G 8,9M 5,8G 1% /run tmpfs 5,8G 0 5,8G 0% /sys/fs/cgroup /dev/sda1 497M 126M 372M 26% /boot /dev/mapper/centos-home 50G 33M 50G 1% /home 
> 
> [root at localhost ~]# df -h 
> Sys. de fichiers Taille Utilisé Dispo Uti% Monté sur /dev/mapper/centos-root 1,5T 3,3G 1,5T 1% / devtmpfs 5,8G 0 5,8G 0% /dev tmpfs 5,8G 4,0K 5,8G 1% /dev/shm tmpfs 5,8G 8,9M 5,8G 1% /run tmpfs 5,8G 0 5,8G 0% /sys/fs/cgroup /dev/loop0 1,9G 6,1M 1,7G 1% /srv/node/swiftloopback /dev/sda1 497M 171M 327M 35% /boot /dev/mapper/centos-home 50G 33M 50G 1% /home tmpfs 5,8G 8,9M 5,8G 1% /run/netns 
> 
> MemTotal: 12126860 kB MemFree: 11487076 kB MemAvailable: 11457400 kB Buffers: 764 kB Cached: 154492 kB SwapCached: 0 kB Active: 203440 kB Inactive: 111920 kB Active(anon): 169672 kB Inactive(anon): 8260 kB Active(file): 33768 kB Inactive(file): 103660 kB Unevictable: 84820 kB Mlocked: 84820 kB SwapTotal: 6160380 kB SwapFree: 6160380 kB Dirty: 8 kB Writeback: 0 kB AnonPages: 244968 kB Mapped: 33992 kB Shmem: 8940 kB Slab: 84552 kB SReclaimable: 28472 kB SUnreclaim: 56080 kB KernelStack: 5984 kB PageTables: 6404 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 12223808 kB Committed_AS: 658040 kB VmallocTotal: 34359738367 kB VmallocUsed: 109428 kB VmallocChunk: 34355304448 kB HardwareCorrupted: 0 kB AnonHugePages: 77824 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 110716 kB DirectMap2M: 12462080 kB 
> 
> cat /proc/meminfo 
> MemTotal: 12126860 kB MemFree: 7088560 kB MemAvailable: 7148508 kB Buffers: 1156 kB Cached: 280384 kB SwapCached: 0 kB Active: 4320672 kB Inactive: 202656 kB Active(anon): 4251044 kB Inactive(anon): 8704 kB Active(file): 69628 kB Inactive(file): 193952 kB Unevictable: 84884 kB Mlocked: 84884 kB SwapTotal: 6160380 kB SwapFree: 6160380 kB Dirty: 40 kB Writeback: 0 kB AnonPages: 4326216 kB Mapped: 49868 kB Shmem: 9072 kB Slab: 135476 kB SReclaimable: 43228 kB SUnreclaim: 92248 kB KernelStack: 12976 kB PageTables: 118868 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 12223808 kB Committed_AS: 13439552 kB VmallocTotal: 34359738367 kB VmallocUsed: 110196 kB VmallocChunk: 34355295000 kB HardwareCorrupted: 0 kB AnonHugePages: 317440 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 114812 kB DirectMap2M: 12457984 kB 
> 
> Images intégrées 1 
> 
> _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list 
> 
> To unsubscribe: rdo-list-unsubscribe at redhat.com 

-----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBCAAGBQJVOLV5AAoJEC5aWaUY1u57tQsIALfwkLKNstRLMOJ2dlVEmYyJ 4ejktuDzuu2ED9EU4P4dLBF1f2l+rGygP4VpzxtYPuM1WIeFMARPy97vbevbCI5Q CLy2YWbQ86ve3FdxFiRDRKKLDvXFXLrh9TFfB+wP33h5lG4cyhtWEbNwQZPHcQsV r4o6LpbZsCeMRXuFKyh7+279FgWWTjUoYReebrcJYr3J4FmibdXMtIeYUXJYQeps zPgecHXDCZyO7dcksxENyyaaGcOY3r5YhY6wgh9Nida4FMh5FqUQf840iluoSRW4 AGRxwipUWUfSrej4W/jKN4sH5wdb5TiIBAD0DsgKty0vAtMTOapIe3QwXMvLTqo= =0OnW -----END PGP SIGNATURE----- 

From hbrock at redhat.com Thu Apr 23 09:06:42 2015 From: hbrock at redhat.com (Hugh O. Brock) Date: Thu, 23 Apr 2015 11:06:42 +0200 Subject: [Rdo-list] [RDO-Manager] Rewriting instack scripts into python? 
In-Reply-To: <55388BF7.7040201@redhat.com> References: <5537839A.4040301@redhat.com> <20150422125115.GE29586@teletran-1.redhat.com> <55379CE2.4010603@redhat.com> <5537D06E.7020808@redhat.com> <5537E669.9010407@redhat.com> <20150423020707.GH29586@teletran-1.redhat.com> <55388BF7.7040201@redhat.com> Message-ID: <20150423090641.GC24873@redhat.com> (moving this thread upstream where it belongs. It should probably in fact be on openstack-dev, I'll leave that step to someone else however.) For context: This is a debate over whether to have a common library for combining instack deployment commands, and hoe the unified CLI plays with that library. I have edited a bit for length, apologies if I have distorted anyone's meaning. On Thu, Apr 23, 2015 at 08:06:47AM +0200, Jaromir Coufal wrote: > > > On 23/04/15 04:07, James Slagle wrote: > >On Wed, Apr 22, 2015 at 01:20:25PM -0500, Jacob Liberman wrote: > >> > >> > >>On 4/22/15 11:46 AM, Ben Nemec wrote: > >>>>I am very concerned about this single call action which is doing all the > >>>>>magic in the background but gives user zero flexibility. It will not > >>>>>help neither educate users about the project. > >>>Our job is not to educate the users about all the implementation details > >>>of the deployment process. Our job is to write a deployment tool that > >>>simplifies that process to the point where an ordinary human can > >>>actually complete it. In theory you could implement the entire > >>>deployment process in documentation without any code whatsoever, and in > >>>fact upstream devtest tries to do exactly that: > >>>http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html > >>> > >>>And let me tell you - as someone who has tried to follow those docs - > >>>it's a horrible user experience. Fortunately we have a tool in > >>>instack-undercloud that rolls up those 100+ steps from devtest into > >>>maybe a dozen or so steps that combine the logically related bits into > >>>single commands. Moving back toward the devtest style is heading in the > >>>wrong direction IMNSHO. > >>> > >>>Does instack-undercloud/rdo-manager need to be more flexible? > >>>Absolutely. Does that mean we should throw out our existing code and > >>>convert it to documentation? I don't believe so. > >>> > >> > >>Definitely get input from our field folks on this topic. > >> [snip] > >> > >>A GUI and unified deployment scripts are nice to have but are not > >>replacements for complete CLIs + docs. > > > >I'm just replying to the thread, I'm not picking on your specific point :-). > >Your feedback is really good and something that we need to keep in mind. > > > >However, this discussion isn't actually about POC vs production vs flexibility. > >That's pretty much a strawman to the discussion given that the 'openstack > >flavor' command is already there, and will always be there. No one is saying > >you shouldn't use it, or we shouldn't document why and how we use flavors. Or > >for that matter that any amount of customization via flavors wouldn't > >eventually be possible. > > But this is our primary focus - production ready flow. Which we should test > as soon as we can and it is still not out there. So this discussion actually > is about it. And also about production ready people's user experience. 
> > > >There's also going to be a big difference between our end user documentation > >that just gets you any repeatable process (what we have now), and advanced > >workflows we might document for the field or consultants, or hide behind a "not > >officially supported without RH consultants" banner. > > End user documentation is not what we have now. What we have now is very > narrow restricted flow for people to get started with 1 controller and 1 > compute -- which is just POC. With zero knowledge of what is happening in > the background. > > > >Moreso, the point is that: > > > >The shell script we currently have, the proposed Python code, and the proposed > >documentation change all are 100% equivalent in terms of functionality and > >flexiblity. The proposed documentation change doesn't even offer anything other > >than replacing 1 command with 6, with no explanation of why or how you might > >customize them (that's why I say it's worse). > > First of all -- reason why it *has* to run 6 commands is how was written the > instack deployment script which requires 3 flavors with very specific name > for each role, despite the fact that one of the roles is not deployed. > > If user follows regular way (when we get out of the deployment scripts), he > would have to create *one* single flavor (2 commands) and in these commands > is specifically listed what features are being registered with the flavor > (ram, disk, vcpu). So it is not hidden from user. > > This is very important. If you even want to improve this flow, we should > suggest flavors to user and improve unified CLI. > > >Here's really my main point I guess: > > > >If it takes 12 (eventually) CLI commands to create flavors, do we expect people > >to *type* those into their shell? I hope not. > > > >Let's suppose we document it that way...in my experience the most likely thing > >someone would do (especially if they're repeating the process) would be to > >copy/paste those 12 commands out of the documentation, and write their own > >shell script/deployment tool to execute them, or just copy/paste them > >straight into their shell, while perhaps customizing them along the way. > > > >That would certainlly be a totally valid way to do it. > > If user has homogeneous environment, he can have just one simple flavor. If > he has heterogeneous or he wants to get more specific, he will take > additional actions (create more flavors or deal with edeploy matching). > > You are overstating - at the moment the problem with those 6 commands is > really how the instack scripts are written. So that I could get us out of > scripts and replace flavors script I had to create three flavors instead of > one. > > >So...why don't we just give them that shell script to start off with? > >Better yet, let's write something we'd actually like to support long term (in > >Python) and is just as flexible, perhaps taking json (or more likely YAML tbh) > >as input, with a nice Python program to log stuff and offer --help's along the > >way. Something that's actually supportable so we could ask: What input did you > >provide to this tool and what was the output? VS. How did you customize these > >12 commands, now go please run these other commands so we can figure out what > >happened. > > Yes, let's write something what we support longer term. Didn't we agree it > is unified CLI? Didn't we agree we should support and improve it? Feeding > another yaml file to create a single flavor? I disagree that this is better > user experience. 
> > > >I'm already seeing developers and external people saying that it's too much > >as-is and that they're going to write custom tooling to help out. Why wouldn't > >we just offer that already (especially when 90% of the code is already written > >upstream), and still be able to keep all the flexibility, as long we actually > >document how it works, and what commands you'd run to customize it? > > Are these developers real world users in production environments? Ask field > guys. The feedback you are getting is from people who are running these > commands 20 times a day. Then it is valid point that it is too many steps > for them. But this is *completely* different use case than production > environments. > > And we will not get out of the use case that people will write automation > scripts. But instack will not help them with that because they will write > very specific automation scripts on top of their production environments and > they will use our CLI as the basis. > > We should focus on production environments first and then simplify for POCs. > Not vice-versa. And not improving the scripts. > > -- Jarda I'm sorry to say this Jarda but I think I am with James and Ben here -- although this discussion is way too short on specifics, which I think is part of the problem. What I think Ben is proposing is a Python library that will encapsulate the business logic that we need to do a deployment. If that's the case, then I think that yes, that is exactly what we need. If I understand it correctly the first version of this library will have some things like flavor matching hard-coded, but we will add the ability to configure those things as the library matures. I think this makes a ton of sense. I don't believe that having this library in any way precludes use of the unified CLI, although it may change the way the CLI works -- certainly, we would want the CLI to be able to take advantage of the business logic in the library. Finally, I completely support your desire that the individual operations the manager is taking during deployment be well documented. However, I don't think that need for documentation also means that the recommended path for users should be to follow each individual step. I think the only genuine disagreement here is over the usability of the manager CLI -- the idea that "instack is never going to go away and the unified CLI will never become the primary interface" is a red herring and I think we should stop chasing it. Given that, if we're having a debate about usability, let's please get some actual examples on this thread and get some folks like Jacob to see for themselves what we're talking about. Thanks, --Hugh -- == Hugh Brock, hbrock at redhat.com == == Senior Engineering Manager, Cloud Engineering == == Tuskar: Elastic Scaling for OpenStack == == http://github.com/tuskar == "I know that you believe you understand what you think I said, but I?m not sure you realize that what you heard is not what I meant." --Robert McCloskey From bderzhavets at hotmail.com Thu Apr 23 09:46:18 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 23 Apr 2015 05:46:18 -0400 Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: , , , Message-ID: Arash, System was yum updated, I ran packstack and disabled NetworkManager only after completion. As of now same procedure reproduced successfully 3 times on different VMs ( KVM Hypervisors of F22,F21, Ubuntu 15.04) . Nested KVM enabled for each VM. 
[root at centos71 ~(keystone_admin)]# uname -a Linux centos71.localdomain 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 27 03:04:26 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux [root at centos71 ~(keystone_admin)]# openstack-status == Nova services == openstack-nova-api: active openstack-nova-cert: active openstack-nova-compute: active openstack-nova-network: inactive (disabled on boot) openstack-nova-scheduler: active openstack-nova-conductor: active == Glance services == openstack-glance-api: active openstack-glance-registry: active == Keystone service == openstack-keystone: inactive (disabled on boot) == Horizon service == openstack-dashboard: active == neutron services == neutron-server: active neutron-dhcp-agent: active neutron-l3-agent: active neutron-metadata-agent: active neutron-openvswitch-agent: active == Swift services == openstack-swift-proxy: active openstack-swift-account: active openstack-swift-container: active openstack-swift-object: active == Cinder services == openstack-cinder-api: active openstack-cinder-scheduler: active openstack-cinder-volume: active openstack-cinder-backup: active == Ceilometer services == openstack-ceilometer-api: active openstack-ceilometer-central: active openstack-ceilometer-compute: active openstack-ceilometer-collector: active openstack-ceilometer-alarm-notifier: active openstack-ceilometer-alarm-evaluator: active openstack-ceilometer-notification: active == Support services == mysqld: inactive (disabled on boot) libvirtd: active openvswitch: active dbus: active target: active rabbitmq-server: active memcached: active == Keystone users == /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient. 
'python-keystoneclient.', DeprecationWarning) +----------------------------------+------------+---------+----------------------+ | id | name | enabled | email | +----------------------------------+------------+---------+----------------------+ | 1fb446ec99184947bff342188028fddd | admin | True | root at localhost | | 3e76f14038724ef19e804ef99919ae75 | ceilometer | True | ceilometer at localhost | | d63e40e71da84778bdbc89cd0645109c | cinder | True | cinder at localhost | | 75b0b000562f491284043b5c74afbb1e | demo | True | | | bb3d35d9a23443bfb3791545a7aa03b4 | glance | True | glance at localhost | | 573eb12b92fd48e68e5635f3c79b3dec | neutron | True | neutron at localhost | | be6b2d41f55f4c3fab8e02a779de4a63 | nova | True | nova at localhost | | 53e9e3a493244c5e801ba92446c969bc | swift | True | swift at localhost | +----------------------------------+------------+---------+----------------------+ == Glance images == +--------------------------------------+--------------------+-------------+------------------+-----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+--------------------+-------------+------------------+-----------+--------+ | 0c73a315-8867-472c-bba6-e73a43b9b98d | cirros | qcow2 | bare | 13200896 | active | | 52df1d6d-9eb0-4c09-a9bb-ec5a07bd62eb | Fedora 21 image | qcow2 | bare | 158443520 | active | | 7f128f54-727c-45ad-8891-777aa39ff3e1 | Ubuntu 15.04 image | qcow2 | bare | 284361216 | active | +--------------------------------------+--------------------+-------------+------------------+-----------+--------+ == Nova managed services == +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-consoleauth | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - | | 2 | nova-scheduler | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - | | 3 | nova-conductor | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:58.000000 | - | | 4 | nova-compute | centos71.localdomain | nova | enabled | up | 2015-04-23T09:36:58.000000 | - | | 5 | nova-cert | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - | +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ == Nova networks == +--------------------------------------+----------+------+ | ID | Label | Cidr | +--------------------------------------+----------+------+ | d3bcf265-2429-4556-b799-16579ba367cf | public | - | | b25422bc-aa87-4007-bf5a-64dde97dd6f7 | demo_net | - | +--------------------------------------+----------+------+ == Nova instance flavors == +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True | | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | 
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ == Nova instances == +----+------+--------+------------+-------------+----------+ | ID | Name | Status | Task State | Power State | Networks | +----+------+--------+------------+-------------+----------+ +----+------+--------+------------+-------------+----------+ [root at centos71 ~(keystone_admin)]# rpm -qa | grep openstack openstack-nova-novncproxy-2015.1-dev19.el7.centos.noarch python-openstackclient-1.0.3-post3.el7.centos.noarch openstack-keystone-2015.1-dev14.el7.centos.noarch openstack-nova-console-2015.1-dev19.el7.centos.noarch openstack-nova-api-2015.1-dev19.el7.centos.noarch openstack-packstack-2015.1-dev1529.g0605728.el7.centos.noarch openstack-ceilometer-compute-2015.1-dev2.el7.centos.noarch openstack-swift-plugin-swift3-1.7-4.el7.centos.noarch openstack-selinux-0.6.25-1.el7.noarch openstack-cinder-2015.1-dev2.el7.centos.noarch openstack-neutron-openvswitch-2015.1-dev1.el7.centos.noarch openstack-swift-account-2.3.0rc1-post1.el7.centos.noarch openstack-ceilometer-alarm-2015.1-dev2.el7.centos.noarch openstack-utils-2014.2-1.el7.centos.noarch openstack-packstack-puppet-2015.1-dev1529.g0605728.el7.centos.noarch openstack-nova-common-2015.1-dev19.el7.centos.noarch openstack-nova-scheduler-2015.1-dev19.el7.centos.noarch openstack-ceilometer-common-2015.1-dev2.el7.centos.noarch openstack-nova-conductor-2015.1-dev19.el7.centos.noarch openstack-neutron-common-2015.1-dev1.el7.centos.noarch openstack-swift-object-2.3.0rc1-post1.el7.centos.noarch openstack-ceilometer-central-2015.1-dev2.el7.centos.noarch openstack-glance-2015.1-dev1.el7.centos.noarch openstack-nova-compute-2015.1-dev19.el7.centos.noarch openstack-neutron-ml2-2015.1-dev1.el7.centos.noarch python-django-openstack-auth-1.3.0-0.99.20150421.2158git.el7.centos.noarch openstack-swift-2.3.0rc1-post1.el7.centos.noarch openstack-ceilometer-api-2015.1-dev2.el7.centos.noarch openstack-swift-proxy-2.3.0rc1-post1.el7.centos.noarch openstack-swift-container-2.3.0rc1-post1.el7.centos.noarch openstack-ceilometer-collector-2015.1-dev2.el7.centos.noarch openstack-nova-cert-2015.1-dev19.el7.centos.noarch openstack-ceilometer-notification-2015.1-dev2.el7.centos.noarch openstack-puppet-modules-2015.1-dev.2d3528a51091931caef06a5a8d1cfdaaa79d25ec_75763dd0.el7.centos.noarch openstack-neutron-2015.1-dev1.el7.centos.noarch openstack-dashboard-2015.1-dev2.el7.centos.noarch Boris. Date: Thu, 23 Apr 2015 00:21:03 +0200 Subject: Re: [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages From: ak at cloudssky.com To: bderzhavets at hotmail.com CC: apevec at gmail.com; rdo-list at redhat.com Hi, I'm running a CentOS Linux release 7.1.1503 (Core) VM on OpenStack and followed the steps, and I'm getting: 10.0.0.16_prescript.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 10.0.0.16_prescript.pp Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match. Force execution using --nocheck, but the results are unpredictable. Thanks, Arash On Wed, Apr 22, 2015 at 6:29 PM, Boris Derzhavets wrote: I made one more attempt at a `packstack --allinone` install on CentOS 7.1 KVM running on an F22 host. Finally, when the new "demo_net" created after the install completed had its interface in "down" state, I dropped the "private" subnet from the same tenant "demo" (the one created by the installer), which switched the interface of "demo_net" to "Active" status and allowed launching a fully functional CirrOS VM via Horizon.
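Roughly, in CLI terms, what I did was the following (names as packstack created them on my setup -- router1, private_subnet -- yours may differ, and the CIDR is just an example):

neutron router-interface-delete router1 private_subnet
neutron subnet-delete private_subnet
neutron net-create demo_net
neutron subnet-create demo_net 50.0.0.0/24 --name demo_subnet
neutron router-interface-add router1 demo_subnet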
Then I reproduced the same procedure in a fresh environment created on CentOS 7.1 KVM running on an Ubuntu 15.04 host and got the same results. As soon as I dropped the "private" network created by the installer for the demo tenant, the interface for "demo_net" (created manually as a post-installation step) switched to "Active" status. Still have an issue with openstack-nova-novncproxy.service :- [root at centos71 nova(keystone_admin)]# systemctl status openstack-nova-novncproxy.service -l openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled) Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; 18min ago Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE) Main PID: 25663 (code=exited, status=1/FAILURE) Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd.novncproxy import main Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd import baseproxy Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.console import websocketproxy Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: websockify.ProxyRequestHandler): Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler' Apr 22 18:41:51 centos71.localdomain systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit openstack-nova-novncproxy.service entered failed state. Boris From: bderzhavets at hotmail.com To: apevec at gmail.com; rdo-list at redhat.com Date: Wed, 22 Apr 2015 07:02:32 -0400 Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages Alan, # packstack --allinone completes successfully on CentOS 7.1 However, when attaching an interface for the private subnet to the neutron router (as demo or as admin), the port status is down. I tested it via Horizon and via the Neutron CLI; the result was the same. A launched instance (cirros) cannot access the nova metadata server to obtain its instance-id: Lease of 50.0.0.12 obtained, lease time 86400 cirros-ds 'net' up at 7.14 checking http://169.254.169.254/2009-04-04/instance-id failed 1/20: up 7.47. request failed failed 2/20: up 12.81. request failed failed 3/20: up 15.82. request failed . . . . . . . . . failed 18/20: up 78.28. request failed failed 19/20: up 81.27. request failed failed 20/20: up 86.50. request failed failed to read iid from metadata. tried 20 no results found for mode=net. up 89.53. searched: nocloud configdrive ec2 failed to get instance-id of datasource Thanks. Boris > Date: Wed, 22 Apr 2015 04:15:54 +0200 > From: apevec at gmail.com > To: rdo-list at redhat.com > Subject: [Rdo-list] RDO Kilo RC snapshot - core packages > > Hi all, > > unofficial[*] Kilo RC builds are now available for testing. This > snapshot completes packstack --allinone i.e. issue in provision_glance > reported on IRC has been fixed.
> > Quick installation HOWTO > > yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm > # Following works out-of-the-box on CentOS7 > # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F > yum install epel-release > cd /etc/yum.repos.d > curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo > > After above steps, regular Quickstart continues: > yum install openstack-packstack > packstack --allinone > > NB this snapshot has NOT been tested with rdo-management! If testing > rdo-management, please follow their instructions. > > > Cheers, > Alan > > [*] Apr21 evening snapshot built from stable/kilo branches in > Delorean Kilo instance, official RDO Kilo builds will come from CentOS > CloudSIG CBS > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Thu Apr 23 11:01:47 2015 From: jslagle at redhat.com (James Slagle) Date: Thu, 23 Apr 2015 07:01:47 -0400 Subject: [Rdo-list] [rhos-dev] [RDO-Manager] Rewriting instack scripts into python? Why? In-Reply-To: <55388BF7.7040201 at redhat.com> References: <5537839A.4040301 at redhat.com> <20150422125115.GE29586 at teletran-1.redhat.com> <55379CE2.4010603 at redhat.com> <5537D06E.7020808 at redhat.com> <5537E669.9010407 at redhat.com> <20150423020707.GH29586 at teletran-1.redhat.com> <55388BF7.7040201 at redhat.com> Message-ID: <20150423110147.GI29586 at teletran-1.redhat.com> On Thu, Apr 23, 2015 at 08:06:47AM +0200, Jaromir Coufal wrote: > > > On 23/04/15 04:07, James Slagle wrote: > >On Wed, Apr 22, 2015 at 01:20:25PM -0500, Jacob Liberman wrote: > >> > >> > >>On 4/22/15 11:46 AM, Ben Nemec wrote: > >>>>I am very concerned about this single-call action which does all the > >>>>>magic in the background but gives the user zero flexibility. It will > >>>>>neither help nor educate users about the project. > >>>Our job is not to educate the users about all the implementation details > >>>of the deployment process. Our job is to write a deployment tool that > >>>simplifies that process to the point where an ordinary human can > >>>actually complete it. In theory you could implement the entire > >>>deployment process in documentation without any code whatsoever, and in > >>>fact upstream devtest tries to do exactly that: > >>>http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html > >>> > >>>And let me tell you - as someone who has tried to follow those docs - > >>>it's a horrible user experience. Fortunately we have a tool in > >>>instack-undercloud that rolls up those 100+ steps from devtest into > >>>maybe a dozen or so steps that combine the logically related bits into > >>>single commands. Moving back toward the devtest style is heading in the > >>>wrong direction IMNSHO. > >>> > >>>Does instack-undercloud/rdo-manager need to be more flexible? > >>>Absolutely.
Does that mean we should throw out our existing code and > >>>convert it to documentation? I don't believe so. > >> > >>Definitely get input from our field folks on this topic. > >> > >>You are basically recapitulating the entire argument against packstack (not > >>flexible enough, obscured details) that brought about the foreman installer (too > >>many parameters and steps) and then staypuft (difficult to automate, etc.) > >> > >>Up to this point our field folks have mostly ignored our installers for all > >>but the simplest POCs. > >> > >>My personal belief is that a well documented and complete CLI is an absolute > >>necessity to tackle and automate real world deployments. A big reason > >>Foreman failed IMO is because it did not have hammer out of the box. I > >>submitted the original RFE to include hammer but it didn't make it until > >>foreman was dead. > >> > >>A GUI and unified deployment scripts are nice to have but are not > >>replacements for complete CLIs + docs. > > > >I'm just replying to the thread, I'm not picking on your specific point :-). > >Your feedback is really good and something that we need to keep in mind. > > > >However, this discussion isn't actually about POC vs production vs flexibility. > >That's pretty much a strawman in this discussion, given that the 'openstack > >flavor' command is already there, and will always be there. No one is saying > >you shouldn't use it, or we shouldn't document why and how we use flavors. Or > >for that matter that any amount of customization via flavors wouldn't > >eventually be possible. > > But this is our primary focus -- a production-ready flow, which we should test > as soon as we can, and it is still not out there. So this discussion actually > is about that, and also about the user experience of production users. > > > >There's also going to be a big difference between our end user documentation > >that just gets you any repeatable process (what we have now), and advanced > >workflows we might document for the field or consultants, or hide behind a "not > >officially supported without RH consultants" banner. > > End user documentation is not what we have now. What we have now is a very > narrow, restricted flow for people to get started with 1 controller and 1 > compute -- which is just a POC, with zero knowledge of what is happening in > the background. > > > >Moreover, the point is that: > > > >The shell script we currently have, the proposed Python code, and the proposed > >documentation change all are 100% equivalent in terms of functionality and > >flexibility. The proposed documentation change doesn't even offer anything other > >than replacing 1 command with 6, with no explanation of why or how you might > >customize them (that's why I say it's worse). > > First of all -- the reason it *has* to run 6 commands is how the instack > deployment script was written: it requires 3 flavors with a very specific name > for each role, despite the fact that one of the roles is not deployed. > > If the user follows the regular way (once we get out of the deployment scripts), he > would have to create *one* single flavor (2 commands), and those commands > explicitly list which features are being registered with the flavor > (ram, disk, vcpu). So it is not hidden from the user. > > This is very important. If we want to improve this flow, we should > suggest flavors to the user and improve the unified CLI.
> > >Here's really my main point I guess: > > > >If it takes 12 (eventually) CLI commands to create flavors, do we expect people > >to *type* those into their shell? I hope not. > > > >Let's suppose we document it that way...in my experience the most likely thing > >someone would do (especially if they're repeating the process) would be to > >copy/paste those 12 commands out of the documentation, and write their own > >shell script/deployment tool to execute them, or just copy/paste them > >straight into their shell, while perhaps customizing them along the way. > > > >That would certainly be a totally valid way to do it. > > If the user has a homogeneous environment, he can have just one simple flavor. If > it is heterogeneous, or he wants to get more specific, he will take > additional actions (create more flavors or deal with edeploy matching). > > You are overstating -- at the moment the problem with those 6 commands is > really how the instack scripts are written. In order to get us out of the > scripts and replace the flavors script, I had to create three flavors instead of > one. > > >So...why don't we just give them that shell script to start off with? > >Better yet, let's write something we'd actually like to support long term (in > >Python) and is just as flexible, perhaps taking json (or more likely YAML tbh) > >as input, with a nice Python program to log stuff and offer --help's along the > >way. Something that's actually supportable so we could ask: What input did you > >provide to this tool and what was the output? VS. How did you customize these > >12 commands, now go please run these other commands so we can figure out what > >happened. > > Yes, let's write something that we support longer term. Didn't we agree it > is the unified CLI? Didn't we agree we should support and improve it? Feeding > another yaml file to create a single flavor? I disagree that this is better > user experience. > > > >I'm already seeing developers and external people saying that it's too much > >as-is and that they're going to write custom tooling to help out. Why wouldn't > >we just offer that already (especially when 90% of the code is already written > >upstream), and still be able to keep all the flexibility, as long as we actually > >document how it works, and what commands you'd run to customize it? > > Are these developers real-world users in production environments? Ask the field > guys. The feedback you are getting is from people who are running these > commands 20 times a day. Then it is a valid point that it is too many steps > for them. But this is a *completely* different use case from production > environments. > > And we will not get away from the fact that people will write automation > scripts. But instack will not help them with that, because they will write > very specific automation scripts on top of their production environments and > they will use our CLI as the basis. > > We should focus on production environments first and then simplify for POCs. > Not vice-versa. And not by improving the scripts. This is why this is not a POC vs. production discussion: - What you've proposed is in no way any more flexible or production-ready than what we have today. You've proposed documentation changes that offer no explanation of how anything works, why you would change it, or what effect that might have, and they use only hardcoded values. That is functionally equivalent to what we already have.
- Rewriting our existing shell script into a more supportable and flexible tool is what we're getting with the proposed Python change. If you go look at the posted code review and at os-cloud-config, you'll in fact see it's obviously not just for one flavor at a time. Honestly, we're going to do this regardless; it will be done as upstream improvements (because the code already exists there). If the choice is made not to consume such a tool in rdo-manager, that would certainly be a fair choice. Neither of the above points moves us any closer to or further from POC vs. production. If you'd like to see us move towards what you consider a production environment, your documentation changes should include the what/why/how and show the flexibility. I agree this is the right direction to go, but I disagree that anything currently proposed has a tangible effect on that. -- -- James Slagle -- From jslagle at redhat.com Thu Apr 23 11:06:24 2015 From: jslagle at redhat.com (James Slagle) Date: Thu, 23 Apr 2015 07:06:24 -0400 Subject: [Rdo-list] [rhos-dev] [RDO-Manager] Rewriting instack scripts into python? Why? In-Reply-To: <55389F85.20500 at redhat.com> References: <5537839A.4040301 at redhat.com> <20150422125115.GE29586 at teletran-1.redhat.com> <55379CE2.4010603 at redhat.com> <5537D06E.7020808 at redhat.com> <5537E669.9010407 at redhat.com> <20150423020707.GH29586 at teletran-1.redhat.com> <55388BF7.7040201 at redhat.com> <55389F85.20500 at redhat.com> Message-ID: <20150423110624.GJ29586 at teletran-1.redhat.com> On Thu, Apr 23, 2015 at 09:30:13AM +0200, Dmitry Tantsur wrote: > On 04/23/2015 08:06 AM, Jaromir Coufal wrote: > > I don't see it a plus tbh, there's no point in showing a person details > he/she doesn't care about.
>> >> One particular problem is that flavor should match Ironic node data (= >> something introspected automagically), not real data. And yes, they differ. >> Due to Ironic partitioning limitations we have to -1 actual disk size. >> >> That's why I proposed an RFE to create flavors automatically: >> https://bugzilla.redhat.com/show_bug.cgi?id=1214343 > > os-cloud-config already has the functionality to create flavors based on nodes > definition. It takes the node definitions from a json file though instead of > querying Ironic. However, there is also code already that uses that same json > file to register the nodes in Ironic, so it would be a simple enhancement I'd > think to make it query Ironic for the nodes definition. > > Can you take a look and see if this is what you had in mind? I'd propose > driving this feature request in os-cloud-config directly. Left a comment on the bug. I'd like CLI and/or product folks to make the final decision on where this belongs. > > -- > -- James Slagle > -- > From pgsousa at gmail.com Thu Apr 23 16:10:50 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 23 Apr 2015 17:10:50 +0100 Subject: [Rdo-list] rdo-manager error running instack-deploy-overcloud --tuskar Message-ID: Hi, I'm testing a virtual deployment. After registering and discovering my overcloud nodes, I'm getting this error when deploying them: $instack-deploy-overcloud --tuskar The following templates will be written: tuskar_templates/puppet/manifests/overcloud_volume.pp tuskar_templates/hieradata/object.yaml tuskar_templates/puppet/manifests/overcloud_controller.pp tuskar_templates/puppet/hieradata/common.yaml tuskar_templates/provider-Swift-Storage-1.yaml tuskar_templates/provider-Cinder-Storage-1.yaml tuskar_templates/provider-Compute-1.yaml tuskar_templates/puppet/bootstrap-config.yaml tuskar_templates/net-config-bridge.yaml tuskar_templates/provider-Ceph-Storage-1.yaml tuskar_templates/puppet/controller-post-puppet.yaml tuskar_templates/puppet/cinder-storage-puppet.yaml tuskar_templates/puppet/manifests/overcloud_cephstorage.pp tuskar_templates/puppet/hieradata/object.yaml tuskar_templates/puppet/controller-puppet.yaml tuskar_templates/puppet/cinder-storage-post.yaml tuskar_templates/puppet/swift-storage-post.yaml tuskar_templates/provider-Controller-1.yaml tuskar_templates/puppet/manifests/overcloud_object.pp tuskar_templates/hieradata/controller.yaml tuskar_templates/hieradata/volume.yaml tuskar_templates/puppet/compute-post-puppet.yaml tuskar_templates/puppet/swift-storage-puppet.yaml tuskar_templates/puppet/swift-devices-and-proxy-config.yaml tuskar_templates/puppet/compute-puppet.yaml tuskar_templates/puppet/hieradata/volume.yaml tuskar_templates/puppet/ceph-storage-post-puppet.yaml tuskar_templates/puppet/ceph-storage-puppet.yaml tuskar_templates/puppet/hieradata/ceph.yaml tuskar_templates/puppet/hieradata/controller.yaml tuskar_templates/plan.yaml tuskar_templates/environment.yaml tuskar_templates/puppet/all-nodes-config.yaml tuskar_templates/hieradata/compute.yaml tuskar_templates/puppet/hieradata/compute.yaml tuskar_templates/hieradata/ceph.yaml tuskar_templates/puppet/manifests/overcloud_compute.pp tuskar_templates/hieradata/common.yaml tuskar_templates/puppet/manifests/ringbuilder.pp tuskar_templates/firstboot/userdata_default.yaml tuskar_templates/net-config-noop.yaml tuskar_templates/puppet/ceph-cluster-config.yaml tuskar_templates/extraconfig/post_deploy/default.yaml + OVERCLOUD_YAML_PATH=tuskar_templates/plan.yaml +
ENVIROMENT_YAML_PATH=tuskar_templates/environment.yaml + heat stack-create -t 240 -f tuskar_templates/plan.yaml -e tuskar_templates/environment.yaml overcloud ERROR: Timed out waiting for a reply to message ID 16477b6b6ee04c7fa9f8d7cc45461d8f Any hint? Thanks, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgsousa at gmail.com Thu Apr 23 16:25:10 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Thu, 23 Apr 2015 17:25:10 +0100 Subject: [Rdo-list] rdo-manager error running instack-deploy-overcloud --tuskar In-Reply-To: References: Message-ID: Seems like some Heat DB issue: 2015-04-23 16:23:50.859 27616 TRACE heat-engine ProgrammingError: (ProgrammingError) (1146, "Table 'heat.stack' doesn't exist") 'SELECT stack.status_reason AS stack_status_reason, stack.created_at AS stack_created_at, stack.deleted_at AS stack_deleted_at, stack.action AS stack_action, stack.status AS stack_status, stack.id AS stack_id, stack.name AS stack_name, stack.raw_template_id AS stack_raw_template_id, stack.prev_raw_template_id AS stack_prev_raw_template_id, stack.username AS stack_username, stack.tenant AS stack_tenant, stack.user_creds_id AS stack_user_creds_id, stack.owner_id AS stack_owner_id, stack.timeout AS stack_timeout, stack.disable_rollback AS stack_disable_rollback, stack.stack_user_project_id AS stack_stack_user_project_id, stack.backup AS stack_backup, stack.nested_depth AS stack_nested_depth, stack.convergence AS stack_convergence, stack.current_traversal AS stack_current_traversal, stack.current_deps AS stack_current_deps, stack.updated_at AS stack_updated_at \nFROM stack \nWHERE stack.deleted_at IS NULL AND stack.owner_id IS NULL ORDER BY stack.created_at DESC, stack.id DESC' () 2015-04-23 16:23:50.859 27616 TRACE heat-engine On Thu, Apr 23, 2015 at 5:10 PM, Pedro Sousa wrote: > Hi, > > I'm testing a virtual deployment. After registering and discovering my > overcloud nodes, I'm getting this error when deploying them: > > $instack-deploy-overcloud --tuskar > > The following templates will be written: > tuskar_templates/puppet/manifests/overcloud_volume.pp > tuskar_templates/hieradata/object.yaml > tuskar_templates/puppet/manifests/overcloud_controller.pp > tuskar_templates/puppet/hieradata/common.yaml > tuskar_templates/provider-Swift-Storage-1.yaml > tuskar_templates/provider-Cinder-Storage-1.yaml > tuskar_templates/provider-Compute-1.yaml > tuskar_templates/puppet/bootstrap-config.yaml > tuskar_templates/net-config-bridge.yaml > tuskar_templates/provider-Ceph-Storage-1.yaml > tuskar_templates/puppet/controller-post-puppet.yaml > tuskar_templates/puppet/cinder-storage-puppet.yaml > tuskar_templates/puppet/manifests/overcloud_cephstorage.pp > tuskar_templates/puppet/hieradata/object.yaml > tuskar_templates/puppet/controller-puppet.yaml > tuskar_templates/puppet/cinder-storage-post.yaml > tuskar_templates/puppet/swift-storage-post.yaml > tuskar_templates/provider-Controller-1.yaml > tuskar_templates/puppet/manifests/overcloud_object.pp > tuskar_templates/hieradata/controller.yaml > tuskar_templates/hieradata/volume.yaml > tuskar_templates/puppet/compute-post-puppet.yaml > tuskar_templates/puppet/swift-storage-puppet.yaml > tuskar_templates/puppet/swift-devices-and-proxy-config.yaml > tuskar_templates/puppet/compute-puppet.yaml > tuskar_templates/puppet/hieradata/volume.yaml > tuskar_templates/puppet/ceph-storage-post-puppet.yaml > tuskar_templates/puppet/ceph-storage-puppet.yaml > tuskar_templates/puppet/hieradata/ceph.yaml > tuskar_templates/puppet/hieradata/controller.yaml > tuskar_templates/plan.yaml > tuskar_templates/environment.yaml > tuskar_templates/puppet/all-nodes-config.yaml > tuskar_templates/hieradata/compute.yaml > tuskar_templates/puppet/hieradata/compute.yaml > tuskar_templates/hieradata/ceph.yaml > tuskar_templates/puppet/manifests/overcloud_compute.pp > tuskar_templates/hieradata/common.yaml > tuskar_templates/puppet/manifests/ringbuilder.pp > tuskar_templates/firstboot/userdata_default.yaml > tuskar_templates/net-config-noop.yaml > tuskar_templates/puppet/ceph-cluster-config.yaml > tuskar_templates/extraconfig/post_deploy/default.yaml > + OVERCLOUD_YAML_PATH=tuskar_templates/plan.yaml > + ENVIROMENT_YAML_PATH=tuskar_templates/environment.yaml > + heat stack-create -t 240 -f tuskar_templates/plan.yaml -e > tuskar_templates/environment.yaml overcloud > ERROR: Timed out waiting for a reply to message ID > 16477b6b6ee04c7fa9f8d7cc45461d8f > > Any hint? > > Thanks, > Pedro Sousa > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ak at cloudssky.com Thu Apr 23 16:36:23 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Thu, 23 Apr 2015 18:36:23 +0200 Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: Message-ID: Alan, Boris, Thanks! Yes, system was yum updated, now I did the following: yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm yum install epel-release cd /etc/yum.repos.d/ curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo yum install openstack-packstack setenforce 0 packstack --allinone and the result was: 10.0.0.16_postscript.pp: [ DONE ] Applying Puppet manifests [ DONE ] Finalizing [ DONE ] **** Installation completed successfully ****** But I can't access the dashboard, over the external IP, I'm getting: Not Found The requested URL /dashboard was not found on this server. and over the internal IP with: http://10.0.0.16/dashboard I'm getting: Internal Server Error The server encountered an internal error ... Also disabled the NetworkManager and rebooted, didn't help to get horizon working. Thx again, Arash On Thu, Apr 23, 2015 at 11:46 AM, Boris Derzhavets wrote: > Arash, > > System was yum updated, I ran packstack and disabled NetworkManager only > after completion. > As of now the same procedure has been reproduced successfully 3 times on different VMs > (KVM Hypervisors of F22, F21, Ubuntu 15.04). Nested KVM enabled for each VM.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Thu Apr 23 17:23:34 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 23 Apr 2015 19:23:34 +0200 Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: Message-ID: > But I can't access the dashboard, over the external IP, I'm getting: Which external IP do you mean? Please explain your network setup. > Internal Server Error Please try service httpd restart then check for clues in /var/log/httpd/horizon_error.log. openstack-dashboard-2015.1-dev2.el7 worked for me but in a small VM with 2G RAM and 1G swap I've seen oom-killer in action and httpd was the first victim, ps_mem shows httpd using >300MB after short browsing openstack-dashboard pages...
Cheers, Alan From bderzhavets at hotmail.com Thu Apr 23 17:51:02 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 23 Apr 2015 13:51:02 -0400 Subject: [Rdo-list] RE(4): RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: , , , , , Message-ID: Arash, I was able to reach the dashboard by creating the public network on the same subnet as management (testing on libvirt's non-default subnet 192.169.142.0/24), via http://192.169.142.57/dashboard :- # cat ifcfg-br-ex DEVICE="br-ex" BOOTPROTO="static" IPADDR="192.169.142.57" NETMASK="255.255.255.0" DNS1="83.221.202.254" BROADCAST="192.169.142.255" GATEWAY="192.169.142.1" NM_CONTROLLED="no" DEFROUTE="yes" IPV4_FAILURE_FATAL="yes" IPV6INIT=no ONBOOT="yes" TYPE="OVSIntPort" OVS_BRIDGE=br-ex DEVICETYPE="ovs" # cat ifcfg-eth0 DEVICE="eth0" ONBOOT="yes" TYPE="OVSPort" DEVICETYPE="ovs" OVS_BRIDGE=br-ex NM_CONTROLLED=no IPV6INIT=no # service network restart This effectively makes eth0 an OVS port of the OVS bridge br-ex (the VM's original IP 192.169.142.57 moves to br-ex). Boris. -------------------------------------------------------------------------------------------------------------------------------------- Date: Thu, 23 Apr 2015 18:36:23 +0200 Subject: Re: RE(3): [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages From: ak at cloudssky.com To: bderzhavets at hotmail.com CC: apevec at gmail.com; rdo-list at redhat.com Alan, Boris, Thanks! Yes, system was yum updated, now I did the following: yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm yum install epel-release cd /etc/yum.repos.d/ curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo yum install openstack-packstack setenforce 0 packstack --allinone and the result was: 10.0.0.16_postscript.pp: [ DONE ] Applying Puppet manifests [ DONE ] Finalizing [ DONE ] **** Installation completed successfully ****** But I can't access the dashboard, over the external IP, I'm getting: Not Found The requested URL /dashboard was not found on this server. and over the internal IP with http://10.0.0.16/dashboard I'm getting: Internal Server Error The server encountered an internal error ... Also disabled the NetworkManager and rebooted, didn't help to get horizon working. Thx again, Arash On Thu, Apr 23, 2015 at 11:46 AM, Boris Derzhavets wrote: Arash, System was yum updated, I ran packstack and disabled NetworkManager only after completion. As of now the same procedure has been reproduced successfully 3 times on different VMs (KVM Hypervisors of F22, F21, Ubuntu 15.04). Nested KVM enabled for each VM.
[root at centos71 ~(keystone_admin)]# uname -a Linux centos71.localdomain 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 27 03:04:26 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux [root at centos71 ~(keystone_admin)]# openstack-status == Nova services == openstack-nova-api: active openstack-nova-cert: active openstack-nova-compute: active openstack-nova-network: inactive (disabled on boot) openstack-nova-scheduler: active openstack-nova-conductor: active == Glance services == openstack-glance-api: active openstack-glance-registry: active == Keystone service == openstack-keystone: inactive (disabled on boot) == Horizon service == openstack-dashboard: active == neutron services == neutron-server: active neutron-dhcp-agent: active neutron-l3-agent: active neutron-metadata-agent: active neutron-openvswitch-agent: active == Swift services == openstack-swift-proxy: active openstack-swift-account: active openstack-swift-container: active openstack-swift-object: active == Cinder services == openstack-cinder-api: active openstack-cinder-scheduler: active openstack-cinder-volume: active openstack-cinder-backup: active == Ceilometer services == openstack-ceilometer-api: active openstack-ceilometer-central: active openstack-ceilometer-compute: active openstack-ceilometer-collector: active openstack-ceilometer-alarm-notifier: active openstack-ceilometer-alarm-evaluator: active openstack-ceilometer-notification: active == Support services == mysqld: inactive (disabled on boot) libvirtd: active openvswitch: active dbus: active target: active rabbitmq-server: active memcached: active == Keystone users == /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient. 
'python-keystoneclient.', DeprecationWarning) +----------------------------------+------------+---------+----------------------+ | id | name | enabled | email | +----------------------------------+------------+---------+----------------------+ | 1fb446ec99184947bff342188028fddd | admin | True | root at localhost | | 3e76f14038724ef19e804ef99919ae75 | ceilometer | True | ceilometer at localhost | | d63e40e71da84778bdbc89cd0645109c | cinder | True | cinder at localhost | | 75b0b000562f491284043b5c74afbb1e | demo | True | | | bb3d35d9a23443bfb3791545a7aa03b4 | glance | True | glance at localhost | | 573eb12b92fd48e68e5635f3c79b3dec | neutron | True | neutron at localhost | | be6b2d41f55f4c3fab8e02a779de4a63 | nova | True | nova at localhost | | 53e9e3a493244c5e801ba92446c969bc | swift | True | swift at localhost | +----------------------------------+------------+---------+----------------------+ == Glance images == +--------------------------------------+--------------------+-------------+------------------+-----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+--------------------+-------------+------------------+-----------+--------+ | 0c73a315-8867-472c-bba6-e73a43b9b98d | cirros | qcow2 | bare | 13200896 | active | | 52df1d6d-9eb0-4c09-a9bb-ec5a07bd62eb | Fedora 21 image | qcow2 | bare | 158443520 | active | | 7f128f54-727c-45ad-8891-777aa39ff3e1 | Ubuntu 15.04 image | qcow2 | bare | 284361216 | active | +--------------------------------------+--------------------+-------------+------------------+-----------+--------+ == Nova managed services == +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-consoleauth | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - | | 2 | nova-scheduler | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - | | 3 | nova-conductor | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:58.000000 | - | | 4 | nova-compute | centos71.localdomain | nova | enabled | up | 2015-04-23T09:36:58.000000 | - | | 5 | nova-cert | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - | +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ == Nova networks == +--------------------------------------+----------+------+ | ID | Label | Cidr | +--------------------------------------+----------+------+ | d3bcf265-2429-4556-b799-16579ba367cf | public | - | | b25422bc-aa87-4007-bf5a-64dde97dd6f7 | demo_net | - | +--------------------------------------+----------+------+ == Nova instance flavors == +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True | | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | 
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[root at centos71 ~(keystone_admin)]# rpm -qa | grep openstack
openstack-nova-novncproxy-2015.1-dev19.el7.centos.noarch
python-openstackclient-1.0.3-post3.el7.centos.noarch
openstack-keystone-2015.1-dev14.el7.centos.noarch
openstack-nova-console-2015.1-dev19.el7.centos.noarch
openstack-nova-api-2015.1-dev19.el7.centos.noarch
openstack-packstack-2015.1-dev1529.g0605728.el7.centos.noarch
openstack-ceilometer-compute-2015.1-dev2.el7.centos.noarch
openstack-swift-plugin-swift3-1.7-4.el7.centos.noarch
openstack-selinux-0.6.25-1.el7.noarch
openstack-cinder-2015.1-dev2.el7.centos.noarch
openstack-neutron-openvswitch-2015.1-dev1.el7.centos.noarch
openstack-swift-account-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-alarm-2015.1-dev2.el7.centos.noarch
openstack-utils-2014.2-1.el7.centos.noarch
openstack-packstack-puppet-2015.1-dev1529.g0605728.el7.centos.noarch
openstack-nova-common-2015.1-dev19.el7.centos.noarch
openstack-nova-scheduler-2015.1-dev19.el7.centos.noarch
openstack-ceilometer-common-2015.1-dev2.el7.centos.noarch
openstack-nova-conductor-2015.1-dev19.el7.centos.noarch
openstack-neutron-common-2015.1-dev1.el7.centos.noarch
openstack-swift-object-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-central-2015.1-dev2.el7.centos.noarch
openstack-glance-2015.1-dev1.el7.centos.noarch
openstack-nova-compute-2015.1-dev19.el7.centos.noarch
openstack-neutron-ml2-2015.1-dev1.el7.centos.noarch
python-django-openstack-auth-1.3.0-0.99.20150421.2158git.el7.centos.noarch
openstack-swift-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-api-2015.1-dev2.el7.centos.noarch
openstack-swift-proxy-2.3.0rc1-post1.el7.centos.noarch
openstack-swift-container-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-collector-2015.1-dev2.el7.centos.noarch
openstack-nova-cert-2015.1-dev19.el7.centos.noarch
openstack-ceilometer-notification-2015.1-dev2.el7.centos.noarch
openstack-puppet-modules-2015.1-dev.2d3528a51091931caef06a5a8d1cfdaaa79d25ec_75763dd0.el7.centos.noarch
openstack-neutron-2015.1-dev1.el7.centos.noarch
openstack-dashboard-2015.1-dev2.el7.centos.noarch

Boris.

Date: Thu, 23 Apr 2015 00:21:03 +0200
Subject: Re: [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages
From: ak at cloudssky.com
To: bderzhavets at hotmail.com
CC: apevec at gmail.com; rdo-list at redhat.com

Hi,

I'm running a CentOS Linux release 7.1.1503 (Core) VM on OpenStack; I followed the steps and I'm getting:

10.0.0.16_prescript.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 10.0.0.16_prescript.pp
Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match. Force execution using --nocheck, but the results are unpredictable.

Thanks,
Arash

On Wed, Apr 22, 2015 at 6:29 PM, Boris Derzhavets wrote:

I made one more attempt at a `packstack --allinone` install on CentOS 7.1 KVM running on an F22 host. Finally, since the new "demo_net" created after the install completed had its interface in the "down" state, I dropped the "private" subnet from the same tenant "demo" (the one created by the installer), which switched the "demo_net" interface to "Active" status and let me launch a completely functional CirrOS VM via Horizon.
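For anyone wanting to replay that workaround from the CLI, it came down to roughly the following (a sketch, not the exact transcript; "router1", "private" and "private_subnet" are the names the installer used on my box and may differ on yours):

source keystonerc_demo
# detach the installer-created private subnet from the router, then drop it
neutron router-interface-delete router1 private_subnet
neutron subnet-delete private_subnet
neutron net-delete private
# after this the demo_net port switched to ACTIVE and the instance booted fine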
Then I reproduced the same procedure in a first-time environment created on CentOS 7.1 KVM running on an Ubuntu 15.04 host and got the same results. As soon as I dropped the "private" network created by the installer for the demo tenant, the interface for "demo_net" (created manually as a post-installation step) switched to "Active" status.

I still have an issue with openstack-nova-novncproxy.service:

[root at centos71 nova(keystone_admin)]# systemctl status openstack-nova-novncproxy.service -l
openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)
   Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; 18min ago
  Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 25663 (code=exited, status=1/FAILURE)

Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd.novncproxy import main
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd import baseproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.console import websocketproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: websockify.ProxyRequestHandler):
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'
Apr 22 18:41:51 centos71.localdomain systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE
Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit openstack-nova-novncproxy.service entered failed state.

Boris

From: bderzhavets at hotmail.com
To: apevec at gmail.com; rdo-list at redhat.com
Date: Wed, 22 Apr 2015 07:02:32 -0400
Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages

Alan,

# packstack --allinone completes successfully on CentOS 7.1. However, when attaching an interface for the private subnet to the neutron router (as demo or as admin), the port status is down. I tested it via Horizon and via the Neutron CLI; the result was the same. The launched instance (cirros) cannot access the nova metadata server to obtain its instance-id:

Lease of 50.0.0.12 obtained, lease time 86400
cirros-ds 'net' up at 7.14
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 7.47. request failed
failed 2/20: up 12.81. request failed
failed 3/20: up 15.82. request failed
. . . . . . . . .
failed 18/20: up 78.28. request failed
failed 19/20: up 81.27. request failed
failed 20/20: up 86.50. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 89.53. searched: nocloud configdrive ec2
failed to get instance-id of datasource

Thanks.
Boris

> Date: Wed, 22 Apr 2015 04:15:54 +0200
> From: apevec at gmail.com
> To: rdo-list at redhat.com
> Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
>
> Hi all,
>
> unofficial[*] Kilo RC builds are now available for testing. This
> snapshot completes packstack --allinone i.e. issue in provision_glance
> reported on IRC has been fixed.
>
> Quick installation HOWTO
>
> yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
> # Following works out-of-the-box on CentOS7
> # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
> yum install epel-release
> cd /etc/yum.repos.d
> curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo
>
> After above steps, regular Quickstart continues:
> yum install openstack-packstack
> packstack --allinone
>
> NB this snapshot has NOT been tested with rdo-management! If testing
> rdo-management, please follow their instructions.
>
> Cheers,
> Alan
>
> [*] Apr21 evening snapshot built from stable/kilo branches in
> Delorean Kilo instance, official RDO Kilo builds will come from CentOS
> CloudSIG CBS
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ak at cloudssky.com Thu Apr 23 18:32:54 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Thu, 23 Apr 2015 20:32:54 +0200
Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: 
References: 
Message-ID: 

With external IP, I mean the floating IP address which is assigned to the Kilo VM on OpenStack. I use an OpenVPN VM on OpenStack to connect to the internal IP 10.0.0.16 of the kilo-vm:

[root at kilo-rc1 ~]# openstack-status
== Nova services ==
openstack-nova-api: active
......
== Horizon service ==
openstack-dashboard: 500
...
neutron-lbaas-agent: inactive (disabled on boot)

(all other services are active)

and in the log file I see:

[root at kilo-rc1 ~]# tail -f /var/log/httpd/horizon_error.log
...
[Thu Apr 23 18:16:14.264150 2015] [:error] [pid 1489] [remote ::1:200] File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 132, in __init__
[Thu Apr 23 18:16:14.264160 2015] [:error] [pid 1489] [remote ::1:200] % (self.SETTINGS_MODULE, e)
[Thu Apr 23 18:16:14.264175 2015] [:error] [pid 1489] [remote ::1:200] ImportError: Could not import settings 'openstack_dashboard.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named angular_cookies
...
[Thu Apr 23 18:21:15.819875 2015] [:error] [pid 1489] [remote 10.0.0.15:200] self._wrapped = Settings(settings_module)
[Thu Apr 23 18:21:15.819881 2015] [:error] [pid 1489] [remote 10.0.0.15:200] File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 132, in __init__
[Thu Apr 23 18:21:15.819891 2015] [:error] [pid 1489] [remote 10.0.0.15:200] % (self.SETTINGS_MODULE, e)
[Thu Apr 23 18:21:15.819906 2015] [:error] [pid 1489] [remote 10.0.0.15:200] ImportError: Could not import settings 'openstack_dashboard.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named angular_cookies

No module named angular_cookies ????
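For what it's worth, that ImportError means Django could not import one of the XStatic modules that horizon's settings pull in. A quick way to check (a sketch; I am assuming the module ships in a python-XStatic-Angular-Cookies package, which is the usual naming for xstatic packages):

# does the Python module exist at all?
python -c "from xstatic.pkg import angular_cookies"
# is a matching package installed?
rpm -qa | grep -i xstatic | grep -i angular

If the import fails, installing the missing XStatic package and restarting httpd should get past this particular error.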
On Thu, Apr 23, 2015 at 7:23 PM, Alan Pevec wrote:

> > But I can't access the dashboard, over the external IP, I'm getting:
>
> Which external IP do you mean? Please explain your network setup.
>
> > Internal Server Error
>
> Please try service httpd restart then check for clues in
> /var/log/httpd/horizon_error.log.
> openstack-dashboard-2015.1-dev2.el7 worked for me but in a small VM
> with 2G RAM and 1G swap I've seen oom-killer in action and httpd was
> the first victim, ps_mem shows httpd using >300MB after short browsing
> openstack-dashboard pages...
>
> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mail at arif-ali.co.uk Thu Apr 23 22:12:01 2015
From: mail at arif-ali.co.uk (Arif Ali)
Date: Thu, 23 Apr 2015 23:12:01 +0100
Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: 
References: 
Message-ID: <55396E31.4030603@arif-ali.co.uk>

On 23/04/2015 19:32, Arash Kaffamanesh wrote:
> [...]
> No module named angular_cookies ????

Hi Arash,

I was getting a similar issue, and I changed the SECRET_KEY in /etc/openstack-dashboard/local_settings to some random letters, as in SECRET_KEY = '1245g789sdfgr697jg8asdn5', and my dashboard started to work.
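A slightly more robust variant of the same fix, for anyone who prefers a generated key (a sketch; it assumes local_settings defines SECRET_KEY on a single line):

KEY=$(openssl rand -hex 16)
sed -i "s/^SECRET_KEY.*/SECRET_KEY = '$KEY'/" /etc/openstack-dashboard/local_settings
systemctl restart httpd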
For reference, my error was as below, not sure if it is anything similar to yours [Thu Apr 23 19:04:47.408743 2015] [:error] [pid 9052] [remote 192.168.10.76:12] mod_wsgi (pid=9052): Exception occurred processing WSGI script '/usr/share/openstack-dashboard/open [Thu Apr 23 19:04:47.408851 2015] [:error] [pid 9052] [remote 192.168.10.76:12] Traceback (most recent call last): [Thu Apr 23 19:04:47.408889 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 187, in __call__ [Thu Apr 23 19:04:47.409042 2015] [:error] [pid 9052] [remote 192.168.10.76:12] self.load_middleware() [Thu Apr 23 19:04:47.409061 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 44, in load_middleware [Thu Apr 23 19:04:47.409214 2015] [:error] [pid 9052] [remote 192.168.10.76:12] for middleware_path in settings.MIDDLEWARE_CLASSES: [Thu Apr 23 19:04:47.409232 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__ [Thu Apr 23 19:04:47.409348 2015] [:error] [pid 9052] [remote 192.168.10.76:12] self._setup(name) [Thu Apr 23 19:04:47.409365 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 49, in _setup [Thu Apr 23 19:04:47.409389 2015] [:error] [pid 9052] [remote 192.168.10.76:12] self._wrapped = Settings(settings_module) [Thu Apr 23 19:04:47.409404 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/django/conf/__init__.py", line 128, in __init__ [Thu Apr 23 19:04:47.409426 2015] [:error] [pid 9052] [remote 192.168.10.76:12] mod = importlib.import_module(self.SETTINGS_MODULE) [Thu Apr 23 19:04:47.409442 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module [Thu Apr 23 19:04:47.409501 2015] [:error] [pid 9052] [remote 192.168.10.76:12] __import__(name) [Thu Apr 23 19:04:47.409517 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/settings. 
[Thu Apr 23 19:04:47.409658 2015] [:error] [pid 9052] [remote 192.168.10.76:12] from local.local_settings import * # noqa
[Thu Apr 23 19:04:47.409719 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/local/loc
[Thu Apr 23 19:04:47.409976 2015] [:error] [pid 9052] [remote 192.168.10.76:12] os.path.join(LOCAL_PATH, '.secret_key_store'))
[Thu Apr 23 19:04:47.409994 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/horizon/utils/secret_key.py", line 54, in generate_or_read
[Thu Apr 23 19:04:47.410062 2015] [:error] [pid 9052] [remote 192.168.10.76:12] with lock:
[Thu Apr 23 19:04:47.410077 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 217, in __enter__
[Thu Apr 23 19:04:47.410312 2015] [:error] [pid 9052] [remote 192.168.10.76:12] self.acquire()
[Thu Apr 23 19:04:47.410328 2015] [:error] [pid 9052] [remote 192.168.10.76:12] File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 200, in acquire
[Thu Apr 23 19:04:47.410351 2015] [:error] [pid 9052] [remote 192.168.10.76:12] self.lockfile = open(self.fname, 'a')
[Thu Apr 23 19:04:47.410386 2015] [:error] [pid 9052] [remote 192.168.10.76:12] IOError: [Errno 13] Permission denied: '/tmp/_tmp_.secret_key_store.lock'

I hope that this is helpful.

> On Thu, Apr 23, 2015 at 7:23 PM, Alan Pevec > wrote:
>
> > But I can't access the dashboard, over the external IP, I'm getting:
>
> Which external IP do you mean? Please explain your network setup.
>
> > Internal Server Error
>
> Please try service httpd restart then check for clues in
> /var/log/httpd/horizon_error.log.
> openstack-dashboard-2015.1-dev2.el7 worked for me but in a small VM
> with 2G RAM and 1G swap I've seen oom-killer in action and httpd was
> the first victim, ps_mem shows httpd using >300MB after short browsing
> openstack-dashboard pages...
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mrunge at redhat.com Fri Apr 24 06:20:12 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Fri, 24 Apr 2015 08:20:12 +0200
Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: 
References: 
Message-ID: <5539E09C.5050109@redhat.com>

On 23/04/15 18:36, Arash Kaffamanesh wrote:
> Not Found
>
> The requested URL /dashboard was not found on this server.
>
> and over the internal IP with:
>
> http://10.0.0.16/dashboard
>
> I'm getting:
>
> Internal Server Error
>
> The server encountered an internal error ...
>
> Also disabled the NetworkManager and rebooted, didn't help to get
> horizon working.
>
When you see that you are not being redirected from / to /dashboard, the puppet modules didn't add the redirection. Taking this together with your internal server error, I have the strong feeling your dashboard was installed, but the puppet modules didn't do the after-install configuration necessary for horizon to work; that would fit a package produced between last Friday and yesterday morning. That issue was patched and should be fixed in a later version. I cannot say why the puppet modules failed.
Matthias

From mrunge at redhat.com Fri Apr 24 06:21:59 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Fri, 24 Apr 2015 08:21:59 +0200
Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: <55396E31.4030603@arif-ali.co.uk>
References: <55396E31.4030603@arif-ali.co.uk>
Message-ID: <5539E107.1010701@redhat.com>

On 24/04/15 00:12, Arif Ali wrote:
>
> 192.168.10.76:12] IOError: [Errno 13] Permission denied:
> '/tmp/_tmp_.secret_key_store.lock'
>
> I hope that this is helpful.
>
This was fixed yesterday by https://review.gerrithub.io/#/c/230645/ and a following patch.

Matthias

From phaurep at gmail.com Fri Apr 24 08:32:43 2015
From: phaurep at gmail.com (pauline phaure)
Date: Fri, 24 Apr 2015 10:32:43 +0200
Subject: [Rdo-list] physical and virtual ressources matching
Message-ID: 

hey, please, I want to know, for OpenStack, what matching should exist between virtual resources and physical ones. Any idea? Should we stick to 1 vCPU = 1 CPU? What about memory and disk space?

pauline,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dtantsur at redhat.com Fri Apr 24 10:50:05 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 24 Apr 2015 12:50:05 +0200
Subject: [Rdo-list] [RDO-Manager] [AHC] allow matching without re-sending to ironic-discoverd
In-Reply-To: <5537D115.1090706@redhat.com>
References: <55366E64.60709@redhat.com> <5537D115.1090706@redhat.com>
Message-ID: <553A1FDD.3060308@redhat.com>

On 04/22/2015 06:49 PM, John Trowbridge wrote:
> On 04/21/2015 11:36 AM, John Trowbridge wrote:
>>
>> I would like to gather feedback on whether this approach seems
>> reasonable, or if there are any better suggestions to solve this problem.
>>
> I put up a POC patch for this here:
> https://review.gerrithub.io/#/c/230849/

As we discussed already, I'm somewhat worried that we put too much non-ramdisk code into something called rdo-ramdisk-tools :D Probably it's my fault: I didn't realize what you're planning to do. If we keep it that way, we might want to rename the package. Or even move both things to some existing common package - not sure.

>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From trown at redhat.com Fri Apr 24 11:43:02 2015
From: trown at redhat.com (John Trowbridge)
Date: Fri, 24 Apr 2015 07:43:02 -0400
Subject: Re: [Rdo-list] [RDO-Manager] [AHC] allow matching without re-sending to ironic-discoverd
In-Reply-To: <553A1FDD.3060308@redhat.com>
References: <55366E64.60709@redhat.com> <5537D115.1090706@redhat.com> <553A1FDD.3060308@redhat.com>
Message-ID: <553A2C46.7080300@redhat.com>

On 04/24/2015 06:50 AM, Dmitry Tantsur wrote:
> On 04/22/2015 06:49 PM, John Trowbridge wrote:
>> On 04/21/2015 11:36 AM, John Trowbridge wrote:
>>>
>>> I would like to gather feedback on whether this approach seems
>>> reasonable, or if there are any better suggestions to solve this
>>> problem.
>>>
>> I put up a POC patch for this here:
>> https://review.gerrithub.io/#/c/230849/
>
> As we discussed already, I'm somewhat worried that we put too much
> non-ramdisk code into something called rdo-ramdisk-tools :D Probably
> it's my fault: I didn't realize what you're planning to do. If we keep
> it that way, we might want to rename the package. Or even move both
> things to some existing common package - not sure.
I think that benchmark analysis and matching could stand alone as a package. I would like to include an openstack client plugin,[1] and it would also make sense for the distributed benchmark tools to be in this package when we get to that point.

The only place I could see this fitting currently is in the enovance/hardware library, as it is doing all of the heavy lifting in the current implementation. However, I would eventually like to decouple this operator tooling from the backend logic. I think this would be a cool idea to present to Ironic, and I see having a hard dependency on the hardware library as a deal breaker for that.

That said, the python ramdisk work that you are working on is really about benchmark collection. So it would make sense to include it in the same package as the other operator tooling focused on benchmarks.

What are your thoughts wrt just changing the package name?

[1] I was thinking of using an `openstack health` namespace.

From dtantsur at redhat.com Fri Apr 24 12:22:16 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 24 Apr 2015 14:22:16 +0200
Subject: Re: [Rdo-list] [RDO-Manager] [AHC] allow matching without re-sending to ironic-discoverd
In-Reply-To: <553A2C46.7080300@redhat.com>
References: <55366E64.60709@redhat.com> <5537D115.1090706@redhat.com> <553A1FDD.3060308@redhat.com> <553A2C46.7080300@redhat.com>
Message-ID: <553A3578.1020304@redhat.com>

On 04/24/2015 01:43 PM, John Trowbridge wrote:
>
> On 04/24/2015 06:50 AM, Dmitry Tantsur wrote:
>> On 04/22/2015 06:49 PM, John Trowbridge wrote:
>>> On 04/21/2015 11:36 AM, John Trowbridge wrote:
>>>>
>>>> I would like to gather feedback on whether this approach seems
>>>> reasonable, or if there are any better suggestions to solve this
>>>> problem.
>>>>
>>> I put up a POC patch for this here:
>>> https://review.gerrithub.io/#/c/230849/
>>
>> As we discussed already, I'm somewhat worried that we put too much
>> non-ramdisk code into something called rdo-ramdisk-tools :D Probably
>> it's my fault: I didn't realize what you're planning to do. If we keep
>> it that way, we might want to rename the package. Or even move both
>> things to some existing common package - not sure.
>
> I think that benchmark analysis and matching could stand alone as a
> package. I would like to include an openstack client plugin,[1] and it
> would also make sense for the distributed benchmark tools to be in this
> package when we get to that point.
>
> The only place I could see this fitting currently is in the
> enovance/hardware library, as it is doing all of the heavy lifting in
> the current implementation. However, I would eventually like to decouple
> this operator tooling from the backend logic. I think this would be a
> cool idea to present to Ironic, and I see having a hard dependency on
> the hardware library as a deal breaker for that.
>
> That said, the python ramdisk work that you are working on is really
> about benchmark collection. So it would make sense to include it in the
> same package as the other operator tooling focused on benchmarks.
>
> What are your thoughts wrt just changing the package name?

We had a quick chat off-channel, and I kind of changed my mind. People are not fond of calling something "rdo-ramdisk-tools", but I realized that nothing prevents me from putting the ramdisk script into the ironic-discoverd upstream repo. This way ironic-discoverd will be more feature-complete: we already have plugins there, but we don't have a ramdisk implementation, only a generic one in the ramdisk.
Once we start adopting IPA, this code may be a lot of help. Maybe we'll transform it into an IPA module. So it's up to you now how to call/where to put the AHC code :)

>
> [1] I was thinking of using an `openstack health` namespace.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From phaurep at gmail.com Fri Apr 24 13:12:26 2015
From: phaurep at gmail.com (pauline phaure)
Date: Fri, 24 Apr 2015 15:12:26 +0200
Subject: [Rdo-list] dashboard not displaying the right number of hypervisors
Message-ID: 

hello everybody,

As I have an installation of OpenStack with 2 servers (1 compute + 1 compute & controller), I was expecting to see two hypervisors in the dashboard, but I only see one. Is it normal?

Thanks in advance for your response,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mohammed.arafa at gmail.com Fri Apr 24 14:08:07 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Fri, 24 Apr 2015 10:08:07 -0400
Subject: [Rdo-list] rdo-manager - instack-install-undercloud fail
In-Reply-To: 
References: 
Message-ID: 

just an update: now that i have added

127.0.0.1 shortname.F.Q.D.N shortname

to /etc/hosts, i am now passing this point. so it looks like the documentation needs to be updated; who can take care of that pls?

thanks

On Wed, Apr 22, 2015 at 11:53 AM, Mohammed Arafa wrote:

> apologies
>
> i was looking up epmd and the port because of the rabbitmq status
> [stack at instack ~]$ sudo service rabbitmq-server status
> Redirecting to /bin/systemctl status rabbitmq-server.service
> rabbitmq-server.service - RabbitMQ broker
> Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service;
> disabled)
> Drop-In: /etc/systemd/system/rabbitmq-server.service.d
> └─limits.conf
> Active: failed (Result: exit-code) since Wed 2015-04-22 15:31:53 UTC;
> 15min ago
> Main PID: 10749 (code=exited, status=1/FAILURE)
> CGroup: /system.slice/rabbitmq-server.service
> > > On Wed, Apr 22, 2015 at 11:49 AM, Mohammed Arafa > wrote: > >> this might also be useful >> >> [root at instack ~]# netstat -tupane | grep 4369 >> tcp 0 0 0.0.0.0:4369 0.0.0.0:* >> LISTEN 0 62670 8536/epmd >> >> On Wed, Apr 22, 2015 at 11:46 AM, Mohammed Arafa < >> mohammed.arafa at gmail.com> wrote: >> >>> hi >>> >>> just did a reinstall and it failed. rabbitmq again. >>> >>> i am attaching the screen grabs. if someone wants the logs, i can send >>> them. but i expect to get rid of the instack vm later this afternoon if i >>> cannot get instack to run after fiddling with rabbitmq. >>> >>> logs again, pls specify which logs and the location you need >>> >>> my hosts file which worked yesterday >>> >>> [stack at instack ~]$ cat /etc/hosts >>> 127.0.0.1 localhost localhost.localdomain localhost4 >>> localhost4.localdomain4 >>> ::1 localhost localhost.localdomain localhost6 >>> localhost6.localdomain6 >>> >>> 127.0.0.1 instack.marafa.vm >>> >>> >>> pertinent output of openstack-status >>> >>> == Support services == >>> openvswitch: active >>> dbus: active >>> rabbitmq-server: failed (disabled on boot) >>> memcached: active >>> == Keystone users == >>> /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: >>> DeprecationWarning: The keystone CLI is deprecated in favor of >>> python-openstackclient. For a Python library, continue using >>> python-keystoneclient. >>> 'python-keystoneclient.', DeprecationWarning) >>> Could not find user: admin (Disable debug mode to suppress these >>> details.) (HTTP 401) (Request-ID: req-0037d194-7b75-4c9e-a48f-c8ac122a99ff) >>> == Glance images == >>> Could not find user: admin (Disable debug mode to suppress these >>> details.) (HTTP 401) (Request-ID: req-3fed983d-76b2-43cb-b9ff-b0445e470773) >>> == Nova managed services == >>> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >>> suppress these details.) (HTTP 401) (Request-ID: >>> req-1cafa2c3-fa56-4ea5-804b-4910c0fde9ba) >>> == Nova networks == >>> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >>> suppress these details.) (HTTP 401) (Request-ID: >>> req-3de8f916-c7c6-4e9e-a309-73359261366c) >>> == Nova instance flavors == >>> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >>> suppress these details.) (HTTP 401) (Request-ID: >>> req-6f360f3f-8f0f-4d65-b489-e7050df1fd49) >>> == Nova instances == >>> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to >>> suppress these details.) (HTTP 401) (Request-ID: >>> req-37f006f0-d121-4ba5-9663-717e1457d54d) >>> >>> >>> -- >>> >>> >>> >>> >>> *805010942448935* >>> >>> >>> *GR750055912MA* >>> >>> >>> *Link to me on LinkedIn * >>> >> >> >> >> -- >> >> >> >> >> *805010942448935* >> >> >> *GR750055912MA* >> >> >> *Link to me on LinkedIn * >> > > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Fri Apr 24 14:09:20 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 24 Apr 2015 10:09:20 -0400 Subject: [Rdo-list] rdo-manager python? In-Reply-To: References: Message-ID: my 3rd run on a fresh instack vm and i have no progress. i keep "pausing" at this point with the same symptoms anyone able and willing to help? On Wed, Apr 22, 2015 at 3:10 PM, Mohammed Arafa wrote: > my second run is on a fresh vm and it has the same issue. 
i am stuck > > thanks > > On Wed, Apr 22, 2015 at 2:04 PM, Mohammed Arafa > wrote: > >> so .. i edited instack's host file to show >> 127.0.0.1 instack.domain.tld instack #shortname added >> >> and on this pass rabbitmq worked then i got to neutron setup and it hung >> at >> >> + setup-neutron -n /tmp/tmp.miEe7xK1qL >> /usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30: >> UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for >> novaclient.v2). The preferable way to get client class or object you can >> find in novaclient.client module. >> warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis >> for " >> >> >> neutron logs were full of this: >> >> >> 2015-04-22 17:14:37.666 10981 DEBUG oslo_messaging._drivers.impl_rabbit >> [-] Received recoverable error from kombu: on_error >> /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:789 >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> Traceback (most recent call last): >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/kombu/utils/__init__.py", line 217, >> in retry_over_time >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit return fun(*args, **kwargs) >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 246, in >> connect >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit return self.connection >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 761, in >> connection >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit self._connection = >> self._establish_connection() >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 720, in >> _establish_connection >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit conn = >> self.transport.establish_connection() >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line >> 115, in establish_connection >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit conn = self.Connection(**opts) >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 180, in >> __init__ >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit (10, 30), # tune >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 67, >> in wait >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit self.channel_id, allowed_methods) >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 240, in >> _wait_method >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit self.method_reader.read_method() >> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit >> File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 189, >> in read_method >> 2015-04-22 17:14:37.666 10981 TRACE >> oslo_messaging._drivers.impl_rabbit raise m >> 2015-04-22 17:14:37.666 
10981 TRACE oslo_messaging._drivers.impl_rabbit
>> IOError: Socket closed
>> 2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit
>> 2015-04-22 17:14:37.667 10981 ERROR oslo_messaging._drivers.impl_rabbit
>> [-] AMQP server 192.0.2.1:5672 closed the connection. Check login
>> credentials: Socket closed
>>
>> --
>>
>> *805010942448935*
>>
>> *GR750055912MA*
>>
>> *Link to me on LinkedIn *
>>
> --
>
> *805010942448935*
>
> *GR750055912MA*
>
> *Link to me on LinkedIn *

--

*805010942448935*

*GR750055912MA*

*Link to me on LinkedIn *
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jeff.dexter at servicemesh.com Fri Apr 24 15:11:14 2015
From: jeff.dexter at servicemesh.com (Jeff Dexter)
Date: Fri, 24 Apr 2015 11:11:14 -0400
Subject: [Rdo-list] dashboard not displaying the right number of hypervisors
In-Reply-To: 
References: 
Message-ID: 

Pauline,

Check to make sure nova-compute is running on both nodes.

$ openstack-service status nova

You can also check the nova services.

[admin] $ nova service-list

This will show all of your nova services: which are up, which are enabled, and the last time they checked in with the API server.

From: pauline phaure
Date: Friday, April 24, 2015 at 9:12 AM
To: rdo-list
Subject: [Rdo-list] dashboard not displaying the right number of hypervisors

hello everybody, As I have an installation of OpenStack with 2 servers (1 compute + 1 compute & controller), I was expecting to see two hypervisors in the dashboard, but I only see one. Is it normal? Thanks in advance for your response,

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From roxenham at redhat.com Fri Apr 24 19:17:33 2015
From: roxenham at redhat.com (Rhys Oxenham)
Date: Fri, 24 Apr 2015 15:17:33 -0400
Subject: Re: [Rdo-list] physical and virtual ressources matching
In-Reply-To: 
References: 
Message-ID: <912F04E4-9F4F-4271-806D-E016C58D36B7@redhat.com>

By default, I believe it operates a 16:1 vCPU:physical and 1.5:1 on the memory. It's possible to modify these values, of course, but ultimately it depends on your workload choices.

Cheers
Rhys

> On 24 Apr 2015, at 04:32, pauline phaure wrote:
>
> hey, please, I want to know, for OpenStack, what matching should exist
> between virtual resources and physical ones. Any idea? Should we stick to
> 1 vCPU = 1 CPU? What about memory and disk space?
>
> pauline,
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From ayoung at redhat.com Fri Apr 24 21:14:31 2015
From: ayoung at redhat.com (Adam Young)
Date: Fri, 24 Apr 2015 17:14:31 -0400
Subject: [Rdo-list] Objective - Feasible ?
In-Reply-To: 
References: 
Message-ID: <553AB237.2020304@redhat.com>

On 04/23/2015 04:59 AM, Outback Dingo wrote:
> Hi we are looking to deploy a new lab, for the feature set we would
> like the following
>
> RDO Centos 7.1 Kilo with XENServer 6.5, RiakCS and OpenDayLight
> controller.
>
> Basically We prefer XenServer to KVM, and wish to roll a RiakCS
> storage cluster, for Networking OpenDayLight managing the Network
> pieces, Ive figured out how to deploy the pieces, XenServer, no
> brainer..... RiakCS in a 3 node cluster ok.... check
> OpenDayLight.... on a node...... check... is it possible to use RDO to
> "wrap" them all together into a viable working solution, im not afraid
> of some manual intervention.
> Any insight into the pieces is welcome but it might be out of RDOs
> capabilities.

RDO doesn't cover OpenDaylight. Unless someone has made OpenDaylight to Keystone integration, I think you are going to have an authorization quagmire on your hands.

I don't think Nova is capable of talking to OpenDaylight, so booting a compute node and getting it on the Network is going to be broken at worst, problematic at best.

No clue on how good the Xen support is (suspect it is OK), but I think that is the least of your problems with this setup.

> Thanks
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sgordon at redhat.com Fri Apr 24 22:17:30 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Fri, 24 Apr 2015 18:17:30 -0400 (EDT)
Subject: Re: [Rdo-list] Objective - Feasible ?
In-Reply-To: <553AB237.2020304@redhat.com>
References: <553AB237.2020304@redhat.com>
Message-ID: <2093944359.7795211.1429913850634.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Adam Young"
> To: rdo-list at redhat.com
>
> On 04/23/2015 04:59 AM, Outback Dingo wrote:
> > Hi we are looking to deploy a new lab, for the feature set we would
> > like the following
> >
> > RDO Centos 7.1 Kilo with XENServer 6.5, RiakCS and OpenDayLight
> > controller.
> >
> > Basically We prefer XenServer to KVM, and wish to roll a RiakCS
> > storage cluster, for Networking OpenDayLight managing the Network
> > pieces, Ive figured out how to deploy the pieces, XenServer, no
> > brainer..... RiakCS in a 3 node cluster ok.... check
> > OpenDayLight.... on a node...... check... is it possible to use RDO to
> > "wrap" them all together into a viable working solution, im not afraid
> > of some manual intervention.
> > Any insight into the pieces is welcome but it might be out of RDOs
> > capabilities.
>
> RDO doesn't cover OpenDaylight. Unless someone has made OpenDaylight to
> Keystone integration, I think you are going to have an authorization
> quagmire on your hands.

Dan Radez and others had I believe done some work on PackStack integration of ODL in support of OPNFV.

> I don't think Nova is capable of talking to OpenDaylight, so booting a
> compute node and getting it on the Network is going to be broken at
> worst, problematic at best.

I may be missing something here, but how is the ODL ML2 plugin different to any other ML2 plugin in this respect? Per the above there has been some work done on this, and testing of it, albeit focused on KVM in the context of OPNFV.

> No clue on how good the Xen support is (suspect it is OK), but I think
> that is the least of your problems with this setup.

Thanks,

Steve

From sgordon at redhat.com Fri Apr 24 22:19:40 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Fri, 24 Apr 2015 18:19:40 -0400 (EDT)
Subject: [Rdo-list] Objective - Feasible ?
In-Reply-To: <2093944359.7795211.1429913850634.JavaMail.zimbra@redhat.com> References: <553AB237.2020304@redhat.com> <2093944359.7795211.1429913850634.JavaMail.zimbra@redhat.com> Message-ID: <1381531955.7795357.1429913980462.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Steve Gordon" > To: "Adam Young" > ----- Original Message ----- > > From: "Adam Young" > > To: rdo-list at redhat.com > > > > On 04/23/2015 04:59 AM, Outback Dingo wrote: > > > Hi we are looking to deploy a new lab, for the feature set we would > > > like the following > > > > > > RDO Centos 7.1 Kilo with XENServer 6.5, RiakCS and OpenDayLight > > > controller. > > > > > > Basically We prefer XenServer to KVM, and wish to roll a RiakCS > > > storage cluster, > > > for Networking OpenDayLight managing the Network pieces, Ive figured > > > out how to > > > deploy the pieces, XenServer, no brainer..... RiakCS in a 3 node > > > cluster ok.... check > > > OpenDayLight.... on a node...... check... is it possible to use RDO to > > > "wrap" them all > > > together into a viable working solution, im not afraid of some manual > > > intervention. > > > Any insight into the pieces is welcome but it might be out of RDOs > > > capabilities. > > > > RDO doesn't cover OpenDaylight. Unless someone has made OpenDaylight to > > Keystone integration, I think you are going to have an authorization > > quagmire on your hands. > > Dan Radez and others had I believe done some work on PackStack integration of > ODL in support of OPNFV. > > > I don't think Nova is capable of talking to OpenDaylight, so booting a > > compute node and getting it on the Network is going to be broken at > > worst, problematic at best. > > I may be missing something here, but how is the ODL ML2 plugin different to > any other ML2 plugin in this respect? Per the above there has been some work > done on this, and testing of it, albeit focused on KVM in the context of > OPNFV. > > > No clue on how good the Xen support is (suspect it is OK), but I think > > that is the least of your problems with this setup. > > Thanks, > > Steve I'll wait for Dan to follow up with regards to PackStack integration but this may help in the meantime: https://www.rdoproject.org/Helium_OpenDaylight_Juno_OpenStack Thanks, Steve From apevec at gmail.com Sat Apr 25 21:29:45 2015 From: apevec at gmail.com (Alan Pevec) Date: Sat, 25 Apr 2015 23:29:45 +0200 Subject: [Rdo-list] Objective - Feasible ? In-Reply-To: <1381531955.7795357.1429913980462.JavaMail.zimbra@redhat.com> References: <553AB237.2020304@redhat.com> <2093944359.7795211.1429913850634.JavaMail.zimbra@redhat.com> <1381531955.7795357.1429913980462.JavaMail.zimbra@redhat.com> Message-ID: >> Dan Radez and others had I believe done some work on PackStack integration of >> ODL in support of OPNFV. > > I'll wait for Dan to follow up with regards to PackStack integration but this may help in the meantime: > https://www.rdoproject.org/Helium_OpenDaylight_Juno_OpenStack This wiki should probably now redirect to OPNFV BGS project. Not Packstack but Quickstack manifests were used as the basis for work Dan did in OPNFV Project Bootstrap/Get started (Genesis): https://gerrit.opnfv.org/gerrit/gitweb?p=genesis.git;a=blob;f=foreman/docs/src/release-notes.rst;hb=HEAD Cheers, Alan From outbackdingo at gmail.com Sat Apr 25 23:29:49 2015 From: outbackdingo at gmail.com (Outback Dingo) Date: Sun, 26 Apr 2015 09:29:49 +1000 Subject: [Rdo-list] Objective - Feasible ? 
In-Reply-To: 
References: <553AB237.2020304@redhat.com> <2093944359.7795211.1429913850634.JavaMail.zimbra@redhat.com> <1381531955.7795357.1429913980462.JavaMail.zimbra@redhat.com>
Message-ID: 

On Sun, Apr 26, 2015 at 7:29 AM, Alan Pevec wrote:

> >> Dan Radez and others had I believe done some work on PackStack
> >> integration of ODL in support of OPNFV.
> >
> > I'll wait for Dan to follow up with regards to PackStack integration but
> > this may help in the meantime:
> > https://www.rdoproject.org/Helium_OpenDaylight_Juno_OpenStack
>
> This wiki should probably now redirect to OPNFV BGS project.
> Not Packstack but Quickstack manifests were used as the basis for work
> Dan did in OPNFV Project Bootstrap/Get started (Genesis):
> https://gerrit.opnfv.org/gerrit/gitweb?p=genesis.git;a=blob;f=foreman/docs/src/release-notes.rst;hb=HEAD
>

Interesting read, now if only i could find this iso they mention in

4.3.1 Software deliverables
Foreman/QuickStack at OPNFV .iso file
deploy.sh - Automatically deploys Target OPNFV System to Bare Metal

> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ak at cloudssky.com Sun Apr 26 16:24:20 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Sun, 26 Apr 2015 18:24:20 +0200
Subject: Re: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: <5539E107.1010701@redhat.com>
References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com>
Message-ID: 

I made 2 more attempts and installed the delorean release on bare metal:

1st install: AIO
2nd install: 2 node (controller + compute)

With the AIO install the dashboard worked fine. With the 2nd install, I got "Bad Request (400)" when accessing the dashboard, and it seems that the DocumentRoot in the horizon vhost generated by puppet is not correct, so I changed it in /etc/httpd/conf.d/15-horizon_vhost.conf:

## Vhost docroot
#DocumentRoot "/var/www/"
DocumentRoot "/usr/share/openstack-dashboard/static/"

Now the dashboard is working here too.
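A quick way to confirm which docroot Apache is actually serving, and that the config is still sane after the edit (a sketch):

grep -n DocumentRoot /etc/httpd/conf.d/15-horizon_vhost.conf
apachectl configtest && systemctl restart httpd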
(if I re-run packstack, the vhost will get overwritten again with the wrong docroot)

Another issue: after spawning a new cirros instance, it goes into the error state, and on the compute node I'm getting:

[root at compute1 ~]# tail -f /var/log/nova/nova-compute.log
2015-04-26 11:17:16.027 13142 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0
2015-04-26 11:17:16.027 13142 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-04-26 11:17:16.063 13142 WARNING nova.compute.resource_tracker [-] No service record for host compute1

But compute1 is enabled:

[root at csky06 ~(keystone_admin)]# nova service-list
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2015-04-26T15:23:41.000000 | - |
| 2 | nova-scheduler | controller | internal | enabled | up | 2015-04-26T15:23:41.000000 | - |
| 3 | nova-conductor | controller | internal | enabled | up | 2015-04-26T15:23:40.000000 | - |
| 5 | nova-cert | controller | internal | enabled | up | 2015-04-26T15:23:40.000000 | - |
| 6 | nova-compute | compute1 | nova | enabled | up | 2015-04-26T15:23:39.000000 | - |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+

And if I call nova hypervisor-list, the list is empty:

[root at controller (keystone_admin)]# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
+----+---------------------+

Not sure if that has something to do with the empty hypervisor list (?).

Thanks!

On Fri, Apr 24, 2015 at 8:21 AM, Matthias Runge wrote:

> On 24/04/15 00:12, Arif Ali wrote:
>
>> 192.168.10.76:12] IOError: [Errno 13] Permission denied:
>> '/tmp/_tmp_.secret_key_store.lock'
>>
>> I hope that this is helpful.
>>
> This was fixed yesterday by https://review.gerrithub.io/#/c/230645/ and
> a following patch.
>
> Matthias
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bderzhavets at hotmail.com Sun Apr 26 16:52:36 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Sun, 26 Apr 2015 12:52:36 -0400
Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: 
References: , , , , , , , , <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com>, 
Message-ID: 
I've also I've got same "WARNING nova.compute.resource_tracker [-] No service record for host compute1" in /var/log/nova/nova-compute.log on Compute node. Boris. Date: Sun, 26 Apr 2015 18:24:20 +0200 From: ak at cloudssky.com To: mrunge at redhat.com CC: rdo-list at redhat.com Subject: Re: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages I made 2 more attempts and installed the delorean release on bare metal: 1st install: AIO2nd install: 2 node (controller + compute) By the AIO install the dashboard worked fine. By the 2nd install, I got "Bad Request (400)" by accessing the dashboard and it seems that the DocumentRoot in horizon vhost generated by puppet is not correct, so I changed it in: /etc/httpd/conf.d/15-horizon_vhost.conf ## Vhost docroot #DocumentRoot "/var/www/" DocumentRoot "/usr/share/openstack-dashboard/static/" Now the dashboard is working here too. (if I re-run packstack the vhost will get overwritten again with the wrong docroot) Another issue, after spawning a new cirros instance, it goes into error state and on the compute node I'm getting: [root at compute1 ~]# tail -f /var/log/nova/nova-compute.log 2015-04-26 11:17:16.027 13142 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 8, total allocated vcpus: 0 2015-04-26 11:17:16.027 13142 AUDIT nova.compute.resource_tracker [-] PCI stats: [] 2015-04-26 11:17:16.063 13142 WARNING nova.compute.resource_tracker [-] No service record for host compute1 But the compute1 is enabled: [root at csky06 ~(keystone_admin)]# nova service-list +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ | 1 | nova-consoleauth | controller | internal | enabled | up | 2015-04-26T15:23:41.000000 | - | | 2 | nova-scheduler | controller | internal | enabled | up | 2015-04-26T15:23:41.000000 | - | | 3 | nova-conductor | controller | internal | enabled | up | 2015-04-26T15:23:40.000000 | - | | 5 | nova-cert | controller | internal | enabled | up | 2015-04-26T15:23:40.000000 | - | | 6 | nova-compute | compute1 | nova | enabled | up | 2015-04-26T15:23:39.000000 | - | +----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+ And if I call nova hypervisor-list, the list is empty [root at controller (keystone_admin)]# nova hypervisor-list +----+---------------------+ | ID | Hypervisor hostname | +----+---------------------+ +----+---------------------+ Not sure if that has something to do with the empty hypervisor list (?). Thanks! On Fri, Apr 24, 2015 at 8:21 AM, Matthias Runge wrote: On 24/04/15 00:12, Arif Ali wrote: 192.168.10.76:12] IOError: [Errno 13] Permission denied: '/tmp/_tmp_.secret_key_store.lock' I hope that this helpful This was fixed yesterday by https://review.gerrithub.io/#/c/230645/ and following patch. Matthias _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From phaurep at gmail.com Mon Apr 27 07:24:10 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 27 Apr 2015 09:24:10 +0200
Subject: Re: [Rdo-list] physical and virtual ressources matching
In-Reply-To: <912F04E4-9F4F-4271-806D-E016C58D36B7@redhat.com>
References: <912F04E4-9F4F-4271-806D-E016C58D36B7@redhat.com>
Message-ID: 

ok, thank you Rhys. I checked those values and they were as you said, but I still have an issue. If we apply those ratios to my available resources, my dashboard should display vcpu = 16*16 (as I have 16 cores on my server) and memory = 1.5*12 (as I have 12 GB), BUUUT it only displays vcpu=16 and memory=12.

any hint?

2015-04-24 21:17 GMT+02:00 Rhys Oxenham :

> By default, I believe it operates a 16:1 vCPU:physical and 1.5:1 on the
> memory. It's possible to modify these values, of course, but ultimately it
> depends on your workload choices.
>
> Cheers
> Rhys
>
> > On 24 Apr 2015, at 04:32, pauline phaure wrote:
> >
> > hey, please, I want to know, for OpenStack, what matching should exist
> > between virtual resources and physical ones. Any idea? Should we stick
> > to 1 vCPU = 1 CPU? What about memory and disk space?
> >
> > pauline,
> > _______________________________________________
> > Rdo-list mailing list
> > Rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From phaurep at gmail.com Mon Apr 27 07:26:11 2015
From: phaurep at gmail.com (pauline phaure)
Date: Mon, 27 Apr 2015 09:26:11 +0200
Subject: [Rdo-list] OPNFV and RDO
Message-ID: 

hey everybody, is there anyone who has already deployed OPNFV + RDO?

pauline,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dtantsur at redhat.com Mon Apr 27 09:50:03 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 27 Apr 2015 11:50:03 +0200
Subject: Re: [Rdo-list] rdo-manager - instack-install-undercloud fail
In-Reply-To: 
References: 
Message-ID: <553E064B.2080004@redhat.com>

On 04/24/2015 04:08 PM, Mohammed Arafa wrote:
> just an update
> now that i have added
> 127.0.0.1 shortname.F.Q.D.N shortname
> to /etc/hosts .. i am now passing this point
>
> so .. it looks like the documentation needs to be updated
> who can take care of that pls?

Hi! I see a note under point 3 in https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/install-undercloud.html

Is it what you're looking for?
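(For reference, the /etc/hosts entry that unblocked this install earlier in the thread had the form below; the hostname itself is machine-specific:

127.0.0.1 instack.domain.tld instack

i.e. the fully qualified name first, then the short name, both resolving to the loopback address.)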
> thanks
>
> On Wed, Apr 22, 2015 at 11:53 AM, Mohammed Arafa wrote:
>
> apologies
>
> i was looking up epmd and the port because of the rabbitmq status
>
> [stack at instack ~]$ sudo service rabbitmq-server status
> Redirecting to /bin/systemctl status rabbitmq-server.service
> rabbitmq-server.service - RabbitMQ broker
>    Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; disabled)
>   Drop-In: /etc/systemd/system/rabbitmq-server.service.d
>            └─limits.conf
>    Active: failed (Result: exit-code) since Wed 2015-04-22 15:31:53 UTC; 15min ago
>  Main PID: 10749 (code=exited, status=1/FAILURE)
>    CGroup: /system.slice/rabbitmq-server.service
>
> Apr 22 15:31:50 instack.marafa.vm rabbitmqctl[10778]: Error: unable to connect to node rabbit at instack: nodedown
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: DIAGNOSTICS
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: ===========
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: attempted to contact: [rabbit at instack]
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: rabbit at instack:
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: * unable to connect to epmd (port 4369) on instack: address (cannot connect to host/port)
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: current node details:
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: - node name: rabbitmqctl10778 at instack
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: - home dir: /var/lib/rabbitmq
> Apr 22 15:31:53 instack.marafa.vm rabbitmqctl[10778]: - cookie hash: Nbo8xxB/ssykTxV/kUOjdQ==
> Apr 22 15:31:53 instack.marafa.vm systemd[1]: rabbitmq-server.service: control process exited, code=exited status=2
> Apr 22 15:31:53 instack.marafa.vm systemd[1]: Failed to start RabbitMQ broker.
> Apr 22 15:31:53 instack.marafa.vm systemd[1]: Unit rabbitmq-server.service entered failed state.
>
> On Wed, Apr 22, 2015 at 11:49 AM, Mohammed Arafa wrote:
>
> this might also be useful
>
> [root at instack ~]# netstat -tupane | grep 4369
> tcp  0  0  0.0.0.0:4369  0.0.0.0:*  LISTEN  0  62670  8536/epmd
>
> On Wed, Apr 22, 2015 at 11:46 AM, Mohammed Arafa wrote:
>
> hi
>
> just did a reinstall and it failed. rabbitmq again.
>
> i am attaching the screen grabs. if someone wants the logs, i can send them. but i expect to get rid of the instack vm later this afternoon if i cannot get instack to run after fiddling with rabbitmq.
>
> about logs again: please specify which logs and the location you need
>
> my hosts file which worked yesterday
>
> [stack at instack ~]$ cat /etc/hosts
> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
> 127.0.0.1 instack.marafa.vm
>
> pertinent output of openstack-status
>
> == Support services ==
> openvswitch: active
> dbus: active
> rabbitmq-server: failed (disabled on boot)
> memcached: active
> == Keystone users ==
> /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient. 'python-keystoneclient.', DeprecationWarning)
> Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-0037d194-7b75-4c9e-a48f-c8ac122a99ff)
> == Glance images ==
> Could not find user: admin (Disable debug mode to suppress these details.)
> (HTTP 401) (Request-ID: req-3fed983d-76b2-43cb-b9ff-b0445e470773)
> == Nova managed services ==
> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-1cafa2c3-fa56-4ea5-804b-4910c0fde9ba)
> == Nova networks ==
> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-3de8f916-c7c6-4e9e-a309-73359261366c)
> == Nova instance flavors ==
> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-6f360f3f-8f0f-4d65-b489-e7050df1fd49)
> == Nova instances ==
> ERROR (Unauthorized): Could not find user: admin (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-37f006f0-d121-4ba5-9663-717e1457d54d)
>
> --
> 805010942448935
> GR750055912MA
> Link to me on LinkedIn
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From mohammed.arafa at gmail.com Mon Apr 27 10:39:18 2015
From: mohammed.arafa at gmail.com (Mohammed Arafa)
Date: Mon, 27 Apr 2015 06:39:18 -0400
Subject: Re: [Rdo-list] physical and virtual resources matching
In-Reply-To: References: <912F04E4-9F4F-4271-806D-E016C58D36B7@redhat.com>
Message-ID: 

The horizon dashboard shows project quotas

On Apr 27, 2015 3:26 AM, "pauline phaure" wrote:
> OK, thank you Rhys. I checked those values and they were as you said, but I
> still have an issue. If we apply those ratios to my available resources, my
> dashboard should display vcpu = 16*16 = 256 (as I have 16 cores on my server)
> and memory = 1.5*12 = 18 GB (as I have 12 GB), BUT it only displays vcpu=16
> and memory=12.
>
> any hint ?
>
> 2015-04-24 21:17 GMT+02:00 Rhys Oxenham :
>> By default, I believe it operates at a 16:1 vCPU:physical ratio and 1.5:1 on the
>> memory. It's possible to modify these values, of course, but ultimately it
>> depends on your workload choices.
>>
>> Cheers
>> Rhys
>>
>> > On 24 Apr 2015, at 04:32, pauline phaure wrote:
>> >
>> > hey, please, I want to know, for OpenStack, what matching should exist
>> > between virtual resources and physical ones. Any idea? Should we stick to
>> > 1 vCPU = 1 CPU? What about the memory and disk space?
>> >
>> > pauline,
>> > _______________________________________________
>> > Rdo-list mailing list
>> > Rdo-list at redhat.com
>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >
>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jeff.dexter at servicemesh.com Mon Apr 27 10:58:45 2015
From: jeff.dexter at servicemesh.com (Jeff Dexter)
Date: Mon, 27 Apr 2015 06:58:45 -0400
Subject: Re: [Rdo-list] physical and virtual resources matching
In-Reply-To: References: <912F04E4-9F4F-4271-806D-E016C58D36B7@redhat.com>
Message-ID: <94C50943-5A35-415B-B883-AB29BAD3C49C@servicemesh.com>

When the stats are displayed, it only shows the raw values for the systems, not the capacity available after applying the ratios.

Sent from my iPhone

> On Apr 27, 2015, at 6:39 AM, Mohammed Arafa wrote:
>
> The horizon dashboard shows project quotas
>
>> On Apr 27, 2015 3:26 AM, "pauline phaure" wrote:
>> OK, thank you Rhys. I checked those values and they were as you said, but I
>> still have an issue. If we apply those ratios to my available resources, my
>> dashboard should display vcpu = 16*16 = 256 (as I have 16 cores on my server)
>> and memory = 1.5*12 = 18 GB (as I have 12 GB), BUT it only displays vcpu=16
>> and memory=12.
>>
>> any hint ?
>>
>> 2015-04-24 21:17 GMT+02:00 Rhys Oxenham :
>>> By default, I believe it operates at a 16:1 vCPU:physical ratio and 1.5:1 on the
>>> memory. It's possible to modify these values, of course, but ultimately it
>>> depends on your workload choices.
>>>
>>> Cheers
>>> Rhys
>>>
>>> > On 24 Apr 2015, at 04:32, pauline phaure wrote:
>>> >
>>> > hey, please, I want to know, for OpenStack, what matching should exist
>>> > between virtual resources and physical ones. Any idea? Should we stick to
>>> > 1 vCPU = 1 CPU? What about the memory and disk space?
>>> >
>>> > pauline,
>>> > _______________________________________________
>>> > Rdo-list mailing list
>>> > Rdo-list at redhat.com
>>> > https://www.redhat.com/mailman/listinfo/rdo-list
>>> >
>>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From apevec at gmail.com Mon Apr 27 12:19:23 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 27 Apr 2015 14:19:23 +0200
Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
In-Reply-To: References: Message-ID: 

> Quick installation HOWTO
>
> yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
> # Following works out-of-the-box on CentOS7
> # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
> yum install epel-release
> cd /etc/yum.repos.d

Updated snapshot with RC2 builds:

curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2/delorean-kilo.repo

I'll remove the rc1-Apr21 snapshot to avoid confusion.
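For testers starting from scratch, the whole sequence on a fresh CentOS 7 box then looks roughly like this (a sketch only; the packstack step is just one way to exercise the packages):

  yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
  yum install epel-release
  cd /etc/yum.repos.d
  curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2/delorean-kilo.repo
  yum install openstack-packstack
  packstack --allinone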
Cheers,
Alan

From roxenham at redhat.com Mon Apr 27 13:01:35 2015
From: roxenham at redhat.com (Rhys Oxenham)
Date: Mon, 27 Apr 2015 09:01:35 -0400
Subject: Re: [Rdo-list] physical and virtual resources matching
In-Reply-To: <94C50943-5A35-415B-B883-AB29BAD3C49C@servicemesh.com>
References: <912F04E4-9F4F-4271-806D-E016C58D36B7@redhat.com> <94C50943-5A35-415B-B883-AB29BAD3C49C@servicemesh.com>
Message-ID: <38C4690C-40B3-4EF7-B986-F1D7963D57EF@redhat.com>

> On 27 Apr 2015, at 06:58, Jeff Dexter wrote:
>
> When the stats are displayed, it only shows the raw values for the systems, not the capacity available after applying the ratios.

Exactly. If you watch the nova-compute log on one of your hypervisors, it will show the figures it's reporting back to the scheduler.

Cheers
Rhys

> Sent from my iPhone
>
> On Apr 27, 2015, at 6:39 AM, Mohammed Arafa wrote:
>
>> The horizon dashboard shows project quotas
>>
>> On Apr 27, 2015 3:26 AM, "pauline phaure" wrote:
>> OK, thank you Rhys. I checked those values and they were as you said, but I
>> still have an issue. If we apply those ratios to my available resources, my
>> dashboard should display vcpu = 16*16 = 256 (as I have 16 cores on my server)
>> and memory = 1.5*12 = 18 GB (as I have 12 GB), BUT it only displays vcpu=16
>> and memory=12.
>>
>> any hint ?
>>
>> 2015-04-24 21:17 GMT+02:00 Rhys Oxenham :
>> By default, I believe it operates at a 16:1 vCPU:physical ratio and 1.5:1 on the
>> memory. It's possible to modify these values, of course, but ultimately it
>> depends on your workload choices.
>>
>> Cheers
>> Rhys
>>
>> > On 24 Apr 2015, at 04:32, pauline phaure wrote:
>> >
>> > hey, please, I want to know, for OpenStack, what matching should exist
>> > between virtual resources and physical ones. Any idea? Should we stick to
>> > 1 vCPU = 1 CPU? What about the memory and disk space?
>> >
>> > pauline,
>> > _______________________________________________
>> > Rdo-list mailing list
>> > Rdo-list at redhat.com
>> > https://www.redhat.com/mailman/listinfo/rdo-list
>> >
>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From Yaniv.Kaul at emc.com Mon Apr 27 13:14:21 2015
From: Yaniv.Kaul at emc.com (Kaul, Yaniv)
Date: Mon, 27 Apr 2015 09:14:21 -0400
Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
In-Reply-To: References: Message-ID: <648473255763364B961A02AC3BE1060D03D0B8F028@MX19A.corp.emc.com>

Does http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.repo point to it, or do I need to update my links?
> -----Original Message-----
> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Alan Pevec
> Sent: Monday, April 27, 2015 3:19 PM
> To: Rdo-list at redhat.com
> Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages
>
> > Quick installation HOWTO
> >
> > yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
> > # Following works out-of-the-box on CentOS7
> > # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
> > yum install epel-release
> > cd /etc/yum.repos.d
>
> Updated snapshot with RC2 builds:
> curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2/delorean-kilo.repo
>
> I'll remove the rc1-Apr21 snapshot to avoid confusion.
>
> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From apevec at gmail.com Mon Apr 27 13:53:54 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 27 Apr 2015 15:53:54 +0200
Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
In-Reply-To: <648473255763364B961A02AC3BE1060D03D0B8F028@MX19A.corp.emc.com>
References: <648473255763364B961A02AC3BE1060D03D0B8F028@MX19A.corp.emc.com>
Message-ID: 

2015-04-27 15:14 GMT+02:00 Kaul, Yaniv :
> Does http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.repo point to it or do I need to update my links?

No, Delorean Trunk is building from master branches, which are now open for Liberty changes; the RC2 snap is built by the separate Delorean Kilo instance http://trunk.rdoproject.org/kilo/centos7/ building stable/kilo branches. I'll update the symlink there once I get confirmation from the rdo-management team. openstack-trunk/rc2 is a snapshot of http://trunk.rdoproject.org/kilo/centos7/93/b0/93b0d5fce3a41e4a3a549f98f78b6681cbc3ea95_2e85c529/

Cheers,
Alan

From rdo-info at redhat.com Mon Apr 27 14:28:19 2015
From: rdo-info at redhat.com (RDO Forum)
Date: Mon, 27 Apr 2015 14:28:19 +0000
Subject: [Rdo-list] [RDO] RDO blog roundup, April 27, 2015
Message-ID: <0000014cfb4759db-e56610eb-c94c-4a54-9696-7d8c66892fb6-000000@email.amazonses.com>

rbowen started a discussion. RDO blog roundup, April 27, 2015 --- Follow the link below to check it out: https://www.rdoproject.org/forum/discussion/1013/rdo-blog-roundup-april-27-2015 Have a great day!

From hguemar at fedoraproject.org Mon Apr 27 15:00:03 2015
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 27 Apr 2015 15:00:03 +0000 (UTC)
Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting
Message-ID: <20150427150003.13E146004C66@fedocal02.phx2.fedoraproject.org>

Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-04-29 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From apevec at gmail.com Mon Apr 27 16:21:37 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 27 Apr 2015 18:21:37 +0200
Subject: [Rdo-list] Objective - Feasible ?
In-Reply-To: References: <553AB237.2020304@redhat.com> <2093944359.7795211.1429913850634.JavaMail.zimbra@redhat.com> <1381531955.7795357.1429913980462.JavaMail.zimbra@redhat.com>
Message-ID: 

2015-04-26 1:29 GMT+02:00 Outback Dingo :
>> >> Dan Radez and others had, I believe, done some work on PackStack
>> >> integration of ODL in support of OPNFV.
>> > I'll wait for Dan to follow up with regards to PackStack integration but
>> > this may help in the meantime:
>> > https://www.rdoproject.org/Helium_OpenDaylight_Juno_OpenStack
>> This wiki should probably now redirect to the OPNFV BGS project.
>> Not Packstack but Quickstack manifests were used as the basis for the work
>> Dan did in OPNFV Project Bootstrap/Get started (Genesis):
>> https://gerrit.opnfv.org/gerrit/gitweb?p=genesis.git;a=blob;f=foreman/docs/src/release-notes.rst;hb=HEAD
> Interesting read, now if only I could find this ISO they mention in
> 4.3.1 Software deliverables
> 135 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 136 Foreman/QuickStack at OPNFV .iso file
> 137 deploy.sh - Automatically deploys Target OPNFV System to Bare Metal

It's at http://artifacts.opnfv.org/ I'll let Dan tell us which build is known good.

Cheers,
Alan

From ak at cloudssky.com Mon Apr 27 16:25:31 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Mon, 27 Apr 2015 18:25:31 +0200
Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
In-Reply-To: References: <648473255763364B961A02AC3BE1060D03D0B8F028@MX19A.corp.emc.com>
Message-ID: 

Now updated to RC2, on compute1 I'm getting "IncompatibleObjectVersion: Version 1.12 of Service is not supported":

[root at compute1 ~(keystone_admin)]# systemctl status openstack-nova-compute.service
openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: inactive (dead) since Mo 2015-04-27 12:14:56 EDT; 4min 7s ago
  Process: 15797 ExecStart=/usr/bin/nova-compute (code=exited, status=0/SUCCESS)
 Main PID: 15797 (code=exited, status=0/SUCCESS)

Apr 27 12:14:56 compute1 nova-compute[15797]: executor_callback))
Apr 27 12:14:56 compute1 nova-compute[15797]: File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch ..
Apr 27 12:14:56 compute1 nova-compute[15797]: IncompatibleObjectVersion: Version 1.12 of Service is not supported

And in the dashboard or via CLI the state of the compute host is down (as expected).

Thx,
-Arash

On Mon, Apr 27, 2015 at 3:53 PM, Alan Pevec wrote:
> 2015-04-27 15:14 GMT+02:00 Kaul, Yaniv :
> > Does http://trunk.rdoproject.org/centos70/latest-RDO-trunk-CI/delorean.repo
> > point to it or do I need to update my links?
>
> No, Delorean Trunk is building from master branches, which are now open
> for Liberty changes; the RC2 snap is built by the separate Delorean Kilo
> instance http://trunk.rdoproject.org/kilo/centos7/ building stable/kilo
> branches.
> I'll update the symlink there once I get confirmation from the rdo-management team.
> openstack-trunk/rc2 is a snapshot of
> http://trunk.rdoproject.org/kilo/centos7/93/b0/93b0d5fce3a41e4a3a549f98f78b6681cbc3ea95_2e85c529/
>
> Cheers,
> Alan
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From rbowen at redhat.com Mon Apr 27 17:35:42 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 27 Apr 2015 13:35:42 -0400 Subject: [Rdo-list] RDO/OpenStack meetups coming up (Monday, April 27, 2015) Message-ID: <553E736E.9020707@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Wednesday, April 29 in San Jose, CA, US: Come learn about OpenStack for Operators - http://www.meetup.com/Silicon-Valley-OpenStack-Ops-Meetup/events/221999533/ * Thursday, April 30 in Prague, CZ: OpenStack Howto part 2 - Identity - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/221143231/ * Thursday, April 30 in Wellington, NZ: Trove - Database as a Service - http://www.meetup.com/New-Zealand-OpenStack-User-Group/events/221737612/ * Thursday, April 30 in Mountain View, CA, US: Applications on OpenStack with Kubernetes, Cloud Foundry and Murano - http://www.meetup.com/Cloud-Platform-at-Symantec/events/221873079/ * Thursday, April 30 in Pittsburgh, PA, US: Let's Talk About Ceph - http://www.meetup.com/openstack-pittsburgh/events/221895192/ * Thursday, April 30 in Sydney, AU: Kickstarter - http://www.meetup.com/Sydney-OpenShift-Meetup/events/221412487/ * Thursday, April 30 in Littleton, CO, US: Discuss and Learn about OpenStack - http://www.meetup.com/OpenStack-Denver/events/221330889/ * Thursday, April 30 in Melbourne, AU: Canberra Openstack Meetup - http://www.meetup.com/Australian-OpenStack-User-Group/events/221180707/ * Thursday, April 30 in Taipei, TW: 4? Taipei.py ?? OpenStack/BlueMix/Softlayer - http://www.meetup.com/Taipei-py/events/222014347/ * Thursday, April 30 in Amsterdam, NL: Openstack&Ceph Spring Meetup - http://www.meetup.com/Openstack-Amsterdam/events/221966665/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Mon Apr 27 18:34:18 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 27 Apr 2015 14:34:18 -0400 Subject: [Rdo-list] Operators and Developers Message-ID: <553E812A.2090001@redhat.com> I had a very interesting conversation with a large-scale OpenStack user at ApacheCon a week ago, who was complaining about the disconnect between OpenStack developers and the actual OpenStack operators who deploy OpenStack in the real world. He felt that the OpenStack developers are out of touch with what it takes to run clouds in the real world, and are developing features that are academically interesting but practically useless. I mention this here because he promised to block out some time to sit down with me, and possibly some of you, in Vancouver, to discuss his concerns, and where he feels that the disconnects are. So I was wondering if any of you might be interested in sitting in on such a conversation, and seeing what we can learn. (And, yes, he's having this conversation in the upstream as well, not just with me. The organization in question is a large one, and has been a frequent sponsor of the OpenStack Design Summit.) 
--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From Arkady_Kanevsky at dell.com Mon Apr 27 18:47:11 2015
From: Arkady_Kanevsky at dell.com (Arkady_Kanevsky at dell.com)
Date: Mon, 27 Apr 2015 13:47:11 -0500
Subject: [Rdo-list] Operators and Developers
In-Reply-To: <553E812A.2090001@redhat.com>
References: <553E812A.2090001@redhat.com>
Message-ID: <336424C1A5A44044B29030055527AA7504B897F27C@AUSX7MCPS301.AMER.DELL.COM>

Dell - Internal Use - Confidential

There is a WG for OpenStack operators. It is very active and has met at multiple summits. In order to be even more responsive to operators and users, there is a new OpenStack Product WG that has a presentation on the first day of the summit.

To be fair, OpenStack projects are much more responsive to operator input now, and over the last 2 releases have worked much more rigorously on robustness and testing. Distributed CI and testing of all PRs on (cinder, neutron, and other) drivers as part of CI is a key example of it; neutron stability and the transition plan from nova-networking to neutron (not just deprecation) is another. The Red Hat and Dell work on HA and Ceph integration is yet another example, although we still need to integrate it into RDO. And there is much, much more in the pipeline.

I will definitely be interested in talking to this person. Encourage him/her to attend the Operator WG meetings. The more we understand the operational requirements of customers, the better we can make RDO fit their needs.

Thanks,
Arkady

-----Original Message-----
From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Rich Bowen
Sent: Monday, April 27, 2015 1:34 PM
To: rdo-list at redhat.com
Subject: [Rdo-list] Operators and Developers

I had a very interesting conversation with a large-scale OpenStack user at ApacheCon a week ago, who was complaining about the disconnect between OpenStack developers and the actual OpenStack operators who deploy OpenStack in the real world. He felt that the OpenStack developers are out of touch with what it takes to run clouds in the real world, and are developing features that are academically interesting but practically useless. I mention this here because he promised to block out some time to sit down with me, and possibly some of you, in Vancouver, to discuss his concerns, and where he feels that the disconnects are. So I was wondering if any of you might be interested in sitting in on such a conversation, and seeing what we can learn. (And, yes, he's having this conversation in the upstream as well, not just with me. The organization in question is a large one, and has been a frequent sponsor of the OpenStack Design Summit.)

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Tim.Bell at cern.ch Mon Apr 27 18:47:24 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Mon, 27 Apr 2015 18:47:24 +0000
Subject: [Rdo-list] Operators and Developers
In-Reply-To: <553E812A.2090001@redhat.com>
References: <553E812A.2090001@redhat.com>
Message-ID: 

There are scheduled sections of the design summit for operator/developer discussions. As a member of the OpenStack user committee, I'd be happy to find ways to improve the conversation if this is not sufficient.
With mutual understanding, we've had many discussions in the past which have then become part of the development roadmap.

Tim

On 4/27/15, 8:34 PM, "Rich Bowen" wrote:

>I had a very interesting conversation with a large-scale OpenStack user
>at ApacheCon a week ago, who was complaining about the disconnect
>between OpenStack developers and the actual OpenStack operators who
>deploy OpenStack in the real world.
>
>He felt that the OpenStack developers are out of touch with what it
>takes to run clouds in the real world, and are developing features that
>are academically interesting but practically useless.
>
>I mention this here because he promised to block out some time to sit
>down with me, and possibly some of you, in Vancouver, to discuss his
>concerns, and where he feels that the disconnects are. So I was
>wondering if any of you might be interested in sitting in on such a
>conversation, and seeing what we can learn.
>
>(And, yes, he's having this conversation in the upstream as well, not
>just with me. The organization in question is a large one, and has been
>a frequent sponsor of the OpenStack Design Summit.)
>
>--
>Rich Bowen - rbowen at redhat.com
>OpenStack Community Liaison
>http://rdoproject.org/
>
>_______________________________________________
>Rdo-list mailing list
>Rdo-list at redhat.com
>https://www.redhat.com/mailman/listinfo/rdo-list
>
>To unsubscribe: rdo-list-unsubscribe at redhat.com

From rbowen at redhat.com Mon Apr 27 18:51:17 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 27 Apr 2015 14:51:17 -0400
Subject: [Rdo-list] Operators and Developers
In-Reply-To: <553E812A.2090001@redhat.com>
References: <553E812A.2090001@redhat.com>
Message-ID: <553E8525.6090501@redhat.com>

On 04/27/2015 02:34 PM, Rich Bowen wrote:
> I had a very interesting conversation with a large-scale OpenStack user
> at ApacheCon a week ago, who was complaining about the disconnect
> between OpenStack developers and the actual OpenStack operators who
> deploy OpenStack in the real world.
>
> He felt that the OpenStack developers are out of touch with what it
> takes to run clouds in the real world, and are developing features that
> are academically interesting but practically useless.
>
> I mention this here because he promised to block out some time to sit
> down with me, and possibly some of you, in Vancouver, to discuss his
> concerns, and where he feels that the disconnects are. So I was
> wondering if any of you might be interested in sitting in on such a
> conversation, and seeing what we can learn.
>
> (And, yes, he's having this conversation in the upstream as well, not
> just with me. The organization in question is a large one, and has been
> a frequent sponsor of the OpenStack Design Summit.)

Thanks for all the responses, on-list and off-list. I will attempt to communicate all of this back to him, and ensure that he attends the relevant sessions, as well as having the conversation with those of us that are able to meet with him at summit.

Thanks.

--Rich

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rbowen at redhat.com Mon Apr 27 20:58:15 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 27 Apr 2015 16:58:15 -0400
Subject: [Rdo-list] RDO meetup at OpenStack Summit
Message-ID: <553EA2E7.9000002@redhat.com>

I am still working on getting a room for an RDO meetup at OpenStack Summit, but I wanted to go ahead and get this out there to start collecting an agenda of what people want to discuss and/or work on at that meetup.
I've put up an etherpad at https://etherpad.openstack.org/p/RDO_Vancouver where we can start collecting those ideas, and where I will post times/locations once I get them.

Thanks

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From hguemar at fedoraproject.org Mon Apr 27 23:40:17 2015
From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=)
Date: Tue, 28 Apr 2015 01:40:17 +0200
Subject: [Rdo-list] RDO meetup at OpenStack Summit
In-Reply-To: <553EA2E7.9000002@redhat.com>
References: <553EA2E7.9000002@redhat.com>
Message-ID: 

2015-04-27 22:58 GMT+02:00 Rich Bowen :
> I am still working on getting a room for an RDO meetup at OpenStack Summit,
> but I wanted to go ahead and get this out there to start collecting an agenda
> of what people want to discuss and/or work on at that meetup.
>
> I've put up an etherpad at https://etherpad.openstack.org/p/RDO_Vancouver
> where we can start collecting those ideas, and where I will post
> times/locations once I get them.

I would suggest notifying the CentOS devel list too.

H.

> Thanks
>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://rdoproject.org/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From itbrown at redhat.com Tue Apr 28 07:54:47 2015
From: itbrown at redhat.com (Itzik Brown)
Date: Tue, 28 Apr 2015 03:54:47 -0400 (EDT)
Subject: [Rdo-list] RDO build that passed CI (rc2)
In-Reply-To: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com>
Message-ID: <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com>

Hi,

I installed OpenStack Kilo (rc2) on RHEL 7.1 using RDO repositories. It's a distributed environment (controller and 2 compute nodes). The installation process itself finished without errors.

Issues:

1) Problem with Horizon - getting a permission denied error. There is an old bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1150678. I added a comment there.
Workaround - changing the ownership of /usr/share/openstack-dashboard/static/dashboard to apache:apache solves the issue.

2) openstack-nova-novncproxy service fails to start. There is a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1200701

3) When enabling LBaaS, neutron-lbaas-agent fails to start:
neutron-lbaas-agent: Error importing loadbalancer device driver: neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
There is a bug: https://bugs.launchpad.net/neutron/+bug/1441107/ A fix is in review for Kilo.
Workaround: in /etc/neutron/lbaas_agent.ini change:
device_driver = neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

Itzik

From bderzhavets at hotmail.com Tue Apr 28 12:22:39 2015
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Tue, 28 Apr 2015 08:22:39 -0400
Subject: [Rdo-list] RDO build that passed CI (rc2)
In-Reply-To: <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com>
References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com>, <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com>
Message-ID: 

> Date: Tue, 28 Apr 2015 03:54:47 -0400
> From: itbrown at redhat.com
> To: rdo-list at redhat.com
> Subject: [Rdo-list] RDO build that passed CI (rc2)
>
> Hi,
>
> I installed OpenStack Kilo (rc2) on RHEL 7.1 using RDO repositories.
> It's a distributed environment (controller and 2 compute nodes).
> The installation process itself finished without errors.
Are you able to launch a VM on the Compute Node ?

Thanks
Boris.

> Issues:
>
> 1) Problem with Horizon - getting a permission denied error.
> There is an old bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1150678.
> I added a comment there.
>
> Workaround - changing the ownership of /usr/share/openstack-dashboard/static/dashboard to
> apache:apache solves the issue.
>
> 2) openstack-nova-novncproxy service fails to start.
> There is a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1200701
>
> 3) When enabling LBaaS, neutron-lbaas-agent fails to start:
>
> neutron-lbaas-agent: Error importing loadbalancer device driver: neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
>
> There is a bug:
>
> https://bugs.launchpad.net/neutron/+bug/1441107/
>
> A fix is in review for Kilo.
>
> Workaround:
>
> In /etc/neutron/lbaas_agent.ini change:
>
> device_driver = neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
>
> Itzik
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rbowen at redhat.com Tue Apr 28 16:55:34 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 28 Apr 2015 12:55:34 -0400
Subject: [Rdo-list] RDO Test Day, tentative, May 5th, 6th?
Message-ID: <553FBB86.4010009@redhat.com>

Per conversation on IRC, we're thinking that May 5th/6th would be a good time for a test day, to run the Kilo release through its paces.

As usual, we'd need
* Documentation of how to do the usual packstack installation
* New documentation for using RDO-Manager (Hugh says that this will be ready)
* Suggested test cases

I've started a wiki page at http://rdoproject.org/RDO_test_day_Kilo like we usually do.

If there's any compelling reason *not* to pick these dates, please speak up ASAP. I'll be sending out wider announcements tomorrow if I don't hear any objections here.

Thanks.

--Rich

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From lars at redhat.com Tue Apr 28 17:29:59 2015
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Tue, 28 Apr 2015 13:29:59 -0400
Subject: [Rdo-list] Looking for install steps of Openstack (Kilo) on Centos 7
In-Reply-To: References: Message-ID: <20150428172959.GB4447@redhat.com>

On Tue, Apr 28, 2015 at 01:21:23PM -0400, Flavio Fernandes wrote:
> Mat (cc) is asking me for a recommended install procedure of Openstack Kilo on Centos7.
> Is there a good pointer on that? Is packstack still an available alternative?

This sort of question is best for the rdo-list mailing list, where you'll get more authoritative eyes on it. I am reasonably certain that packstack remains available for our Kilo release, but I haven't tried it myself.

There has been a variety of email recently on rdo-list concerning the Kilo release.

I have cc'd this to rdo-list.

--
Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 

From ak at cloudssky.com Tue Apr 28 18:41:21 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Tue, 28 Apr 2015 20:41:21 +0200
Subject: [Rdo-list] Looking for install steps of Openstack (Kilo) on Centos 7
In-Reply-To: <20150428172959.GB4447@redhat.com>
References: <20150428172959.GB4447@redhat.com>
Message-ID: 

Yes, we (Boris, Arif, and myself, with great help from Alan Pevec) tested the Delorean release and the latest Kilo RC2 on CentOS 7.1 last week. The result was that most things seem to work; only at the end we couldn't spawn an instance on AIO or multi-node environments. But I think these issues will get fixed in the next days.

In general I think Packstack is great for getting an OpenStack environment up and running very quickly, and we still enjoy RDO Havana in production for our development and testing of OpenCms and other apps.

By the way, it seems that your RDO-Manager is going to compete with Packstack, and that's great, since competition leads to innovation :-)

Thanks,
Arash

On Tue, Apr 28, 2015 at 7:29 PM, Lars Kellogg-Stedman wrote:
> On Tue, Apr 28, 2015 at 01:21:23PM -0400, Flavio Fernandes wrote:
> > Mat (cc) is asking me for a recommended install procedure of Openstack Kilo on Centos7.
> > Is there a good pointer on that? Is packstack still an available alternative?
>
> This sort of question is best for the rdo-list mailing list, where
> you'll get more authoritative eyes on it. I am reasonably certain
> that packstack remains available for our Kilo release, but I haven't
> tried it myself.
>
> There has been a variety of email recently on rdo-list concerning the
> Kilo release.
>
> I have cc'd this to rdo-list.
>
> --
> Lars Kellogg-Stedman | larsks @ {freenode,twitter,github}
> Cloud Engineering / OpenStack | http://blog.oddbit.com/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From christian at berendt.io Tue Apr 28 19:03:37 2015
From: christian at berendt.io (Christian Berendt)
Date: Tue, 28 Apr 2015 21:03:37 +0200
Subject: [Rdo-list] Looking for install steps of Openstack (Kilo) on Centos 7
In-Reply-To: References: <20150428172959.GB4447@redhat.com>
Message-ID: <553FD989.6020109@berendt.io>

On 04/28/2015 08:41 PM, Arash Kaffamanesh wrote:
> In general I think Packstack is great for getting an OpenStack environment up
> and running very quickly,

If you want to manually install OpenStack, you can have a look at the installation guide; it is a good way to get in touch with nearly every component. A preview version for the upcoming Kilo release is available at http://docs.openstack.org/draft/install-guide/install/yum/content/ and will be published at http://docs.openstack.org/kilo/install-guide/install/yum/content/ a few days after the release.

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

From ak at cloudssky.com Tue Apr 28 20:40:13 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Tue, 28 Apr 2015 22:40:13 +0200
Subject: [Rdo-list] RDO Test Day, tentative, May 5th, 6th?
In-Reply-To: <553FBB86.4010009@redhat.com>
References: <553FBB86.4010009@redhat.com>
Message-ID: 

I'd like to participate and provide the test results for a 3-node setup (controller, network, compute) on CentOS 7 and Fedora 21 (possibly with NFS storage), and contribute to the documentation.

Thanks,
Arash

On Tue, Apr 28, 2015 at 6:55 PM, Rich Bowen wrote:
> Per conversation on IRC, we're thinking that May 5th/6th would be a good
> time for a test day, to run the Kilo release through its paces.
>
> As usual, we'd need
> * Documentation of how to do the usual packstack installation
> * New documentation for using RDO-Manager (Hugh says that this will be ready)
> * Suggested test cases
>
> I've started a wiki page at http://rdoproject.org/RDO_test_day_Kilo like
> we usually do.
>
> If there's any compelling reason *not* to pick these dates, please speak
> up ASAP. I'll be sending out wider announcements tomorrow if I don't hear
> any objections here.
>
> Thanks.
>
> --Rich
>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison
> http://rdoproject.org/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lgomes at redhat.com Wed Apr 29 13:20:11 2015
From: lgomes at redhat.com (Lucas Alvares Gomes)
Date: Wed, 29 Apr 2015 09:20:11 -0400 (EDT)
Subject: [Rdo-list] Instack: Enabling local boot by default on the undercloud
In-Reply-To: <924836462.8605429.1430312784927.JavaMail.zimbra@redhat.com>
Message-ID: <1203510426.8617103.1430313611069.JavaMail.zimbra@redhat.com>

Hi,

We are about to merge a patch[1] which will enable local boot by default on the undercloud in instack. In order for that to work, the deploy ramdisk should be updated (it should contain this change[2]), so if you have an old deploy ramdisk that is being reused across deployments, please update it to take advantage of the local boot feature.

[1] https://review.gerrithub.io/#/c/230166
[2] https://github.com/rdo-management/diskimage-builder/commit/9fb2d14cf10184b22d00038973256a13114bfd17

Cheers,
Lucas

From dradez at redhat.com Wed Apr 29 14:09:52 2015
From: dradez at redhat.com (Dan Radez)
Date: Wed, 29 Apr 2015 10:09:52 -0400
Subject: [Rdo-list] Objective - Feasible ?
In-Reply-To: References: <553AB237.2020304@redhat.com> <2093944359.7795211.1429913850634.JavaMail.zimbra@redhat.com> <1381531955.7795357.1429913980462.JavaMail.zimbra@redhat.com>
Message-ID: <5540E630.3030704@redhat.com>

On 04/27/2015 12:21 PM, Alan Pevec wrote:
> 2015-04-26 1:29 GMT+02:00 Outback Dingo :
>>>>> Dan Radez and others had, I believe, done some work on PackStack
>>>>> integration of ODL in support of OPNFV.
>>>> I'll wait for Dan to follow up with regards to PackStack integration but
>>>> this may help in the meantime:
>>>> https://www.rdoproject.org/Helium_OpenDaylight_Juno_OpenStack
>>> This wiki should probably now redirect to the OPNFV BGS project.
>>> Not Packstack but Quickstack manifests were used as the basis for the work
>>> Dan did in OPNFV Project Bootstrap/Get started (Genesis):
>>> https://gerrit.opnfv.org/gerrit/gitweb?p=genesis.git;a=blob;f=foreman/docs/src/release-notes.rst;hb=HEAD
>> Interesting read, now if only I could find this ISO they mention in
>> 4.3.1 Software deliverables
>> 135 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> 136 Foreman/QuickStack at OPNFV .iso file
>> 137 deploy.sh - Automatically deploys Target OPNFV System to Bare Metal
>
> It's at http://artifacts.opnfv.org/ I'll let Dan tell us which build
> is known good.
>
> Cheers,
> Alan

Been on PTO, sorry for the delay in response.

The integration work I did was in quickstack, not packstack. This is an old commit, but it shows what was done: https://github.com/radez/astapor/commit/1603fa4008eed4a28efc626294b28856d089c52a

The OVS-specific stuff is being moved out to the ODL puppet module, so there isn't much to it. It's just setting the ML2 driver to ODL and throwing in a couple of params to tell ML2 where to find ODL. The deploy script is being written by Tim Rozet. I copied him too. We have gotten RDO and ODL to play nice with each other, and that deploy script is the right way to make it happen. OPNFV is working towards R1, and what is being attempted will become more stable as we get closer to R1.

Dan

From kchamart at redhat.com Wed Apr 29 15:26:24 2015
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Wed, 29 Apr 2015 17:26:24 +0200
Subject: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com>
Message-ID: <20150429152624.GF2764@tesla.redhat.com>

On Sun, Apr 26, 2015 at 12:52:36PM -0400, Boris Derzhavets wrote:
> Confirmed version as of 04/21/2015 (via delorean repo) was tested
> with an answer-file for a Two Node install, Controller&&Network and Compute.
> The answer-file sample was brought from a blog entry (works fine on Juno):
> http://bderzhavets.blogspot.com/2015/04/switching-to-ethx-interfaces-on-fedora.html
>
> Packstack completed OK. However, the "compute_nodes" table in the nova
> database appeared to be empty upon completion. The system doesn't see
> the Hypervisor Host either via Dashboard or via the nova CLI, as mentioned by
> Arash. So it cannot schedule an instance, no matter that `nova
> service-list` is OK and openstack-nova-compute is up and running on
> the Compute Node. I've also got the same "WARNING
> nova.compute.resource_tracker [-] No service record for host compute1"
> in /var/log/nova/nova-compute.log on the Compute node.

Boris, if you're able to reproduce this consistently, can you please file a Nova bug with clear reproducer details? Along with contextual log files w/ debug enabled.

--
/kashyap

From itzikb at redhat.com Wed Apr 29 15:43:38 2015
From: itzikb at redhat.com (Itzik Brown)
Date: Wed, 29 Apr 2015 18:43:38 +0300
Subject: [Rdo-list] RDO build that passed CI (rc2)
In-Reply-To: References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com>, <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com>
Message-ID: <5540FC2A.9050906@redhat.com>

Yes. In addition:

1) I don't see any problem now regarding the 'permission denied' error.
2) The openstack-nova-novncproxy service is running. For some reason I can't see the console of an instance with Firefox, but I can with Chrome.
3) neutron-lbaas-agent still fails to start.

Itzik

On 04/28/2015 03:22 PM, Boris Derzhavets wrote:
> Are you able to launch a VM on the Compute Node ?
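P.S. For the lbaas failure, the quickest way to see why the agent dies is probably something like this (a sketch; the unit name follows the RDO packaging mentioned earlier in this thread):

  systemctl status neutron-lbaas-agent -l
  journalctl -u neutron-lbaas-agent --since today
  grep device_driver /etc/neutron/lbaas_agent.ini

and then compare the device_driver line against the neutron_lbaas.* path from the workaround above.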
From apevec at gmail.com Wed Apr 29 16:29:02 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 29 Apr 2015 18:29:02 +0200
Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-04-29)
Message-ID: 

========================================
#rdo: RDO packaging meeting (2015-04-29)
========================================

Meeting started by apevec at 15:08:26 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-04-29/rdo.2015-04-29-15.08.log.html .

Meeting summary
---------------
* roll call (apevec, 15:08:39)
* test day (apevec, 15:10:18)
  * LINK: https://www.rdoproject.org/RDO_test_day_Kilo is the test day doc. I don't have a link to the rdo-manager docs yet, but it will be linked from that page soon. (rbowen, 15:12:47)
* RC2 bug triage (apevec, 15:16:35)
  * "compute_nodes" in nova database appeared to be empty up on completion (apevec, 15:16:37)
  * Horizon permission denied error, old bug: https://bugzilla.redhat.com/show_bug.cgi?id=1150678 => mrunge cannot reproduce (apevec, 15:21:39)
  * openstack-nova-novncproxy service fails to start: https://bugzilla.redhat.com/show_bug.cgi?id=1200701 => RDO Juno updated, Nova spec needs versioned dep (apevec, 15:31:40)
  * neutron-lbaas-agent fails to start (apevec, 15:33:38)
  * fixed in Neutron RC3 (apevec, 15:33:48)
* distrepos @ rdoinfo (apevec, 15:36:50)
  * LINK: https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml#L29 (apevec, 15:37:29)
  * ACTION: apevec to update distrepos in rdoinfo (apevec, 15:45:40)
* rdopkg reqcheck available to help with requirements.txt management in rdopkg-0.27 (apevec, 15:50:14)
* Packaging tests packages for openstack-components (apevec, 15:56:24)
  * ACTION: chandankumar will list all the test packages and package it (apevec, 15:58:39)
* selinux enforcing jobs enabled for kilo (apevec, 16:00:21)
* trystack is down atm w/ floating ip issues, public ci down (apevec, 16:01:20)
* juno el6 in CBS (apevec, 16:02:30)
  * Enabling cloud6-openstack-juno-candidate and cloud6-openstack-common-candidate CBS repositories on CentOS 6 (No EPEL) you will be able to "yum install openstack-nova openstack-ceilometer-*". (alphacc, 16:02:35)
  * ACTION: alphacc to post about EL6 neutron status on rdo-list, to gauge interest (apevec, 16:13:03)
* open floor (apevec, 16:13:51)

Meeting ended at 16:18:06 UTC.

Action Items
------------
* apevec to update distrepos in rdoinfo
* chandankumar will list all the test packages and package it
* alphacc to post about EL6 neutron status on rdo-list, to gauge interest

Action Items, by person
-----------------------
* alphacc
  * alphacc to post about EL6 neutron status on rdo-list, to gauge interest
* apevec
  * apevec to update distrepos in rdoinfo
* chandankumar
  * chandankumar will list all the test packages and package it
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* apevec (156)
* jruzicka (73)
* ndipanov (25)
* bauzas (17)
* jcoufal (16)
* rbowen (14)
* alphacc (14)
* weshay (9)
* number80 (8)
* kashyap (6)
* mmagr (6)
* chandankumar (5)
* itzikb (4)
* zodbot (3)
* social (2)
* derekh (2)
* DrBacchus (1)
* eggmaster (1)
* aortega (1)

Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot

From ak at cloudssky.com Wed Apr 29 17:11:12 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Wed, 29 Apr 2015 19:11:12 +0200
Subject: [Rdo-list] rdo-manager install problems (virtual environment)
Message-ID: 

Hi,

I tried to install rdo-manager (virtual environment) and was able to ssh into the instack vm and run:

[stack at instack ~]$ instack-install-undercloud

The installation aborts with:

........
Notice: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]: Triggered 'refresh' from 3 events
Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]/ensure: ensure changed 'stopped' to 'running'
Notice: Finished catalog run in 625.05 seconds
+ rc=6
+ set -e
+ echo 'puppet apply exited with exit code 6'
puppet apply exited with exit code 6
+ '[' 6 '!=' 2 -a 6 '!=' 0 ']'
+ exit 6
[stack at instack ~]$ [2015-04-29 16:43:38,404] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 6]
[2015-04-29 16:43:38,405] (os-refresh-config) [ERROR] Aborting...
^C

The red warnings and errors during the instack-install-undercloud run are attached.

Any ideas?

Thanks!
Arash

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screen Shot 2015-04-29 at 19.00.06.png
Type: image/png
Size: 280026 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screen Shot 2015-04-29 at 18.59.26.png
Type: image/png
Size: 59870 bytes
Desc: not available
URL: 

From jslagle at redhat.com Wed Apr 29 18:54:32 2015
From: jslagle at redhat.com (James Slagle)
Date: Wed, 29 Apr 2015 14:54:32 -0400
Subject: [Rdo-list] rdo-manager install problems (virtual environment)
In-Reply-To: References: Message-ID: <20150429185432.GR29586@teletran-1.redhat.com>

On Wed, Apr 29, 2015 at 07:11:12PM +0200, Arash Kaffamanesh wrote:
Also check the logs under /var/log/rabbitmq and: sudo systemctl status rabbitmq-server There were some problems reported earlier last week about rabbitmq failing to start and it seemed related to not having a fqdn hostname set. I haven't had a chance to dig into that deeper yet, but it could be related to what you're seeing here. > > Thanks! > Arash > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From bderzhavets at hotmail.com Wed Apr 29 22:04:31 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 29 Apr 2015 18:04:31 -0400 Subject: [Rdo-list] RE(4): RE(3): RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: <20150429152624.GF2764@tesla.redhat.com> References: , , , , , , <55396E31.4030603@arif-ali.co.uk>, <5539E107.1010701@redhat.com>, , , <20150429152624.GF2764@tesla.redhat.com> Message-ID: Arash, Please, create BZ for this issue. I don't have that environment in meantime. Thanks. Boris. > Date: Wed, 29 Apr 2015 17:26:24 +0200 > From: kchamart at redhat.com > To: bderzhavets at hotmail.com > CC: ak at cloudssky.com; apevec at gmail.com; rdo-list at redhat.com > Subject: Re: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core packages > > On Sun, Apr 26, 2015 at 12:52:36PM -0400, Boris Derzhavets wrote: > > Confirmed version as of 04/21/2015 ( via delorean repo ) was tested > > with answer-file for Two Node install Controller&&Network and Compute > > Answer-file sample was brought from blog entry ( works fine on Juno) > > http://bderzhavets.blogspot.com/2015/04/switching-to-ethx-interfaces-on-fedora.html > > > > Packstack completed OK . However, table "compute_nodes" in nova > > database appeared to be empty up on completion. System doesn't see > > Hypervisor Host either via Dashboard or via nova CLI as mentioned by > > Arash. So it cannot scheduler instance, no matter that `nova > > service-list` is OK and openstack-nova-compute is up and running on > > Compute Node. I've also I've got same "WARNING > > nova.compute.resource_tracker [-] No service record for host compute1" > > in /var/log/nova/nova-compute.log on Compute node. > > Boris, if you're able to reproduce this consistently, can you please > file a Nova bug with clear reproducer details? Along with contextual log > files w/ debug enabled. > > -- > /kashyap -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.poe at gmail.com Wed Apr 29 23:24:28 2015 From: steve.poe at gmail.com (Steve Poe) Date: Wed, 29 Apr 2015 16:24:28 -0700 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? Message-ID: My colleagues and I are looking at using OpenStack Juno internally using in various install methods (pre-packaged enterprise offering, manual via packages, etc. I picked RDO. I am responsible for evaluating quota usage. We'll have different development teams using our environment in one network. Each development team may have their own project where quotas can be adjusted when necessary. However, each project/tenant gets its own network. I wish I could associate multiple projects with one network. I was hoping to use projects in managing/implementing quotas (instances, cores, ram, storage, etc.) for the dev teams, but maybe I am approaching incorrectly. Any suggestions? Thanks. Steve -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mohammed.arafa at gmail.com Wed Apr 29 23:29:37 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 29 Apr 2015 19:29:37 -0400 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? In-Reply-To: References: Message-ID: Unless I am missing something that sounds like the perfect use case for quotas. Project Quotas are on virtual resources like cpu ram and storage On Apr 29, 2015 7:26 PM, "Steve Poe" wrote: > My colleagues and I are looking at using OpenStack Juno internally using > in various install methods (pre-packaged enterprise offering, manual via > packages, etc. I picked RDO. I am responsible for evaluating quota usage. > > We'll have different development teams using our environment in one > network. Each development team may have their own project where quotas can > be adjusted when necessary. However, each project/tenant gets its own > network. I wish I could associate multiple projects with one network. > > I was hoping to use projects in managing/implementing quotas (instances, > cores, ram, storage, etc.) for the dev teams, but maybe I am approaching > incorrectly. Any suggestions? > > Thanks. > > Steve > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.poe at gmail.com Wed Apr 29 23:46:28 2015 From: steve.poe at gmail.com (Steve Poe) Date: Wed, 29 Apr 2015 16:46:28 -0700 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? In-Reply-To: References: Message-ID: Mohammed, I think so too, but when I create a new projects (DevTeam_A and DevTeam B), I cannot figure it out how to create one network that both will use. I maybe smart enough to route myself into a corner (pun intended). When login as admin, go to Networking, and try associate a new project to the same network name that is shared. Steve On Wed, Apr 29, 2015 at 4:29 PM, Mohammed Arafa wrote: > Unless I am missing something that sounds like the perfect use case for > quotas. > Project Quotas are on virtual resources like cpu ram and storage > On Apr 29, 2015 7:26 PM, "Steve Poe" wrote: > >> My colleagues and I are looking at using OpenStack Juno internally using >> in various install methods (pre-packaged enterprise offering, manual via >> packages, etc. I picked RDO. I am responsible for evaluating quota usage. >> >> We'll have different development teams using our environment in one >> network. Each development team may have their own project where quotas can >> be adjusted when necessary. However, each project/tenant gets its own >> network. I wish I could associate multiple projects with one network. >> >> I was hoping to use projects in managing/implementing quotas (instances, >> cores, ram, storage, etc.) for the dev teams, but maybe I am approaching >> incorrectly. Any suggestions? >> >> Thanks. >> >> Steve >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Wed Apr 29 23:57:16 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 29 Apr 2015 19:57:16 -0400 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? 
In-Reply-To: References: Message-ID: that is correct: project a seperate. think of them as mini virtual datacenters if you like have you tried to set up a GRE tunnel between the two projects? i recommend you search the internet for a good blog post or walkthrough. other than https://www.rdoproject.org/Using_GRE_Tenant_Networks i unfortunately do not have one i can recommend .also note that you will be doing fundamental changes to your networking if you go the GRE way however, if you have 2 separate teams why not have them communicate via the public ip? On Wed, Apr 29, 2015 at 7:46 PM, Steve Poe wrote: > Mohammed, > > I think so too, but when I create a new projects (DevTeam_A and DevTeam > B), I cannot figure it out how to create one network that both will use. I > maybe smart enough to route myself into a corner (pun intended). When login > as admin, go to Networking, and try associate a new project to the same > network name that is shared. > > Steve > > On Wed, Apr 29, 2015 at 4:29 PM, Mohammed Arafa > wrote: > >> Unless I am missing something that sounds like the perfect use case for >> quotas. >> Project Quotas are on virtual resources like cpu ram and storage >> On Apr 29, 2015 7:26 PM, "Steve Poe" wrote: >> >>> My colleagues and I are looking at using OpenStack Juno internally using >>> in various install methods (pre-packaged enterprise offering, manual via >>> packages, etc. I picked RDO. I am responsible for evaluating quota usage. >>> >>> We'll have different development teams using our environment in one >>> network. Each development team may have their own project where quotas can >>> be adjusted when necessary. However, each project/tenant gets its own >>> network. I wish I could associate multiple projects with one network. >>> >>> I was hoping to use projects in managing/implementing quotas (instances, >>> cores, ram, storage, etc.) for the dev teams, but maybe I am approaching >>> incorrectly. Any suggestions? >>> >>> Thanks. >>> >>> Steve >>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Thu Apr 30 00:10:49 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 29 Apr 2015 17:10:49 -0700 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? In-Reply-To: References: Message-ID: <55417309.6060502@redhat.com> On 04/29/2015 04:24 PM, Steve Poe wrote: > My colleagues and I are looking at using OpenStack Juno internally > using in various install methods (pre-packaged enterprise offering, > manual via packages, etc. I picked RDO. I am responsible for evaluating > quota usage. > > We'll have different development teams using our environment in one > network. Each development team may have their own project where quotas > can be adjusted when necessary. However, each project/tenant gets its > own network. I wish I could associate multiple projects with one network. > > I was hoping to use projects in managing/implementing quotas > (instances, cores, ram, storage, etc.) for the dev teams, but maybe I > am approaching incorrectly. Any suggestions? > > Thanks. > > Steve > Have you tried creating a shared network? 
I haven't tried to share a network between projects, but this works for creating one network that multiple tenants can use: neutron net-create --shared my-network You would need to run that with admin credentials. Please let the list know if this works for you. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From steve.poe at gmail.com Thu Apr 30 00:11:12 2015 From: steve.poe at gmail.com (Steve Poe) Date: Wed, 29 Apr 2015 17:11:12 -0700 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? In-Reply-To: References: Message-ID: It's all internal where typical instances will associate to a static/floating IP. Again, the goal is to manage the resources so we don't have a default where everyone can launch 5 or 50 instances. I maybe over-thinking it; I hope it becomes easier than what my brain can handle at the moment. :-) Steve On Wed, Apr 29, 2015 at 4:57 PM, Mohammed Arafa wrote: > that is correct: project a seperate. think of them as mini virtual > datacenters if you like > > have you tried to set up a GRE tunnel between the two projects? i > recommend you search the internet for a good blog post or walkthrough. > other than https://www.rdoproject.org/Using_GRE_Tenant_Networks i > unfortunately do not have one i can recommend .also note that you will be > doing fundamental changes to your networking if you go the GRE way > > however, if you have 2 separate teams why not have them communicate via > the public ip? > > On Wed, Apr 29, 2015 at 7:46 PM, Steve Poe wrote: > >> Mohammed, >> >> I think so too, but when I create a new projects (DevTeam_A and DevTeam >> B), I cannot figure it out how to create one network that both will use. I >> maybe smart enough to route myself into a corner (pun intended). When login >> as admin, go to Networking, and try associate a new project to the same >> network name that is shared. >> >> Steve >> >> On Wed, Apr 29, 2015 at 4:29 PM, Mohammed Arafa > > wrote: >> >>> Unless I am missing something that sounds like the perfect use case for >>> quotas. >>> Project Quotas are on virtual resources like cpu ram and storage >>> On Apr 29, 2015 7:26 PM, "Steve Poe" wrote: >>> >>>> My colleagues and I are looking at using OpenStack Juno internally >>>> using in various install methods (pre-packaged enterprise offering, manual >>>> via packages, etc. I picked RDO. I am responsible for evaluating quota >>>> usage. >>>> >>>> We'll have different development teams using our environment in one >>>> network. Each development team may have their own project where quotas can >>>> be adjusted when necessary. However, each project/tenant gets its own >>>> network. I wish I could associate multiple projects with one network. >>>> >>>> I was hoping to use projects in managing/implementing quotas >>>> (instances, cores, ram, storage, etc.) for the dev teams, but maybe I am >>>> approaching incorrectly. Any suggestions? >>>> >>>> Thanks. >>>> >>>> Steve >>>> >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >> > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve.poe at gmail.com Thu Apr 30 00:18:44 2015 From: steve.poe at gmail.com (Steve Poe) Date: Wed, 29 Apr 2015 17:18:44 -0700 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? In-Reply-To: <55417309.6060502@redhat.com> References: <55417309.6060502@redhat.com> Message-ID: I'll try it again, but the shared network did not seem to help me. Maybe a little pseudo example of adding projects DevTeamA and DevTeamB to your "my-network" would help me? I understand you don't have the tenat-ids but let's assume id: 1234 for DevTeamA and tenant-id for 5678. Feel free to fill-in the pieces if my logic is off. Thanks. Steve On Wed, Apr 29, 2015 at 5:10 PM, Dan Sneddon wrote: > On 04/29/2015 04:24 PM, Steve Poe wrote: > > My colleagues and I are looking at using OpenStack Juno internally > > using in various install methods (pre-packaged enterprise offering, > > manual via packages, etc. I picked RDO. I am responsible for evaluating > > quota usage. > > > > We'll have different development teams using our environment in one > > network. Each development team may have their own project where quotas > > can be adjusted when necessary. However, each project/tenant gets its > > own network. I wish I could associate multiple projects with one network. > > > > I was hoping to use projects in managing/implementing quotas > > (instances, cores, ram, storage, etc.) for the dev teams, but maybe I > > am approaching incorrectly. Any suggestions? > > > > Thanks. > > > > Steve > > > > Have you tried creating a shared network? I haven't tried to share a > network between projects, but this works for creating one network that > multiple tenants can use: > > neutron net-create --shared my-network > > You would need to run that with admin credentials. Please let the list > know if this works for you. > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Thu Apr 30 00:20:56 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 29 Apr 2015 17:20:56 -0700 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? In-Reply-To: References: <55417309.6060502@redhat.com> Message-ID: <55417568.2010008@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 04/29/2015 05:18 PM, Steve Poe wrote: > I'll try it again, but the shared network did not seem to help > me. Maybe a little pseudo example of adding projects DevTeamA and > DevTeamB to your "my-network" would help me? I understand you > don't have the tenat-ids but let's assume id: 1234 for DevTeamA > and tenant-id for 5678. Feel free to fill-in the pieces if my > logic is off. > > Thanks. > > Steve > > > On Wed, Apr 29, 2015 at 5:10 PM, Dan Sneddon > wrote: > > On 04/29/2015 04:24 PM, Steve Poe wrote: >> My colleagues and I are looking at using OpenStack Juno >> internally using in various install methods (pre-packaged >> enterprise offering, manual via packages, etc. I picked RDO. I >> am responsible for > evaluating >> quota usage. >> >> We'll have different development teams using our environment in >> one network. Each development team may have their own project >> where > quotas >> can be adjusted when necessary. 
However, each project/tenant >> gets its own network. I wish I could associate multiple projects >> with one > network. >> >> I was hoping to use projects in managing/implementing quotas >> (instances, cores, ram, storage, etc.) for the dev teams, but >> maybe I am approaching incorrectly. Any suggestions? >> >> Thanks. >> >> Steve >> > > Have you tried creating a shared network? I haven't tried to share > a network between projects, but this works for creating one > network that multiple tenants can use: > > neutron net-create --shared my-network > > You would need to run that with admin credentials. Please let the > list know if this works for you. > > -- Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | > redhat.com/openstack 650.254.4025 > | dsneddon:irc @dxs:twitter > If the shared networks in Neutron work the way I think they do, then you don't have to associate the project with the network. A shared network should appear for all tenants and all projects. - -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBAgAGBQJVQXVoAAoJEFkV3ypsGNbjih0IAMfcNE4KqCE+RmxFKGOK88HT FLO6lS61pJTnEe9prTi3U5nX2NEeiSF65URJLr8MphfrmSMdsdjmkGELgC2kGjcD L/oB6aQ3aaQ85pQZ5JCEbNAds7oIXr49DEuPp8mpY7dkPQRZFHKlG8h1Rc6DrEPF KLxqrq6ds7PtiMHKQox882FhChHD6QSS4pQt1cGopdXTFliVcKp1fAvO5ngUOAyH Py/KGMF+lZdKvVQenqcISY0PiHn4zkBXob/eNfsTuNOO52sxRfRthq8eOGVQyw0a UiznRnxjbv07KIRX0klPY0yc41tN8zJ1aryrTxhJMjtwcKyS870gtqH8+SlOsc4= =RonP -----END PGP SIGNATURE----- From ayoung at redhat.com Thu Apr 30 00:26:28 2015 From: ayoung at redhat.com (Adam Young) Date: Wed, 29 Apr 2015 20:26:28 -0400 Subject: [Rdo-list] RDO Test Day, tentative, May 5th, 6th? In-Reply-To: <553FBB86.4010009@redhat.com> References: <553FBB86.4010009@redhat.com> Message-ID: <554176B4.7030606@redhat.com> On 04/28/2015 12:55 PM, Rich Bowen wrote: > Per conversation on IRC, we're thinking that May 5th/6th would be a > good time for a test day, to run the Kilo release through its paces. Sounds good. > > As usual, we'd need > * Documentation of how to do the usual packstack installation > * New documentation for using RDO-Manager (Hugh says that this will be > ready) Are there steps for trying this now? I'd rather not hit it for the first time on the test day. > * Suggested test cases I have set up an Ipsilon server. I'd like to have WebSSO/Federation as part of the test plan. Any objections? > > I've started a wiki page at http://rdoproject.org/RDO_test_day_Kilo > like we usually do. > > If there's any compelling reason *not* to pick these dates, please > speak up ASAP. I'll be sending out wider announcements tomorrow if I > don't hear any objections here. > > Thanks. > > --Rich > From steve.poe at gmail.com Thu Apr 30 00:41:37 2015 From: steve.poe at gmail.com (Steve Poe) Date: Wed, 29 Apr 2015 17:41:37 -0700 Subject: [Rdo-list] OpenStack Juno: best way to implement quotas? In-Reply-To: <55417568.2010008@redhat.com> References: <55417309.6060502@redhat.com> <55417568.2010008@redhat.com> Message-ID: In my situation, I had a network that was private. I have since made it shareable, It was not showing for DevTeam project. I log out and tried again, and now I can see my "shared" network. Now, I can actually test quotas. :-) Thanks! 
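In case it helps anyone else on the list, these are roughly the commands I
plan to test with (the tenant id placeholder and the numbers are made up,
adjust to taste):

  # look up the project/tenant id
  keystone tenant-list

  # set per-project compute quotas
  nova quota-update --instances 10 --cores 20 --ram 40960 <tenant-id>

  # verify what is now in effect
  nova quota-show --tenant <tenant-id>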
Steve On Wed, Apr 29, 2015 at 5:20 PM, Dan Sneddon wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 04/29/2015 05:18 PM, Steve Poe wrote: > > I'll try it again, but the shared network did not seem to help > > me. Maybe a little pseudo example of adding projects DevTeamA and > > DevTeamB to your "my-network" would help me? I understand you > > don't have the tenat-ids but let's assume id: 1234 for DevTeamA > > and tenant-id for 5678. Feel free to fill-in the pieces if my > > logic is off. > > > > Thanks. > > > > Steve > > > > > > On Wed, Apr 29, 2015 at 5:10 PM, Dan Sneddon > > wrote: > > > > On 04/29/2015 04:24 PM, Steve Poe wrote: > >> My colleagues and I are looking at using OpenStack Juno > >> internally using in various install methods (pre-packaged > >> enterprise offering, manual via packages, etc. I picked RDO. I > >> am responsible for > > evaluating > >> quota usage. > >> > >> We'll have different development teams using our environment in > >> one network. Each development team may have their own project > >> where > > quotas > >> can be adjusted when necessary. However, each project/tenant > >> gets its own network. I wish I could associate multiple projects > >> with one > > network. > >> > >> I was hoping to use projects in managing/implementing quotas > >> (instances, cores, ram, storage, etc.) for the dev teams, but > >> maybe I am approaching incorrectly. Any suggestions? > >> > >> Thanks. > >> > >> Steve > >> > > > > Have you tried creating a shared network? I haven't tried to share > > a network between projects, but this works for creating one > > network that multiple tenants can use: > > > > neutron net-create --shared my-network > > > > You would need to run that with admin credentials. Please let the > > list know if this works for you. > > > > -- Dan Sneddon | Principal OpenStack Engineer > > dsneddon at redhat.com | > > redhat.com/openstack 650.254.4025 > > | dsneddon:irc @dxs:twitter > > > > If the shared networks in Neutron work the way I think they do, then > you don't have to associate the project with the network. A shared > network should appear for all tenants and all projects. > > - -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQEcBAEBAgAGBQJVQXVoAAoJEFkV3ypsGNbjih0IAMfcNE4KqCE+RmxFKGOK88HT > FLO6lS61pJTnEe9prTi3U5nX2NEeiSF65URJLr8MphfrmSMdsdjmkGELgC2kGjcD > L/oB6aQ3aaQ85pQZ5JCEbNAds7oIXr49DEuPp8mpY7dkPQRZFHKlG8h1Rc6DrEPF > KLxqrq6ds7PtiMHKQox882FhChHD6QSS4pQt1cGopdXTFliVcKp1fAvO5ngUOAyH > Py/KGMF+lZdKvVQenqcISY0PiHn4zkBXob/eNfsTuNOO52sxRfRthq8eOGVQyw0a > UiznRnxjbv07KIRX0klPY0yc41tN8zJ1aryrTxhJMjtwcKyS870gtqH8+SlOsc4= > =RonP > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at progbau.de Thu Apr 30 07:06:09 2015 From: contact at progbau.de (Chris) Date: Thu, 30 Apr 2015 14:06:09 +0700 Subject: [Rdo-list] Instance auto resume after compute node restart Message-ID: <000001d08314$2540e960$6fc2bc20$@progbau.de> Hello, We want to have instances auto resume their status after a compute node reboot/failure. Means when the VM has the running state before it should be automatically started. We are using Icehouse. 
There is the option resume_guests_state_on_host_boot=true|false which should
do exactly what we want:

# Whether to start guests that were running before the host
# rebooted (boolean value)
resume_guests_state_on_host_boot=true

I tried it out and it just didn't work. Libvirt fails to start the VMs
because it cannot find the interfaces:

2015-04-30 06:16:00.783+0000: 3091: error : virNetDevGetMTU:343 : Cannot
get interface MTU on 'qbr62d7e489-f8': No such device
2015-04-30 06:16:00.897+0000: 3091: warning : qemuDomainObjStart:6144 :
Unable to restore from managed state
/var/lib/libvirt/qemu/save/instance-0000025f.save. Maybe the file is
corrupted?

I did some research and found similar experiences from other users:

"AFAIK at the present time OpenStack (Icehouse) still not completely aware
about environments inside it, so it can't restore completely after reboot."
Source:
http://stackoverflow.com/questions/23150148/how-to-get-instances-back-after-reboot-in-openstack

Is this feature really broken, or am I just missing something?

Thanks in advance!

Cheers
Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From contact at progbau.de  Thu Apr 30 08:49:24 2015
From: contact at progbau.de (Chris)
Date: Thu, 30 Apr 2015 15:49:24 +0700
Subject: [Rdo-list] neutron-openvswitch-agent without ping loss
Message-ID: <004b01d08322$90ff46c0$b2fdd440$@progbau.de>

Hello,

We made some changes on our compute nodes in "/etc/neutron/neutron.conf",
for example qpid_hostname, but nothing that affects the network
infrastructure on the compute node.

To apply the changes I think we need to restart the
"neutron-openvswitch-agent" service. Restarting this service disconnects
the VMs for around one ping; the reason is that the restart recreates the
int-br-bond0 and phy-br-bond0 interfaces:

ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10
-- --may-exist add-br br-int
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10
-- set-fail-mode br-int secure
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10
-- --if-exists del-port br-int patch-tun
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10
-- --if-exists del-port br-int int-br-bond0
kernel: [73873.047999] device int-br-bond0 left promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10
-- --if-exists del-port br-bond0 phy-br-bond0
kernel: [73873.086241] device phy-br-bond0 left promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10
-- --may-exist add-port br-int int-br-bond0
kernel: [73873.287466] device int-br-bond0 entered promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10
-- --may-exist add-port br-bond0 phy-br-bond0

Is there a way to apply these changes without losing pings?

Cheers
Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From chamambom at afri-com.net Thu Apr 30 07:44:49 2015 From: chamambom at afri-com.net (Chamambo Martin) Date: Thu, 30 Apr 2015 09:44:49 +0200 Subject: [Rdo-list] sourcing admin-openrc.sh gives me an error Message-ID: <003a01d08319$89760aa0$9c621fe0$@afri-com.net> http://docs.openstack.org/icehouse/install-guide/install/yum/content/keyston e-verify.html I have followed this document to check if my admin-openrc.sh file is configured correctly and everything is working as expected until I do this source admin-openrc.sh keystone user-list this command returns an error [root at controller ~]# keystone user-list WARNING:keystoneclient.httpclient:Failed to retrieve management_url from token What am I missing NB: All The above commands work if I manually input the following on the command line but not when I source the admin-openrc.sh file export OS_SERVICE_TOKEN=xxxxxxxxxxxxxxxxxxxxx export OS_SERVICE_ENDPOINT=http://xxxxxxx.ai.co.zw:35357/v2.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From chamambom at afri-com.net Thu Apr 30 10:03:08 2015 From: chamambom at afri-com.net (Chamambo Martin) Date: Thu, 30 Apr 2015 12:03:08 +0200 Subject: [Rdo-list] [Openstack] sourcing admin-openrc.sh gives me an error In-Reply-To: <003a01d08319$89760aa0$9c621fe0$@afri-com.net> References: <003a01d08319$89760aa0$9c621fe0$@afri-com.net> Message-ID: <005c01d0832c$dc0996c0$941cc440$@afri-com.net> And on the httpd error log im getting EndpointNotFound: publicURL endpoint for identity service not found From: Chamambo Martin [mailto:chamambom at afri-com.net] Sent: Thursday, April 30, 2015 9:45 AM To: openstack at lists.openstack.org; rdo-list at redhat.com Subject: [Openstack] sourcing admin-openrc.sh gives me an error http://docs.openstack.org/icehouse/install-guide/install/yum/content/keyston e-verify.html I have followed this document to check if my admin-openrc.sh file is configured correctly and everything is working as expected until I do this source admin-openrc.sh keystone user-list this command returns an error [root at controller ~]# keystone user-list WARNING:keystoneclient.httpclient:Failed to retrieve management_url from token What am I missing NB: All The above commands work if I manually input the following on the command line but not when I source the admin-openrc.sh file export OS_SERVICE_TOKEN=xxxxxxxxxxxxxxxxxxxxx export OS_SERVICE_ENDPOINT=http://xxxxxxx.ai.co.zw:35357/v2.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Thu Apr 30 12:15:35 2015 From: jslagle at redhat.com (James Slagle) Date: Thu, 30 Apr 2015 08:15:35 -0400 Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq Message-ID: <20150430121535.GS29586@teletran-1.redhat.com> Hi, there were a few different threads related to rdo-manager and encountering rabbitmq errors related to the hostname when installing the undercloud. I was able to reproduce a couple different issues, and in my environment it came down to $HOSTNAME not matching the defined FQDN hostname. You could use hostnamectl to set a hostname, but that does not update $HOSTNAME in your current shell. rabbitmq-env, which is sourced at the start of rabbitmq-server, reads the hostname from $HOSTNAME in some scenarios, and then uses that value to define the rabbit node name. Therefore, if you have a mismatch between $HOSTNAME and the actual FQDN, things can go off the rails with rabbitmq. 
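A quick way to check whether a box is in that state (a rough sketch;
instack.localdomain is just the name I'm using as an example):

  echo $HOSTNAME   # what the shell inherited at login
  hostname -f      # what the system reports right now

If the two disagree, set the FQDN, make sure it resolves (e.g. via
/etc/hosts), and log out and back in so the new value is picked up before
installing the undercloud:

  sudo hostnamectl set-hostname instack.localdomain
  echo "127.0.0.1 instack.localdomain instack" | sudo tee -a /etc/hosts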
I've tried to address this issue in this patch: https://review.gerrithub.io/#/c/232052/ For the virt-setup, setting of the FQDN hostname and adding it to /etc/hosts will now be done automatically, with the instack.localdomain used. The hostname can always be redefined later if desired. For baremetal, I've added some notes to the docs to hopefully cover the requirements. -- -- James Slagle -- From hbrock at redhat.com Thu Apr 30 13:03:07 2015 From: hbrock at redhat.com (Hugh O. Brock) Date: Thu, 30 Apr 2015 15:03:07 +0200 Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq In-Reply-To: <20150430121535.GS29586@teletran-1.redhat.com> References: <20150430121535.GS29586@teletran-1.redhat.com> Message-ID: <20150430130307.GG3816@redhat.com> On Thu, Apr 30, 2015 at 08:15:35AM -0400, James Slagle wrote: > Hi, there were a few different threads related to rdo-manager and encountering > rabbitmq errors related to the hostname when installing the undercloud. > > I was able to reproduce a couple different issues, and in my environment it > came down to $HOSTNAME not matching the defined FQDN hostname. You could > use hostnamectl to set a hostname, but that does not update $HOSTNAME in your > current shell. > > rabbitmq-env, which is sourced at the start of rabbitmq-server, reads the > hostname from $HOSTNAME in some scenarios, and then uses that value to define > the rabbit node name. Therefore, if you have a mismatch between $HOSTNAME and > the actual FQDN, things can go off the rails with rabbitmq. > > I've tried to address this issue in this patch: > https://review.gerrithub.io/#/c/232052/ > > For the virt-setup, setting of the FQDN hostname and adding it to /etc/hosts > will now be done automatically, with the instack.localdomain used. The hostname > can always be redefined later if desired. > > For baremetal, I've added some notes to the docs to hopefully cover the > requirements. > > -- > -- James Slagle > -- Good catch James. Hopefully this will unblock a number of folks who have been hitting this. --Hugh -- == Hugh Brock, hbrock at redhat.com == == Senior Engineering Manager, Cloud Engineering == == Tuskar: Elastic Scaling for OpenStack == == http://github.com/tuskar == "I know that you believe you understand what you think I said, but I?m not sure you realize that what you heard is not what I meant." 
--Robert McCloskey From mohammed.arafa at gmail.com Thu Apr 30 13:22:16 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 30 Apr 2015 09:22:16 -0400 Subject: [Rdo-list] [Openstack] sourcing admin-openrc.sh gives me an error In-Reply-To: <005c01d0832c$dc0996c0$941cc440$@afri-com.net> References: <003a01d08319$89760aa0$9c621fe0$@afri-com.net> <005c01d0832c$dc0996c0$941cc440$@afri-com.net> Message-ID: RDO's packstack automatically creates a file for your environment variables it is called keystonerc_admin pls use that instead On Thu, Apr 30, 2015 at 6:03 AM, Chamambo Martin wrote: > And on the httpd error log im getting > > > > EndpointNotFound: publicURL endpoint for identity service not found > > > > > > > > *From:* Chamambo Martin [mailto:chamambom at afri-com.net] > *Sent:* Thursday, April 30, 2015 9:45 AM > *To:* openstack at lists.openstack.org; rdo-list at redhat.com > *Subject:* [Openstack] sourcing admin-openrc.sh gives me an error > > > > > http://docs.openstack.org/icehouse/install-guide/install/yum/content/keystone-verify.html > > > > I have followed this document to check if my admin-openrc.sh file is configured correctly and everything is working as expected until I do this > > > > source admin-openrc.sh > > > > keystone user-list > > this command returns an error > > > > [root at controller ~]# keystone user-list > > > > WARNING:keystoneclient.httpclient:Failed to retrieve management_url from token > > > > What am I missing > > > > NB: All The above commands work if I manually input the following on the command line but not when I source the admin-openrc.sh file > > > > export OS_SERVICE_TOKEN=xxxxxxxxxxxxxxxxxxxxx > > > > export OS_SERVICE_ENDPOINT=http://xxxxxxx.ai.co.zw:35357/v2.0 > > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Thu Apr 30 13:23:54 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 30 Apr 2015 09:23:54 -0400 Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq In-Reply-To: <20150430130307.GG3816@redhat.com> References: <20150430121535.GS29586@teletran-1.redhat.com> <20150430130307.GG3816@redhat.com> Message-ID: James will the shortname also be added to the hosts file along with the fqdn? On Thu, Apr 30, 2015 at 9:03 AM, Hugh O. Brock wrote: > On Thu, Apr 30, 2015 at 08:15:35AM -0400, James Slagle wrote: > > Hi, there were a few different threads related to rdo-manager and > encountering > > rabbitmq errors related to the hostname when installing the undercloud. > > > > I was able to reproduce a couple different issues, and in my environment > it > > came down to $HOSTNAME not matching the defined FQDN hostname. You could > > use hostnamectl to set a hostname, but that does not update $HOSTNAME in > your > > current shell. > > > > rabbitmq-env, which is sourced at the start of rabbitmq-server, reads the > > hostname from $HOSTNAME in some scenarios, and then uses that value to > define > > the rabbit node name. Therefore, if you have a mismatch between > $HOSTNAME and > > the actual FQDN, things can go off the rails with rabbitmq. 
> > > > I've tried to address this issue in this patch: > > https://review.gerrithub.io/#/c/232052/ > > > > For the virt-setup, setting of the FQDN hostname and adding it to > /etc/hosts > > will now be done automatically, with the instack.localdomain used. The > hostname > > can always be redefined later if desired. > > > > For baremetal, I've added some notes to the docs to hopefully cover the > > requirements. > > > > -- > > -- James Slagle > > -- > > Good catch James. Hopefully this will unblock a number of folks who have > been hitting this. > > --Hugh > > -- > == Hugh Brock, hbrock at redhat.com == > == Senior Engineering Manager, Cloud Engineering == > == Tuskar: Elastic Scaling for OpenStack == > == http://github.com/tuskar == > > "I know that you believe you understand what you think I said, but I?m > not sure you realize that what you heard is not what I meant." > --Robert McCloskey > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Thu Apr 30 13:27:17 2015 From: jslagle at redhat.com (James Slagle) Date: Thu, 30 Apr 2015 09:27:17 -0400 Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq In-Reply-To: References: <20150430121535.GS29586@teletran-1.redhat.com> <20150430130307.GG3816@redhat.com> Message-ID: <20150430132717.GU29586@teletran-1.redhat.com> On Thu, Apr 30, 2015 at 09:23:54AM -0400, Mohammed Arafa wrote: > James > > will the shortname also be added to the hosts file along with the fqdn? We could. The patch I proposed does not however, it merely addresses the issues I was seeing. > > On Thu, Apr 30, 2015 at 9:03 AM, Hugh O. Brock wrote: > > > On Thu, Apr 30, 2015 at 08:15:35AM -0400, James Slagle wrote: > > > Hi, there were a few different threads related to rdo-manager and > > encountering > > > rabbitmq errors related to the hostname when installing the undercloud. > > > > > > I was able to reproduce a couple different issues, and in my environment > > it > > > came down to $HOSTNAME not matching the defined FQDN hostname. You could > > > use hostnamectl to set a hostname, but that does not update $HOSTNAME in > > your > > > current shell. > > > > > > rabbitmq-env, which is sourced at the start of rabbitmq-server, reads the > > > hostname from $HOSTNAME in some scenarios, and then uses that value to > > define > > > the rabbit node name. Therefore, if you have a mismatch between > > $HOSTNAME and > > > the actual FQDN, things can go off the rails with rabbitmq. > > > > > > I've tried to address this issue in this patch: > > > https://review.gerrithub.io/#/c/232052/ > > > > > > For the virt-setup, setting of the FQDN hostname and adding it to > > /etc/hosts > > > will now be done automatically, with the instack.localdomain used. The > > hostname > > > can always be redefined later if desired. > > > > > > For baremetal, I've added some notes to the docs to hopefully cover the > > > requirements. > > > > > > -- > > > -- James Slagle > > > -- > > > > Good catch James. Hopefully this will unblock a number of folks who have > > been hitting this. 
> > > > --Hugh > > > > -- > > == Hugh Brock, hbrock at redhat.com == > > == Senior Engineering Manager, Cloud Engineering == > > == Tuskar: Elastic Scaling for OpenStack == > > == http://github.com/tuskar == > > > > "I know that you believe you understand what you think I said, but I?m > > not sure you realize that what you heard is not what I meant." > > --Robert McCloskey > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * -- -- James Slagle -- From mohammed.arafa at gmail.com Thu Apr 30 13:33:42 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Thu, 30 Apr 2015 09:33:42 -0400 Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq In-Reply-To: <20150430132717.GU29586@teletran-1.redhat.com> References: <20150430121535.GS29586@teletran-1.redhat.com> <20150430130307.GG3816@redhat.com> <20150430132717.GU29586@teletran-1.redhat.com> Message-ID: yes so i had the fqdn only in my hosts file and rabbitmq couldnt find the host by its shortname after i added shortname to my hosts file i definitely passed over that bump (adn stopped elsewhere). there is a thread about it on this list ... somewhere :) On Thu, Apr 30, 2015 at 9:27 AM, James Slagle wrote: > On Thu, Apr 30, 2015 at 09:23:54AM -0400, Mohammed Arafa wrote: > > James > > > > will the shortname also be added to the hosts file along with the fqdn? > > We could. The patch I proposed does not however, it merely addresses the > issues > I was seeing. > > > > > On Thu, Apr 30, 2015 at 9:03 AM, Hugh O. Brock > wrote: > > > > > On Thu, Apr 30, 2015 at 08:15:35AM -0400, James Slagle wrote: > > > > Hi, there were a few different threads related to rdo-manager and > > > encountering > > > > rabbitmq errors related to the hostname when installing the > undercloud. > > > > > > > > I was able to reproduce a couple different issues, and in my > environment > > > it > > > > came down to $HOSTNAME not matching the defined FQDN hostname. You > could > > > > use hostnamectl to set a hostname, but that does not update > $HOSTNAME in > > > your > > > > current shell. > > > > > > > > rabbitmq-env, which is sourced at the start of rabbitmq-server, > reads the > > > > hostname from $HOSTNAME in some scenarios, and then uses that value > to > > > define > > > > the rabbit node name. Therefore, if you have a mismatch between > > > $HOSTNAME and > > > > the actual FQDN, things can go off the rails with rabbitmq. > > > > > > > > I've tried to address this issue in this patch: > > > > https://review.gerrithub.io/#/c/232052/ > > > > > > > > For the virt-setup, setting of the FQDN hostname and adding it to > > > /etc/hosts > > > > will now be done automatically, with the instack.localdomain used. > The > > > hostname > > > > can always be redefined later if desired. > > > > > > > > For baremetal, I've added some notes to the docs to hopefully cover > the > > > > requirements. > > > > > > > > -- > > > > -- James Slagle > > > > -- > > > > > > Good catch James. Hopefully this will unblock a number of folks who > have > > > been hitting this. 
> > > > > > --Hugh > > > > > > -- > > > == Hugh Brock, hbrock at redhat.com == > > > == Senior Engineering Manager, Cloud Engineering == > > > == Tuskar: Elastic Scaling for OpenStack == > > > == http://github.com/tuskar == > > > > > > "I know that you believe you understand what you think I said, but I?m > > > not sure you realize that what you heard is not what I meant." > > > --Robert McCloskey > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > -- > > > > > > < > https://candidate.peoplecert.org/ReportsLink.aspx?argType=1&id=13D642E995903C076FA394F816CC136539DBA6A32D7305539E4219F5A650358C02CA2ED9F1F26319&AspxAutoDetectCookieSupport=1 > > > > > > *805010942448935* > > < > https://www.redhat.com/wapps/training/certification/verify.html?certNumber=805010942448935&verify=Verify > > > > > > *GR750055912MA* > > < > https://candidate.peoplecert.org/ReportsLink.aspx?argType=1&id=13D642E995903C076FA394F816CC136539DBA6A32D7305539E4219F5A650358C02CA2ED9F1F26319&AspxAutoDetectCookieSupport=1 > > > > > > *Link to me on LinkedIn * > -- > -- James Slagle > -- > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Thu Apr 30 16:22:04 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 30 Apr 2015 12:22:04 -0400 Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq In-Reply-To: <20150430121535.GS29586@teletran-1.redhat.com> References: <20150430121535.GS29586@teletran-1.redhat.com> Message-ID: <20150430162204.GD14827@redhat.com> On Thu, Apr 30, 2015 at 08:15:35AM -0400, James Slagle wrote: > For baremetal, I've added some notes to the docs to hopefully cover the > requirements. I'm not familiar with AMQP. Is there a particular reason why rabbitmq is so sensitive to hostname issues? I was surprised to run into this with a packstack install the other day, especially given that packstack configures everything uses ip addresses rather than names. rabbitmq seems to perform some sort of reverse-lookup of the hostname, and there was a stale dns entry out there that it was picking up. This wouldn't have caused a problem for anything else (because the entire configuration is based on ip addresses rather than hostnames), but it prevented rabbitmq from starting up correctly. Is there any chance that we should also be pursuing this as a bug in rabbitmq? Or is there a configuration we can tweak in rabbitmq that will make it less sensitive to hostname issues? -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:

From jeckersb at redhat.com  Thu Apr 30 16:33:12 2015
From: jeckersb at redhat.com (John Eckersberg)
Date: Thu, 30 Apr 2015 12:33:12 -0400
Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq
In-Reply-To: <20150430162204.GD14827@redhat.com>
References: <20150430121535.GS29586@teletran-1.redhat.com>
	<20150430162204.GD14827@redhat.com>
Message-ID: <87618d8tkn.fsf@redhat.com>

Lars Kellogg-Stedman writes:
> On Thu, Apr 30, 2015 at 08:15:35AM -0400, James Slagle wrote:
>> For baremetal, I've added some notes to the docs to hopefully cover the
>> requirements.
>
> I'm not familiar with AMQP. Is there a particular reason why rabbitmq
> is so sensitive to hostname issues? I was surprised to run into this
> with a packstack install the other day, especially given that
> packstack configures everything uses ip addresses rather than names.
>
> rabbitmq seems to perform some sort of reverse-lookup of the hostname,
> and there was a stale dns entry out there that it was picking up.
> This wouldn't have caused a problem for anything else (because the
> entire configuration is based on ip addresses rather than hostnames),
> but it prevented rabbitmq from starting up correctly.
>
> Is there any chance that we should also be pursuing this as a bug in
> rabbitmq? Or is there a configuration we can tweak in rabbitmq that
> will make it less sensitive to hostname issues?

It's not so much a RabbitMQ issue, more so an issue with distributed
Erlang. The platform itself gets confused while trying to cluster itself.
None of those bits are RabbitMQ-proper.

eck

From jslagle at redhat.com  Thu Apr 30 17:15:49 2015
From: jslagle at redhat.com (James Slagle)
Date: Thu, 30 Apr 2015 13:15:49 -0400
Subject: [Rdo-list] rdo-manager hostname issues with rabbitmq
In-Reply-To: <20150430162204.GD14827@redhat.com>
References: <20150430121535.GS29586@teletran-1.redhat.com>
	<20150430162204.GD14827@redhat.com>
Message-ID: <20150430171549.GV29586@teletran-1.redhat.com>

On Thu, Apr 30, 2015 at 12:22:04PM -0400, Lars Kellogg-Stedman wrote:
> On Thu, Apr 30, 2015 at 08:15:35AM -0400, James Slagle wrote:
> > For baremetal, I've added some notes to the docs to hopefully cover the
> > requirements.
>
> I'm not familiar with AMQP. Is there a particular reason why rabbitmq
> is so sensitive to hostname issues? I was surprised to run into this
> with a packstack install the other day, especially given that
> packstack configures everything uses ip addresses rather than names.
>
> rabbitmq seems to perform some sort of reverse-lookup of the hostname,
> and there was a stale dns entry out there that it was picking up.
> This wouldn't have caused a problem for anything else (because the
> entire configuration is based on ip addresses rather than hostnames),
> but it prevented rabbitmq from starting up correctly.
>
> Is there any chance that we should also be pursuing this as a bug in
> rabbitmq? Or is there a configuration we can tweak in rabbitmq that
> will make it less sensitive to hostname issues?

Possibly. For this particular case anyway, it took me a little while to
reproduce the issue. Eventually I realized that if I set a hostname, then
logged out and then back in prior to installing the undercloud, I could
never reproduce the issue. That led me to believe it was
environment-related, and further, caused by $HOSTNAME.
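In other words, the failing sequence boils down to something like this
(a sketch, using instack.localdomain as an example):

  sudo hostnamectl set-hostname instack.localdomain
  hostname -f      # -> instack.localdomain
  echo $HOSTNAME   # -> still the old name in the current shell

Install the undercloud at that point and rabbitmq ends up with two
different ideas of the node name; start from a fresh login and the problem
goes away.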
grep'ing through the rabbitmq-server code, I saw that:
https://github.com/rabbitmq/rabbitmq-server/blob/master/scripts/rabbitmq-env
uses the $HOSTNAME value.

In my earlier email I said that rabbitmq-server sources rabbitmq-env (which
is true), but the actual problem I think is that rabbitmqctl also sources
rabbitmq-env, and puppetlabs-rabbitmq calls rabbitmqctl all over the place.

So, honestly, this could be a bug in rabbitmq-env; I'm not sure what the
accepted expectations are around $HOSTNAME, if there even are any. Note
though that rabbitmq-env also has its *own* configuration file where a
hostname could be set as well.

--
-- James Slagle
--

From jslagle at redhat.com  Thu Apr 30 17:55:43 2015
From: jslagle at redhat.com (James Slagle)
Date: Thu, 30 Apr 2015 13:55:43 -0400
Subject: [Rdo-list] delorean latest-RDO-kilo-CI symlink update needed
Message-ID: <20150430175543.GW29586@teletran-1.redhat.com>

Can we update the delorean latest-RDO-kilo-CI symlink to a later repo?
rdo-manager is trying to move forward with testing the latest
openstack-puppet-modules, and we need a newer version that is available in
a later repo.

It looks like the latest repo that passed rdo-manager CI is:
http://trunk.rdoproject.org/kilo/centos7/b0/44/b0447ed8e7bee371bf7095c86e47d717abe89edc_52563694/

That has the updated version of opm we'd like to test. I'm not sure what
other packstack-related CI jobs need to be checked before updating the
symlink.

--
-- James Slagle
--

From whayutin at redhat.com  Thu Apr 30 17:57:36 2015
From: whayutin at redhat.com (whayutin)
Date: Thu, 30 Apr 2015 13:57:36 -0400
Subject: [Rdo-list] delorean latest-RDO-kilo-CI symlink update needed
In-Reply-To: <20150430175543.GW29586@teletran-1.redhat.com>
References: <20150430175543.GW29586@teletran-1.redhat.com>
Message-ID: <1430416656.2663.37.camel@redhat.com>

On Thu, 2015-04-30 at 13:55 -0400, James Slagle wrote:
> Can we update the delorean latest-RDO-kilo-CI symlink to a later repo?
> rdo-manager is trying to move forward with testing the latest
> openstack-puppet-modules, and we need a newer version that is available in a
> later repo.
>
> It looks like the latest repo that passed rdo-manager CI is:
> http://trunk.rdoproject.org/kilo/centos7/b0/44/b0447ed8e7bee371bf7095c86e47d717abe89edc_52563694/
>
> That has the updated version of opm we'd like to test. I'm not sure what other
> packstack related CI jobs need to be checked before updating the symlink.
> > -- > -- James Slagle > -- Even better, let's get the promotion script on the rdo delorean server please :) > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ak at cloudssky.com Thu Apr 30 18:18:43 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Thu, 30 Apr 2015 20:18:43 +0200 Subject: [Rdo-list] RE(4): RE(3): RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com> <20150429152624.GF2764@tesla.redhat.com> Message-ID: Unfortunately I trashed my first RDO rc1 install in favorite of rdo-manager and now tried to setup packstack with the rc2 version and now nova compute doesn't start on compute node (2 bare-metal node install) and keeps in activating state: And now I'll try the latest CI past repo which was sent by James some minutes ago: http://trunk.rdoproject.org/kilo/centos7/b0/44/b0447ed8e7bee371bf7095c86e47d717abe89edc_52563694/ openstack-nova-compute.service - OpenStack Nova Compute Server Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; disabled) Active: activating (start) since Do 2015-04-30 14:02:27 EDT; 1min 13s ago Main PID: 22849 (nova-compute) CGroup: /system.slice/openstack-nova-compute.service ??22849 /usr/bin/python /usr/bin/nova-compute Apr 30 14:02:27 csky06.csg.net systemd[1]: Starting OpenStack Nova Compute Server... [root at csky06 ~]# systemctl status openstack-nova-compute.service openstack-nova-compute.service - OpenStack Nova Compute Server Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; disabled) Active: activating (start) since Do 2015-04-30 14:11:28 EDT; 38s ago Main PID: 22900 (nova-compute) CGroup: /system.slice/openstack-nova-compute.service ??22900 /usr/bin/python /usr/bin/nova-compute cat /var/tmp/packstack/20150430-133907-v7rDEH/manifests/20.0.0.12_nova.pp.log ... ESC[1;31mError: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details. Wrapped exception: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.ESC[0m ESC[1;31mError: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]/ensure: change from stopped to running failed: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. 
See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.ESC[0m ESC[mNotice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]: Triggered 'refresh' from 2 eventsESC[0m ESC[mNotice: /Stage[main]/Main/File_line[libvirt-guests]/ensure: createdESC [0m ESC[mNotice: /Stage[main]/Nova/Exec[networking-refresh]: Dependency Service[nova-compute] has failures: trueESC[0m ESC[1;31mWarning: /Stage[main]/Nova/Exec[networking-refresh]: Skipping because of failed dependenciesESC[0m ESC[mNotice: /Stage[main]/Main/Exec[load_kvm]: Dependency Service[nova-compute] has failures: trueESC[0m ESC[1;31mWarning: /Stage[main]/Main/Exec[load_kvm]: Skipping because of failed dependenciesESC[0m ESC[mNotice: Finished catalog run in 279.58 secondsESC[0m Thanks, -Arash On Thu, Apr 30, 2015 at 12:04 AM, Boris Derzhavets wrote: > Arash, > > Please, create BZ for this issue. I don't have that environment in > meantime. > > Thanks. > Boris. > > > Date: Wed, 29 Apr 2015 17:26:24 +0200 > > From: kchamart at redhat.com > > To: bderzhavets at hotmail.com > > CC: ak at cloudssky.com; apevec at gmail.com; rdo-list at redhat.com > > Subject: Re: [Rdo-list] RE(3): RE(2) : RDO Kilo RC snapshot - core > packages > > > > On Sun, Apr 26, 2015 at 12:52:36PM -0400, Boris Derzhavets wrote: > > > Confirmed version as of 04/21/2015 ( via delorean repo ) was tested > > > with answer-file for Two Node install Controller&&Network and Compute > > > Answer-file sample was brought from blog entry ( works fine on Juno) > > > > http://bderzhavets.blogspot.com/2015/04/switching-to-ethx-interfaces-on-fedora.html > > > > > > Packstack completed OK . However, table "compute_nodes" in nova > > > database appeared to be empty up on completion. System doesn't see > > > Hypervisor Host either via Dashboard or via nova CLI as mentioned by > > > Arash. So it cannot scheduler instance, no matter that `nova > > > service-list` is OK and openstack-nova-compute is up and running on > > > Compute Node. I've also I've got same "WARNING > > > nova.compute.resource_tracker [-] No service record for host compute1" > > > in /var/log/nova/nova-compute.log on Compute node. > > > > Boris, if you're able to reproduce this consistently, can you please > > file a Nova bug with clear reproducer details? Along with contextual log > > files w/ debug enabled. > > > > -- > > /kashyap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Thu Apr 30 18:50:01 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 30 Apr 2015 14:50:01 -0400 Subject: [Rdo-list] FW: RDO build that passed CI (rc2) In-Reply-To: <5540FC2A.9050906@redhat.com> References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com>, <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com> , <5540FC2A.9050906@redhat.com> Message-ID: Arash, Please, be aware of this response, obtained yesterday. Thanks. Boris. > Date: Wed, 29 Apr 2015 18:43:38 +0300 > From: itzikb at redhat.com > To: bderzhavets at hotmail.com; rdo-list at redhat.com > Subject: Re: [Rdo-list] RDO build that passed CI (rc2) > > Yes.<=== > > In addition: > > 1)I don't see any problem now regarding the 'permission denied error'. > > 2) openstack-nova-novncproxy service is running. > For some reason I can't see the console of an instance with Firefox > but I can with Chrome. > > 3) neutron-lbaas-agent still fails to start. 
> > Itzik > On 04/28/2015 03:22 PM, Boris Derzhavets wrote: > > Are you able to launch VM on Compute Node ? <=== > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ak at cloudssky.com Thu Apr 30 19:06:12 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Thu, 30 Apr 2015 21:06:12 +0200 Subject: [Rdo-list] RE(4): RE(3): RE(2) : RDO Kilo RC snapshot - core packages In-Reply-To: References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com> <20150429152624.GF2764@tesla.redhat.com> Message-ID: Updating to latest repo didn't help either for packstack: (rebooted the compute node, changed the repo to the latest and re-ran packstack) [root at csky06 ~]# systemctl status openstack-nova-compute.service -l openstack-nova-compute.service - OpenStack Nova Compute Server Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; disabled) Active: *failed* (Result: exit-code) since Do 2015-04-30 14:40:33 EDT; 2min 24s ago Process: 4569 ExecStart=/usr/bin/nova-compute *(code=exited, status=1/FAILURE)* Main PID: 4569 (code=exited, status=1/FAILURE) CGroup: /system.slice/openstack-nova-compute.service Apr 30 14:40:33 csky06.csg.net systemd[1]: Started OpenStack Nova Compute Server. Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: Traceback (most recent call last): Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: File "/usr/bin/nova-compute", line 6, in Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: from nova.cmd.compute import main Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: File "/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 22, in Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: from oslo_config import cfg Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: *ImportError: No module named oslo_config* Apr 30 14:40:33 csky06.csg.net systemd[1]: *openstack-nova-compute.service: main process exited, code=exited, status=1/FAILURE* Apr 30 14:40:33 csky06.csg.net systemd[1]: *Unit openstack-nova-compute.service entered failed state.* On Thu, Apr 30, 2015 at 8:18 PM, Arash Kaffamanesh wrote: > Unfortunately I trashed my first RDO rc1 install in favorite of > rdo-manager and > now tried to setup packstack with the rc2 version and now nova compute > doesn't > start on compute node (2 bare-metal node install) and keeps in activating > state: > > And now I'll try the latest CI past repo which was sent by James some > minutes ago: > > > http://trunk.rdoproject.org/kilo/centos7/b0/44/b0447ed8e7bee371bf7095c86e47d717abe89edc_52563694/ > > openstack-nova-compute.service - OpenStack Nova Compute Server > > Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; > disabled) > > Active: activating (start) since Do 2015-04-30 14:02:27 EDT; 1min 13s > ago > > Main PID: 22849 (nova-compute) > > CGroup: /system.slice/openstack-nova-compute.service > > ??22849 /usr/bin/python /usr/bin/nova-compute > > > Apr 30 14:02:27 csky06.csg.net systemd[1]: Starting OpenStack Nova > Compute Server... > > [root at csky06 ~]# systemctl status openstack-nova-compute.service > > openstack-nova-compute.service - OpenStack Nova Compute Server > > Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; > disabled) > > Active: activating (start) since Do 2015-04-30 14:11:28 EDT; 38s ago > > Main PID: 22900 (nova-compute) > > CGroup: /system.slice/openstack-nova-compute.service > > ??22900 /usr/bin/python /usr/bin/nova-compute > > > cat > /var/tmp/packstack/20150430-133907-v7rDEH/manifests/20.0.0.12_nova.pp.log > > ... 
Cheers,
Alan

From apevec at gmail.com Thu Apr 30 22:03:59 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 1 May 2015 00:03:59 +0200
Subject: [Rdo-list] RE(4): RE(3): RE(2) : RDO Kilo RC snapshot - core packages
In-Reply-To: 
References: <55396E31.4030603@arif-ali.co.uk> <5539E107.1010701@redhat.com>
 <20150429152624.GF2764@tesla.redhat.com>
Message-ID: 

> Apr 30 14:40:33 csky06.csg.net nova-compute[4569]: ImportError: No module
> named oslo_config

That would mean python-oslo-config is not installed, or is the old (Juno)
version. What does rpm -q python-oslo-config return? Was this an upgrade
or a clean install?
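For reference, something along these lines should tell us; the reinstall
step is only a suggestion, adjust it to whatever repos you have enabled:

rpm -q python-oslo-config        # "not installed" or a 1.4.x (Juno) build would explain it
python -c 'import oslo_config'   # must succeed for Kilo nova-compute
yum -y install python-oslo-config
yum info python-oslo-config | grep -i -e '^Version' -e '^From repo'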
Cheers,
Alan

From ak at cloudssky.com Thu Apr 30 22:12:20 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Fri, 1 May 2015 00:12:20 +0200
Subject: [Rdo-list] FW: RDO build that passed CI (rc2)
In-Reply-To: 
References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com>
 <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com>
 <5540FC2A.9050906@redhat.com>
Message-ID: 

But if I yum update it into 7.1, then we have the issue with nmcli:

Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
Force execution using --nocheck, but the results are unpredictable.

Or do I misunderstand the whole thing? :-)

Thanks!
Arash

On Thu, Apr 30, 2015 at 11:59 PM, Alan Pevec wrote:

> > My CentOS version is:
> > CentOS Linux release 7.0.1406 (Core)
>
> It should turn into 7.1 after you run yum update (Step 1 in the
> Quickstart). With CentOS you can't really "stay" on 7.0 unless you're
> using an out-of-date mirror.
>
> Cheers,
> Alan

From apevec at gmail.com Thu Apr 30 22:23:20 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 1 May 2015 00:23:20 +0200
Subject: [Rdo-list] FW: RDO build that passed CI (rc2)
In-Reply-To: 
References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com>
 <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com>
 <5540FC2A.9050906@redhat.com>
Message-ID: 

2015-05-01 0:12 GMT+02:00 Arash Kaffamanesh :
> But if I yum update it into 7.1, then we have the issue with nmcli:
>
> Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
> Force execution using --nocheck, but the results are unpredictable.

Huh, again?! I thought that was solved after you did yum update...
My original answer to that is still the same: "Not sure how that could
happen, nmcli is part of the NetworkManager RPM."
Can you reproduce this without RDO in the picture, starting with a clean
CentOS installation? How are you installing CentOS?
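To pin down where the two versions come from, something like this would
help (paths assumed from the stock CentOS 7 layout):

rpm -q NetworkManager             # more than one line here means duplicate packages
rpm -qf /usr/bin/nmcli            # which package owns the 1.0.0 nmcli
rpm -qf /usr/sbin/NetworkManager  # which package owns the 0.9.9.1 daemon
systemctl status NetworkManager   # is the old daemon still the one running?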
Cheers,
Alan

From ak at cloudssky.com Thu Apr 30 23:01:28 2015
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Fri, 1 May 2015 01:01:28 +0200
Subject: [Rdo-list] FW: RDO build that passed CI (rc2)
In-Reply-To: 
References: <193920460.7761474.1430206752811.JavaMail.zimbra@redhat.com>
 <428601341.7768298.1430207687694.JavaMail.zimbra@redhat.com>
 <5540FC2A.9050906@redhat.com>
Message-ID: 

I did a fresh CentOS install with the following steps for AIO:

yum -y update
cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
yum install epel-release
cd /etc/yum.repos.d/
curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2/delorean-kilo.repo
yum install openstack-packstack
setenforce 0
packstack --allinone

and got again:

Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
Force execution using --nocheck, but the results are unpredictable.

But if I don't do a yum update and install AIO, it finishes successfully
and I can yum update afterwards. So if nobody can reproduce this issue,
then something is wrong with my base CentOS install; I'll try to install
the latest CentOS from ISO now.
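Before reinstalling, I'll also check for leftover duplicates from the
interrupted update (this assumes yum-utils is available; just an idea, not
a confirmed fix):

yum -y install yum-utils
package-cleanup --dupes          # duplicate NetworkManager packages would explain the mismatch
# package-cleanup --cleandupes   # would remove the older copies (use with care)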
Thanks!
Arash

On Fri, May 1, 2015 at 12:42 AM, Arash Kaffamanesh wrote:

> I'm installing CentOS with cobbler and kickstart (from centos7-mini) on 2
> machines and I'm trying a 2-node install. With rc1 it worked without yum
> update. I'll do a fresh install now with yum update and let you know.
>
> Thanks!
> Arash
>
> On Fri, May 1, 2015 at 12:23 AM, Alan Pevec wrote:
>
>> 2015-05-01 0:12 GMT+02:00 Arash Kaffamanesh :
>> > But if I yum update it into 7.1, then we have the issue with nmcli:
>> >
>> > Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
>> > Force execution using --nocheck, but the results are unpredictable.
>>
>> Huh, again?! I thought that was solved after you did yum update...
>> My original answer to that is still the same: "Not sure how that could
>> happen, nmcli is part of the NetworkManager RPM."
>> Can you reproduce this without RDO in the picture, starting with a clean
>> CentOS installation? How are you installing CentOS?
>>
>> Cheers,
>> Alan