From marcos.fermin.lobo at cern.ch Tue Sep 1 07:45:06 2015
From: marcos.fermin.lobo at cern.ch (Marcos Fermin Lobo)
Date: Tue, 1 Sep 2015 07:45:06 +0000
Subject: [Rdo-list] [rdo-list] OpenStack Murano and EC2 api packages
Message-ID:

Hi all,

I'm starting as a Fedora packager (I'm already in the process of getting a sponsor) and I have started with:

- OpenStack Murano project
- OpenStack EC2 API project

Right now I have the RPMs working on my local machine (for both) and I will try to push them upstream ASAP to get your feedback. All feedback will be very welcome.

Regards,
Marcos.
IRC: mflobo

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ichi.sara at gmail.com Wed Sep 2 10:13:41 2015
From: ichi.sara at gmail.com (ICHIBA Sara)
Date: Wed, 2 Sep 2015 12:13:41 +0200
Subject: [Rdo-list] Clean heat database
Message-ID:

Hey there,

Please, I need some help with the heat database. As I failed to delete some heat stacks via the CLI and the dashboard, I deleted them manually; that is, I deleted the resources of those stacks. Now I need to clean my heat database, as I still see the names of those stacks in my dashboard (see the capture enclosed). I already tried to do it myself, but failed:

MariaDB [heat]> delete from stack where id='fd80f026-6c8f-46f2-adfb-61d380eef5cd';
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`heat`.`event`, CONSTRAINT `event_ibfk_1` FOREIGN KEY (`stack_id`) REFERENCES `stack` (`id`))

Can anyone help, please?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Capture_heat.PNG
Type: image/png
Size: 70055 bytes
Desc: not available
URL:

From chkumar246 at gmail.com Wed Sep 2 13:30:14 2015
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 2 Sep 2015 19:00:14 +0530
Subject: [Rdo-list] bug statistics for 2015-09-02
Message-ID:

# RDO Bugs on 2015-09-02

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at .

To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 268
- Fixed (MODIFIED, POST, ON_QA): 177

## Number of open bugs by component

diskimage-builder [ 4] +++
distribution [ 18] ++++++++++++++++
dnsmasq [ 1]
instack [ 4] +++
instack-undercloud [ 23] +++++++++++++++++++++
iproute [ 1]
openstack-ceilometer [ 5] ++++
openstack-cinder [ 14] +++++++++++++
openstack-foreman-inst... [ 3] ++
openstack-glance [ 2] +
openstack-heat [ 3] ++
openstack-horizon [ 5] ++++
openstack-ironic [ 1]
openstack-ironic-disco... [ 2] +
openstack-keystone [ 7] ++++++
openstack-neutron [ 6] +++++
openstack-nova [ 18] ++++++++++++++++
openstack-packstack [ 43] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules [ 10] +++++++++
openstack-selinux [ 12] +++++++++++
openstack-swift [ 2] +
openstack-tripleo [ 24] ++++++++++++++++++++++
openstack-tripleo-heat... [ 5] ++++
openstack-tripleo-imag... [ 2] +
openstack-trove [ 1]
openstack-tuskar [ 3] ++
openstack-utils [ 3] ++
openvswitch [ 2] +
python-glanceclient [ 1]
python-keystonemiddleware [ 1]
python-neutronclient [ 2] +
python-novaclient [ 1]
python-openstackclient [ 5] ++++
python-oslo-config [ 1]
rdo-manager [ 22] ++++++++++++++++++++
rdo-manager-cli [ 6] +++++
rdopkg [ 1]
RFEs [ 3] ++
tempest [ 1]

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, or ON_DEV and has not yet been fixed.
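For the foreign-key error in the "Clean heat database" thread above, here is a minimal sketch of removing the orphaned stack row by hand. Only the `event` table is confirmed by the error message itself; the `resource` table and the information_schema lookup are assumptions about the Kilo-era heat schema and should be checked against the live database (with a backup taken first) before running anything:

    -- list every table that still references heat.stack, so no dependent rows are missed
    SELECT table_name, constraint_name
      FROM information_schema.key_column_usage
     WHERE referenced_table_schema = 'heat'
       AND referenced_table_name = 'stack';

    -- delete dependent rows first (event is named in the error; resource is
    -- assumed to carry a similar stack_id foreign key)
    DELETE FROM event    WHERE stack_id = 'fd80f026-6c8f-46f2-adfb-61d380eef5cd';
    DELETE FROM resource WHERE stack_id = 'fd80f026-6c8f-46f2-adfb-61d380eef5cd';

    -- with the children gone, the parent row can be deleted
    DELETE FROM stack WHERE id = 'fd80f026-6c8f-46f2-adfb-61d380eef5cd';

It may also be worth checking whether the installed heat provides `heat-manage purge_deleted`, which cleans up soft-deleted stacks without touching the tables directly.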
(268 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1228761 ] http://bugzilla.redhat.com/1228761 (NEW) Component: diskimage-builder Last change: 2015-06-10 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos ### distribution (18 bugs) [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1116972 ] http://bugzilla.redhat.com/1116972 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO website: libffi-devel is required to run Tempest (at least on CentOS 6.5) [1116974 ] http://bugzilla.redhat.com/1116974 (NEW) Component: distribution Last change: 2015-06-04 Summary: Running Tempest according to the instructions @ RDO website fails with missing tox.ini error [1116975 ] http://bugzilla.redhat.com/1116975 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO website: configuring TestR according to website, breaks Tox completely [1117007 ] http://bugzilla.redhat.com/1117007 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO website: newer python-nose is required to run Tempest (at least on CentOS 6.5) [update to http://open stack.redhat.com/Testing_IceHouse_using_Tempest] [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1187309 ] http://bugzilla.redhat.com/1187309 (NEW) Component: distribution Last change: 2015-05-08 Summary: New package - python-cliff-tablib [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1212223 ] http://bugzilla.redhat.com/1212223 (NEW) Component: distribution Last change: 2015-06-04 Summary: mariadb requires Requires: mariadb-libs(x86-64) = 1:5.5.35-3.el7 [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1226795 ] http://bugzilla.redhat.com/1226795 (NEW) Component: distribution Last change: 2015-09-01 Summary: RFE: Manila-UI Plugin support in Horizon [1243533 ] http://bugzilla.redhat.com/1243533 
(NEW) Component: distribution Last change: 2015-09-02 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1249035 ] http://bugzilla.redhat.com/1249035 (ASSIGNED) Component: distribution Last change: 2015-07-31 Summary: liberty missing python module unicodecsv [1258560 ] http://bugzilla.redhat.com/1258560 (ASSIGNED) Component: distribution Last change: 2015-09-02 Summary: /usr/share/openstack-dashboard/openstack_dashboard/temp lates/_stylesheets.html: /bin/sh: horizon.utils.scss_filter.HorizonScssFilter: command not found ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### instack (4 bugs) [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (23 bugs) [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." 
[1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. 
See log for details.', 'os-refresh-config')" [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (5 bugs) [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library ### openstack-cinder (14 bugs) [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1229551 ] 
http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-cinder Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (5 bugs) [1210821 ] http://bugzilla.redhat.com/1210821 (NEW) Component: openstack-horizon Last change: 2015-04-10 Summary: horizon should be using rdo logo instead of openstack's [1218896 ] http://bugzilla.redhat.com/1218896 (NEW) Component: openstack-horizon Last change: 2015-05-13 Summary: Remaining Horizon issues for kilo release [1218897 ] http://bugzilla.redhat.com/1218897 (NEW) Component: openstack-horizon Last change: 2015-05-11 Summary: new launch instance does not work with webroot other than '/' [1220070 ] http://bugzilla.redhat.com/1220070 (NEW) Component: openstack-horizon Last change: 2015-05-13 Summary: horizon requires manage.py compress to be run [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. 
(HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf ### openstack-neutron (6 bugs) [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (18 bugs) [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime [1123298 ] http://bugzilla.redhat.com/1123298 (NEW) Component: openstack-nova Last change: 2015-04-26 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allow get object after date exires [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesnt write to log [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1255221 ] http://bugzilla.redhat.com/1255221 (NEW) Component: openstack-nova Last change: 2015-08-20 Summary: Ovirt 3.6 using Cinder as external store does not remove cloned disk image - ceph backend ### openstack-packstack (43 bugs) [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 
Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. 
[1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-08-26 Summary: nss.load missing from packstack, httpd unable to start. ### openstack-puppet-modules (10 bugs) [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing ### openstack-selinux (12 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux [1249685 ] http://bugzilla.redhat.com/1249685 (NEW) Component: openstack-selinux Last change: 2015-08-19 Summary: libffi should not require execmem when selinux is enabled [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux ### openstack-swift (2 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (24 bugs) [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of 
Ironic in to TripleO [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Node registration fails silently if instackenv.json is badly formatted [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the 
cloud the update hangs and takes forever [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix ### openstack-tripleo-heat-templates (5 bugs) [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1232015 ] http://bugzilla.redhat.com/1232015 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: instack-undercloud: one controller deployment: running "pcs status" - Error: cluster is not currently running on this node [1235508 ] http://bugzilla.redhat.com/1235508 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-25 Summary: Package update does not take puppet managed packages into account [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates ### openstack-utils (3 bugs) [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: 
openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service ### openvswitch (2 bugs) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity [1251970 ] http://bugzilla.redhat.com/1251970 (NEW) Component: openvswitch Last change: 2015-08-26 Summary: openvswitch tunnel no communication after update to kernel 3.10.0-229.11.1.el7.x86_64 ### python-glanceclient (1 bug) [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (22 bugs) [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images 
[1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-05-20 Summary: Read bit set for others for Openstack services directories in /etc [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon ### rdo-manager-cli (6 bugs) [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, or ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(177 bugs) ### distribution (5 bugs) [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (1 bug) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (5 bugs) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] ### openstack-glance (3 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue ### openstack-heat (3 bugs) [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: 
missing dependency in Heat delorean build ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (13 bugs) [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service ### openstack-nova (5 bugs) [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. 
[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options ### openstack-packstack (58 bugs) [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1006534 ] http://bugzilla.redhat.com/1006534 
(MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack 
Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
### openstack-puppet-modules (18 bugs) [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance ### openstack-sahara (1 bug) [1184522 ] http://bugzilla.redhat.com/1184522 (MODIFIED) Component: openstack-sahara Last change: 2015-03-27 Summary: launch_command.py missing ### openstack-selinux (12 bugs) [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux 
policies set for neutron-dhcp-agent [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo (1 bug) [1162333 ] http://bugzilla.redhat.com/1162333 (ON_QA) Component: openstack-tripleo Last change: 2015-06-02 Summary: Instack fails to complete instack-virt-setup with syntax error near unexpected token `newline' ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py ### openstack-utils 
(2 bugs) [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ ### python-django-openstack-auth (3 bugs) [1218894 ] http://bugzilla.redhat.com/1218894 (ON_QA) Component: python-django-openstack-auth Last change: 2015-06-24 Summary: Horizon: Re login failed after timeout [1218899 ] http://bugzilla.redhat.com/1218899 (ON_QA) Component: python-django-openstack-auth Last change: 2015-06-24 Summary: permission checks issue / not properly checking enabled services [1232683 ] http://bugzilla.redhat.com/1232683 (MODIFIED) Component: python-django-openstack-auth Last change: 2015-09-02 Summary: horizon manage.py syncdb errors on "App 'openstack_auth' doesn't have a 'user' model." 
### python-glanceclient (3 bugs) [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1244291 ] http://bugzilla.redhat.com/1244291 (MODIFIED) Component: python-glanceclient Last change: 2015-08-01 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion ### python-neutronclient (3 bugs) [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (5 bugs) [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: 
instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason ### rdo-manager-cli (8 bugs) [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-07-23 Summary: OSC plugin isn't saving plan configuration values [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Wed Sep 2 16:00:41 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 2 Sep 2015 21:30:41 +0530 Subject: [Rdo-list] [meeting] RDO packaging meeting (2015-09-02) Message-ID: Hello, ======================================== #rdo: RDO packaging meeting (2015-09-02) ======================================== Meeting started by chandankumar at 15:01:01 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-09-02/rdo.2015-09-02-15.01.log.html . 
Meeting summary --------------- * Discover module removal from upstream openstack-* packages (chandankumar, 15:02:42) * patch regarding removing discover module from test-requirements sent and is merged in sahara, cinder, nova, designate, ceilometer, manila, magnum, zaqar, barbican, ironic (chandankumar, 15:03:01) * LINK: https://review.openstack.org/218285 (chandankumar, 15:04:00) * RDO packages specs update to python3 (chandankumar, 15:04:54) * LINK: https://etherpad.openstack.org/p/RDO-python3-porting-packages (chandankumar, 15:05:38) * take care of obsoleting the python-xxx modules while updating the spec to python3 (chandankumar, 15:09:12) * LINK: (chandankumar, 15:10:35) * LINK: https://chandankumar.fedorapeople.org/python-eventlet.spec (chandankumar, 15:10:38) * ACTION: chandankumar to send a mail to python-eventlet maintainer with updated spec (chandankumar, 15:11:46) * LINK: https://trello.com/c/eT0jvRGN/67-rdo-rpm-macros (number80, 15:18:22) * ACTION: pixelb commit the eventlet py3 update to rawhide/f23 (apevec, 15:19:58) * ACTION: chandankumar to update the specs of python-* packages to py3 (chandankumar, 15:20:09) * new package scm requests (chandankumar, 15:21:34) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1257329 (chandankumar, 15:22:07) * openstack-ironic-python-agent is imported in openstack-packages. (chandankumar, 15:23:55) * ACTION: apevec to review Add ironic-python-agent in rdoinfo (apevec, 15:24:01) * new package under review (chandankumar, 15:24:51) * LINK: python-jsonpath-rw-ext - https://bugzilla.redhat.com/show_bug.cgi?id=1259075 (chandankumar, 15:25:17) * LINK: python-yaql https://bugzilla.redhat.com/show_bug.cgi?id=1257178 (number80, 15:26:28) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=RDO-LIBERTY-REVIEWS (apevec, 15:28:08) * ACTION: chandankumar to review python-jsonpath-rw-ext (chandankumar, 15:29:10) * liberty-3 Rawhide updates (chandankumar, 15:30:46) * LINK: https://trello.com/c/GPqDlVLs/63-liberty-3-rpms (chandankumar, 15:33:14) * ACTION: apevec to update pyton-oslo-* to the latest releases (apevec, 15:33:46) * LINK: https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora (chandankumar, 15:34:22) * ACTION: apevec to post on rdo-list summary from https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora (apevec, 15:38:19) * open floor (chandankumar, 15:43:56) Meeting ended at 15:51:43 UTC. 
Action Items ------------ * chandankumar to send a mail to python-eventlet maintainer with updated spec * pixelb commit the eventlet py3 update to rawhide/f23 * chandankumar to update the specs of python-* packages to py3 * apevec to review Add ironic-python-agent in rdoinfo * chandankumar to review python-jsonpath-rw-ext * apevec to update pyton-oslo-* to the latest releases * apevec to post on rdo-list summary from https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora Action Items, by person ----------------------- * apevec * apevec to review Add ironic-python-agent in rdoinfo * apevec to update pyton-oslo-* to the latest releases * apevec to post on rdo-list summary from https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora * chandankumar * chandankumar to send a mail to python-eventlet maintainer with updated spec * chandankumar to update the specs of python-* packages to py3 * chandankumar to review python-jsonpath-rw-ext * pixelb * pixelb commit the eventlet py3 update to rawhide/f23 * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * chandankumar (72) * apevec (64) * number80 (23) * zodbot (9) * social (5) * trown (4) * jpena (4) * pixelb (2) * rbowen (1) * paragan (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Sep 3 16:27:25 2015 From: whayutin at redhat.com (whayutin) Date: Thu, 03 Sep 2015 12:27:25 -0400 Subject: [Rdo-list] [CI] FYI.. rdo ci will go down this afternoon for scheduled maintenance. Message-ID: <1441297645.2823.30.camel@redhat.com> The openshift team is migrating https://prod-rdojenkins.rhcloud.com/?to another server. I will shutdown the jenkins server at 3pm EST today, we should be back online later this evening. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Thu Sep 3 17:28:36 2015 From: mrunge at redhat.com (Matthias Runge) Date: Thu, 3 Sep 2015 19:28:36 +0200 Subject: [Rdo-list] Proposal: switch delorean to f22 Message-ID: <55E88344.2030809@redhat.com> Hello, this was discussed briefly on irc today. The question was mainly, under the light of python3 sub-packages coming in, we'd need either to backport patches to f21 or to do something else. With f23 in sight, we should switch from f21 to f22 as base distro for delorean. Thoughts on that? We shipped kilo via RDO on top of f22, and liberty will probably be shipped as additional repo to f23. IMHO it would be honest and catch most issues, if we would be basing on f23, but that might be a bit adventurous? Matthias From jrist at redhat.com Thu Sep 3 18:02:06 2015 From: jrist at redhat.com (Jason Rist) Date: Thu, 3 Sep 2015 12:02:06 -0600 Subject: [Rdo-list] Proposal: switch delorean to f22 In-Reply-To: <55E88344.2030809@redhat.com> References: <55E88344.2030809@redhat.com> Message-ID: <55E88B1E.2070404@redhat.com> On 09/03/2015 11:28 AM, Matthias Runge wrote: > Hello, > > this was discussed briefly on irc today. The question was mainly, under > the light of python3 sub-packages coming in, we'd need either to > backport patches to f21 or to do something else. > > With f23 in sight, we should switch from f21 to f22 as base distro for > delorean. > > Thoughts on that? We shipped kilo via RDO on top of f22, and liberty > will probably be shipped as additional repo to f23. 
> > IMHO it would be honest and catch most issues, if we would be basing on > f23, but that might be a bit adventurous? > > Matthias > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > I say the sooner the better. -- Jason E. Rist Senior Software Engineer OpenStack Infrastructure Integration Red Hat, Inc. openuc: +1.972.707.6408 mobile: +1.720.256.3933 Freenode: jrist github/identi.ca: knowncitizen From rbowen at redhat.com Fri Sep 4 13:04:51 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 4 Sep 2015 09:04:51 -0400 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter, September 2015 Message-ID: <55E996F3.8000502@redhat.com> September 2015 RDO Community Newsletter (This newsletter is also available online at https://www.rdoproject.org/Newsletter/2015_September ) Quick links: * Quick Start - http://rdoproject.org/quickstart * Mailing Lists - http://rdoproject.org/Mailing_lists * RDO packages - http://rdoproject.org/repos/ with the trunk packages in http://rdoproject.org/repos/openstack/openstack-trunk/ * RDO blog - http://rdoproject.org/blog * Q&A - http://ask.openstack.org/ * Open Tickets - http://tm3.org/rdobugs * Twitter - http://twitter.com/rdocommunity Thanks for being part of the RDO community! Mailing List Update =================== Here's what's been going on in the RDO world since you last heard from me. * GSoC The Google Summer of Code has ended, and Asadullah Hussain, who was working on a "Cloud In A box" bootable image, using RDO, has completed his project. Asad says: Hi, I have been working on developing a CentOS-RDO remix as part of my GSoC project with Rich Bowen. I have pushed the first release of the project and the ISO is available at: http://buildlogs.centos.org/gsoc2015/cloud-in-a-box/CentOS-7-x86_64-RDO-1503-2015082701.iso. The ISO allows the user to install RDO during CentOS installation via both --alinone & --answer-file modes of packstack. Development involved writing an extension (addon) to the Anaconda Installer, the source along with testing instructions is available at: https://github.com/asadpiz/org_centos_cloud I would appreciate feeback/testing of this addon as the main motivation was to provide a pre-configured, assumption driven "cloud in a box" to the users. * Packaging meetings We continue to have packaging meetings every week. You can catch up on the minutes at: https://www.redhat.com/archives/rdo-list/2015-August/msg00035.html https://www.redhat.com/archives/rdo-list/2015-August/msg00064.html https://www.redhat.com/archives/rdo-list/2015-August/msg00107.html https://www.redhat.com/archives/rdo-list/2015-August/msg00137.html * Bug Statistics Chandan Kumar has started posting weekly bug statistics summaries. These are great for tracking the progress of the project, and are also an excellent place to look if you're getting started and looking for something to start working on. You can see the latest of these messages at https://www.redhat.com/archives/rdo-list/2015-August/msg00136.html * The future of Packstack There was some discussion of the role of Packstack, and its future, in light of the work that's happening on RDO Manager. 
You can pick up that thread at https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html RDO Blogs ========= In August I started a series of interviews with various OpenStack PTLs (Project Technical Leads), talking about not only what's coming in Liberty, but the long-term vision that these PTLs have for their corner of OpenStack. I started by interviewing Flavio Percoco, PTL of the Zaqar project: https://www.rdoproject.org/forum/discussion/1031/flavio-percoco-talks-about-the-zaqar-project Next, I spoke with Davanum "Dims" Srinivas, PTL of Oslo: https://www.rdoproject.org/forum/discussion/1032/dims-talks-about-the-oslo-project/p1 And, most recently, I talked with Kyle Mestery, PTL of the Neutron project: https://www.rdoproject.org/forum/discussion/1035/kyle-mestery-and-the-future-of-neutron I hope to continue these interviews in the coming months, and eventually get around to all of the projects. Of course, I'm not the only person interviewing the PTLs. You can hear some of the other PTL interviews at https://www.youtube.com/playlist?list=PLKqaoAnDyfgpNGTIfXQ53UCQAJPn8u25v Events ====== * OpenStack Summit We're less than 2 months away from the OpenStack Summit in Tokyo. The schedule has been published at https://www.openstack.org/summit/tokyo-2015/schedule/ and you can still register at https://www.eventbrite.com/e/openstack-summit-october-2015-tokyo-tickets-17356780598 In addition to tons of great OpenStack content, this will be the design summit for the Mitaka cycle, where the community will decide what the priorities will be for the next release. You don't want to miss it. And we're planning to have another RDO community meetup there, which is the best time to learn about what's happening in RDO, and where you can get involved. Watch the rdo-list mailing list, and @RDOCommunity on Twitter, for details as they become available. * FOSDEM Save the date! FOSDEM will be held 30 & 31 January 2016 in Brussels. We're planning a full-day RDO meetup the day before, for in-depth coverage of some of the topics that we can only briefly touch on in an hour-long meetup. Details are still being worked out, and we'll tell you in the next edition of this newsletter, by which time we should have a location and other details. * Meetups Every week, I update the events listing at http://rdoproject.org/Events including the meetups that happen almost every day of every week. If you're planning a meetup and it's not listed there, please get in touch. TryStack ======== TryStack.org, the site where you can try out OpenStack without having to deploy it yourself, is back online, after an extended outage. During that time, the site was upgraded to better hardware, and is now running OpenStack RDO Kilo. To use TryStack, you'll need to join the TryStack Facebook group (that's how authentication is done), and then you'll have access to a full OpenStack installation where you can create networks, spin up instances, and generally explore what OpenStack can do. Go to http://trystack.org/ to get started. Keep in touch ============= There's lots of ways to stay in in touch with what's going on in the RDO community. The best ways are ... 
WWW * RDO - http://rdoproject.org/ * OpenStack Q&A - http://ask.openstack.org/ Mailing Lists: * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter IRC * IRC - #rdo on Freenode.irc.net * Puppet module development - #rdo-puppet Social Media: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * Facebook - http://facebook.com/rdocommunity Thanks again for being part of the RDO community! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From mattdm at fedoraproject.org Fri Sep 4 15:17:16 2015 From: mattdm at fedoraproject.org (Matthew Miller) Date: Fri, 4 Sep 2015 11:17:16 -0400 Subject: [Rdo-list] Question about Restart=always in systemd In-Reply-To: <936474675.323251.1440168150920.JavaMail.zimbra@speichert.pl> References: <316691786CCAEE44AE90482264E3AB8213A9812C@xmb-rcd-x08.cisco.com> <936474675.323251.1440168150920.JavaMail.zimbra@speichert.pl> Message-ID: <20150904151716.GA5011@mattdm.org> On Fri, Aug 21, 2015 at 04:42:30PM +0200, Daniel Speichert wrote: > I +1 that issue. I've noticed large differences in the systemd config > units for many services, often without specific functional needs. > > I'd think that "Restart=always" is a good setting for all services. > What it really brings up is maybe the issue of streamlining the unit > config files. For several releases, we've had packaging guidelines in Fedora encouraging Restart=on-failure or Restart=on-abnormal: https://fedoraproject.org/wiki/Packaging:Systemd#Automatic_restarting We never, however, had an effort to bring existing packages into a consistent state. I'd love for that effort to happen ? anyone interesting in helping out? -- Matthew Miller mattdm at mattdm.org Fedora Project Leader mattdm at fedoraproject.org From javier.pena at redhat.com Fri Sep 4 15:44:53 2015 From: javier.pena at redhat.com (Javier Pena) Date: Fri, 4 Sep 2015 11:44:53 -0400 (EDT) Subject: [Rdo-list] Proposal: switch delorean to f22 In-Reply-To: <55E88B1E.2070404@redhat.com> References: <55E88344.2030809@redhat.com> <55E88B1E.2070404@redhat.com> Message-ID: <505686858.39152835.1441381493693.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On 09/03/2015 11:28 AM, Matthias Runge wrote: > > Hello, > > > > this was discussed briefly on irc today. The question was mainly, under > > the light of python3 sub-packages coming in, we'd need either to > > backport patches to f21 or to do something else. > > > > With f23 in sight, we should switch from f21 to f22 as base distro for > > delorean. > > > > Thoughts on that? We shipped kilo via RDO on top of f22, and liberty > > will probably be shipped as additional repo to f23. > > > > IMHO it would be honest and catch most issues, if we would be basing on > > f23, but that might be a bit adventurous? > > > > Matthias > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > I say the sooner the better. > I have made some tests on a local instance, and building using a Fedora 22 image seems to work just fine. 
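[As an illustration of the kind of check described above -- not how Delorean itself drives its builds, just a rough local approximation assuming you have a source RPM at hand (the SRPM name below is only a placeholder):

    sudo mock -r fedora-22-x86_64 --init                              # prepare a clean Fedora 22 buildroot
    sudo mock -r fedora-22-x86_64 --rebuild python-example-1.0-1.src.rpm   # placeholder SRPM name
]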
After some discussions today, I'm planning to configure a parallel f22 instance in the current Delorean server next Monday, and leave it running for a week or so to see if it works as expected. Regards, Javier > -- > Jason E. Rist > Senior Software Engineer > OpenStack Infrastructure Integration > Red Hat, Inc. > openuc: +1.972.707.6408 > mobile: +1.720.256.3933 > Freenode: jrist > github/identi.ca: knowncitizen > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Fri Sep 4 16:10:21 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 4 Sep 2015 12:10:21 -0400 Subject: [Rdo-list] RDO test days Message-ID: <55E9C26D.3060608@redhat.com> We'd like to do a couple of test days leading up to the Liberty release, to make sure we're in good shape. We were thinking maybe one in the latter half of September - say, Sep 23 or 24? - to test a Delorean snapshot that has passed CI. And then another test day on October 15th/16th for GA. The GA release is to be on the 15th, so we'd be testing with what will be in that release. (I'm reluctant to do this the week after GA, simply because it's the week before Summit, and people will be distracted and traveling.) Thoughts? -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From whayutin at redhat.com Fri Sep 4 20:05:35 2015 From: whayutin at redhat.com (whayutin) Date: Fri, 04 Sep 2015 16:05:35 -0400 Subject: [Rdo-list] [CI] neutron fails to start packstack liberty Message-ID: <1441397135.3009.4.camel@redhat.com> https://bugzilla.redhat.com/show_bug.cgi?id=1260222 ERROR neutron.services.service_base [-] No providers specified for 'LOADBALANCER' service, exiting -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Fri Sep 4 20:12:19 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 4 Sep 2015 22:12:19 +0200 Subject: [Rdo-list] [CI] neutron fails to start packstack liberty In-Reply-To: <1441397135.3009.4.camel@redhat.com> References: <1441397135.3009.4.camel@redhat.com> Message-ID: <6CF492A7-D70A-4230-8D7C-A11FE0016140@redhat.com> > On 04 Sep 2015, at 22:05, whayutin wrote: > > https://bugzilla.redhat.com/show_bug.cgi?id=1260222 > > ERROR neutron.services.service_base [-] No providers specified for 'LOADBALANCER' service, exiting > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com Fixed in u/s. I closed the bug. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hguemar at fedoraproject.org Sat Sep 5 20:25:52 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Sat, 5 Sep 2015 22:25:52 +0200 Subject: [Rdo-list] [RFC] RDO Big Tent projects acceptance guidelines Message-ID: Hi, RDO is growing, and with upstream Big Tent initiative, many people offered to integrate more projects into RDO. As a truly open community, we need to set proper guidelines to define which projects should be accepted or not in RDO. Below, you'll find a draft for such guidelines, and I would like to receive your feedback Projects have to 1. 
be part of the OpenStack upstream ecosystem (e.g. licenses)
2. have an identified downstream maintainer

Maintainers have to
1. provide packages that comply with Fedora packaging guidelines, minus
exceptions granted by the RDO team [1]
2. commit to fixing tickets reported on the RDO bug tracker
3. fix FTBFS [2] in stable and master branches
4. commit to integrating and maintaining their project in RDO CI
5. agree to license the packaging under the MIT license (default) or any
other license approved by Fedora Legal.

As long as all these conditions are respected, any project is welcome
under RDO :)

If everyone agrees, we will publish these guidelines on our website.

Regards,
H.

[1] RDO team is to be understood as the community behind the maintenance
of RDO, not RDO Engineering, which is an internal team at Red Hat.
[2] Fail to Build From Source

From Tim.Bell at cern.ch Sun Sep 6 07:30:37 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Sun, 6 Sep 2015 07:30:37 +0000
Subject: [Rdo-list] [RFC] RDO Big Tent projects acceptance guidelines
In-Reply-To: 
References: 
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1B46@CERNXCHG44.cern.ch>

> -----Original Message-----
> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com]
> On Behalf Of Haïkel
> Sent: 05 September 2015 22:26
> To: rdo-list at redhat.com
> Subject: [Rdo-list] [RFC] RDO Big Tent projects acceptance guidelines
>
> Hi,
>
> RDO is growing, and with upstream Big Tent initiative, many people offered
> to integrate more projects into RDO.
>
> As a truly open community, we need to set proper guidelines to define which
> projects should be accepted or not in RDO.
> Below, you'll find a draft for such guidelines, and I would like to receive
> your feedback
>
> Projects have to
> 1. be part of the OpenStack upstream ecosystem (e.g. licenses)
> 2. have an identified downstream maintainer
>
> Maintainers have to
> 1. provide packages that comply with Fedora packaging guidelines, minus
> exceptions granted by the RDO team [1]
> 2. commit to fixing tickets reported on the RDO bug tracker
> 3. fix FTBFS [2] in stable and master branches
> 4. commit to integrating and maintaining their project in RDO CI
> 5. agree to license the packaging under the MIT license (default) or any
> other license approved by Fedora Legal.
>

To understand requirement 2, does this mean fixing tickets related to the
packaging or tickets related to the upstream project? CERN are certainly
interested in contributing our packaging of some components not yet in RDO
(such as the ec2-api and Murano packages we're already working on), but
there would be concerns if we take on commitments for the upstream project
rather than just the package maintenance.

One other requirement that is more specific to OpenStack than to Fedora is
the maintenance of packages for newer versions of OpenStack. At what stage
during the lifecycle of a release such as Liberty would it be expected that
packages be updated? And for how many older releases would it be expected
to be maintained?

> As long as all these conditions are respected, any project is welcome
> under RDO :)
>
> If everyone agrees, we will publish these guidelines on our website.
>
> Regards,
> H.
>
> [1] RDO team is to be understood as the community behind the maintenance
> of RDO, not RDO Engineering, which is an internal team at Red Hat.
> [2] Fail to Build From Source
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 7349 bytes
Desc: not available
URL: 

From Tim.Bell at cern.ch Sun Sep 6 07:35:30 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Sun, 6 Sep 2015 07:35:30 +0000
Subject: [Rdo-list] packstack future
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch>

Reading the RDO September newsletter, I noticed a mail thread
(https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html) on
the future of packstack vs rdo-manager.

We use packstack to spin up small OpenStack instances for development and
testing. Typical cases are to have a look at the features of the latest
releases or do some prototyping of an option we've not tried yet.

It was not clear to me based on the mailing list thread as to how this
could be done using rdo-manager unless you already have the undercloud
configured by RDO.

Has there been any further discussion around packstack's future?

Tim

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 7349 bytes
Desc: not available
URL: 

From hguemar at fedoraproject.org Mon Sep 7 11:02:32 2015
From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=)
Date: Mon, 7 Sep 2015 13:02:32 +0200
Subject: [Rdo-list] [RFC] RDO Big Tent projects acceptance guidelines
In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1B46@CERNXCHG44.cern.ch>
References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1B46@CERNXCHG44.cern.ch>
Message-ID: 

2015-09-06 9:30 GMT+02:00 Tim Bell :
>
> To understand requirement 2, does this mean fixing tickets related to
> the packaging or tickets related to the upstream project? CERN are
> certainly interested in contributing our packaging of some components not
> yet in RDO (such as the ec2-api and Murano packages we're already working
> on), but there would be concerns if we take on commitments for the upstream
> project rather than just the package maintenance.
>

I meant downstream tickets: package maintainers are supposed to fix
packaging-related issues and, if necessary, report issues upstream.
Nobody expects package maintainers to fix bugs upstream themselves.
What we want to avoid is having packages that are not attended to at all.

> One other requirement that is more specific to OpenStack than to Fedora is
> the maintenance of packages for newer versions of OpenStack. At what stage
> during the lifecycle of a release such as Liberty would it be expected that
> packages be updated? And for how many older releases would it be expected
> to be maintained?
>

RDO packages are supposed to be updated in the following cases:
* new stable release upstream
* packaging fixes
* CVE (depending on severity)

Until now, we've been following upstream lifecycles, which means that we
support two stable branches + the master branch through delorean.

Regards,
H.
From shardy at redhat.com Mon Sep 7 13:07:56 2015
From: shardy at redhat.com (Steven Hardy)
Date: Mon, 7 Sep 2015 14:07:56 +0100
Subject: [Rdo-list] packstack future
In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch>
References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch>
Message-ID: <20150907130756.GA9590@t430slt.redhat.com>

Hi Tim,

On Sun, Sep 06, 2015 at 07:35:30AM +0000, Tim Bell wrote:
> Reading the RDO September newsletter, I noticed a mail thread
> (https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html) on
> the future of packstack vs rdo-manager.
>
> We use packstack to spin up small OpenStack instances for development and
> testing. Typical cases are to have a look at the features of the latest
> releases or do some prototyping of an option we've not tried yet.
>
> It was not clear to me based on the mailing list thread as to how this
> could be done using rdo-manager unless you already have the undercloud
> configured by RDO.
> Has there been any further discussion around packstack's future?

Thanks for raising this - I am aware that a number of folks have been
thinking about this topic (myself included), but I don't think we've yet
reached a definitive consensus re the path forward.

Here's my view on the subject:

1. Packstack is clearly successful, useful to a lot of folks, and does
satisfy a use-case currently not well served via rdo-manager, so IMO we
absolutely should maintain it until that is no longer the case.

2. Many people are interested in easier ways to stand up PoC environments
via rdo-manager, so we do need to work on ways to make that easier (or
even possible at all in the single-node case).

3. It would be really great if we could figure out (2) in such a way as to
enable a simple migration path from packstack to whatever the PoC mode of
rdo-manager ends up being, for example perhaps we could have an rdo
manager interface which is capable of consuming a packstack answer file?

Re the thread you reference, it raises a number of interesting questions,
particularly the similarities/differences between an all-in-one packstack
install and an all-in-one undercloud install;

From an abstract perspective, installing an all-in-one undercloud looks a
lot like installing an all-in-one packstack environment: both sets of
tools take a config file, and create a puppet-configured all-in-one
OpenStack.

But there's a lot of potential complexity related to providing a
flexible/configurable deployment (like packstack) vs an opinionated
bootstrap environment (e.g. the current instack undercloud environment).

There are a few possible approaches:

- Do the work to enable a more flexibly configured undercloud, and just
  have that as the "all in one" solution

- Have some sort of transient undercloud (I'm thinking a container) which
  exists only for the duration of deploying the all-in-one overcloud, on
  the local (pre-provisioned, e.g. not via Ironic) host. Some prototyping
  of this approach has already happened [1] which I think James Slagle has
  used to successfully deploy TripleO templates on pre-provisioned nodes.

The latter approach is quite interesting, because it potentially maintains
a greater degree of symmetry between the minimal PoC install and real
production deployments (e.g. you'd use the same heat templates etc), and it
could also potentially provide easier access to features as they are added
to overcloud templates (container integration, as an example), vs
integrating new features in two places.
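[To make point (3) above concrete, this is roughly the packstack workflow being referred to -- a sketch using packstack's documented options, with an arbitrary answer-file name:

    packstack --gen-answer-file=my-answers.txt   # write a template answer file
    vi my-answers.txt                            # tweak CONFIG_* options as needed
    packstack --answer-file=my-answers.txt       # apply it
    # or, for a throwaway single-node setup:
    packstack --allinone

The idea floated above is that an rdo-manager PoC mode could accept the same kind of answer file as input.]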
Overall at this point I think there are still many unanswered questions around enabling the PoC use-case for rdo-manager (and, more generally, making TripleO upstream more easily consumable for these kinds of use-cases). I hope/expect we'll have a TripleO session on this at the forthcoming summit, where we refine the various ideas people have been investigating, and define the path forward wrt PoC deployments. Hopefully that is somewhat helpful, and thanks again for re-starting this discussion! :) Steve [1] https://etherpad.openstack.org/p/noop-softwareconfig From pbrady at redhat.com Mon Sep 7 14:28:25 2015 From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Mon, 07 Sep 2015 15:28:25 +0100 Subject: [Rdo-list] qpid support Message-ID: <55ED9F09.1010504@redhat.com> I see that python-qpid has been retired in Fedora, though I see that iboverma at redhat.com has made attempts to reinstate it. Irina, has there been progress on the Fedora ticket to reinstate that? In the meantime it may make sense to host python-qpid on RDO temporarily to avoid install dep issues. python-oslo-messaging currently requires it, for example. On that note, what is the longer-term view on supporting qpid? thanks, Pádraig. From ihrachys at redhat.com Mon Sep 7 14:34:20 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 7 Sep 2015 16:34:20 +0200 Subject: [Rdo-list] qpid support In-Reply-To: <55ED9F09.1010504@redhat.com> References: <55ED9F09.1010504@redhat.com> Message-ID: <88964CAC-ED4F-4804-97EB-B84C43312586@redhat.com> > On 07 Sep 2015, at 16:28, Pádraig Brady wrote: > > I see that python-qpid has been retired in Fedora, > though I see that iboverma at redhat.com has made attempts > to reinstate it. Irina, has there been progress on > the Fedora ticket to reinstate that? > > In the meantime it may make sense to host > python-qpid on RDO temporarily to avoid install dep issues. > python-oslo-messaging currently requires it, for example. > > On that note, what is the longer-term view on supporting qpid? > > thanks, > Pádraig. I don't believe we should care much about Qpid in the context of RDO and OpenStack. I planned to just kill the dep from the openstack-neutron packages (it has not belonged there since Juno, when the project switched to oslo.messaging anyway). Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hguemar at fedoraproject.org Mon Sep 7 14:40:02 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 7 Sep 2015 16:40:02 +0200 Subject: [Rdo-list] qpid support In-Reply-To: <88964CAC-ED4F-4804-97EB-B84C43312586@redhat.com> References: <55ED9F09.1010504@redhat.com> <88964CAC-ED4F-4804-97EB-B84C43312586@redhat.com> Message-ID: For the record, the package has been unretired https://bugzilla.redhat.com/show_bug.cgi?id=1248100 AFAIK, nobody uses qpid support nowadays with RDO. Regards, H.
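As a side note on the dependency question above: a quick way to see what would still drag python-qpid onto a node is to query the package metadata, e.g. (assuming yum-utils is installed for repoquery):

  # available packages that require python-qpid
  repoquery --whatrequires python-qpid

  # installed packages that depend on it
  rpm -q --whatrequires python-qpid

If the only remaining consumer is a stale Requires like the openstack-neutron one mentioned above, dropping it from that spec removes the problem.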
From hguemar at fedoraproject.org Mon Sep 7 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 7 Sep 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150907150003.A14C560A4003@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-09-09 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From hguemar at fedoraproject.org Mon Sep 7 18:19:18 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 7 Sep 2015 20:19:18 +0200 Subject: [Rdo-list] [packaging] heads-ups about python packaging guidelines Message-ID: Hi, Fedora python guidelines have been updated with newer macros, affecting some of our base dependencies. To keep our packaging consistent and avoid error-prone workarounds to support both Fedora and CentOS, I collected these in a rdo-rpm-macros. https://github.com/hguemar/rdo-rpm-macros/ >From now onwards, we'll maintain macros in this package to ensure compatibility between targets and simplify OpenStack packaging. It has been added to CBS buildroot by alphacc (search for rdo-rpm-macros: http://cbs.centos.org/kojifiles/work/tasks/2021/32021/root.log) Some recommendations: * drop all the workarounds * if you're updating an existing package (ie: python-xxx) python2-xxx should obsolete python-xxx to avoid upgrade issues Obsoletes: python-xxx <= * new package: take care that we provides the proper name of the package and not python-%{pypi_name} Looks like pyp2rpm does not fix it for you. * for new dependencies: prefer using python2-xxx instead of python-xxx as default version of python may change in the future. Please do *not* rush in changing all deps, keep existing requires! Our priority should be stabilizing RDO Liberty now. This will be done in Mitaka cycle. Regards, H. From jslagle at redhat.com Tue Sep 8 14:24:06 2015 From: jslagle at redhat.com (James Slagle) Date: Tue, 8 Sep 2015 10:24:06 -0400 Subject: [Rdo-list] [rdo-manager] Moving some rdo-manager components to git.openstack.org In-Reply-To: <20150825190458.GG3271@localhost.localdomain> References: <20150825190458.GG3271@localhost.localdomain> Message-ID: <20150908142406.GH12870@localhost.localdomain> On Tue, Aug 25, 2015 at 03:04:58PM -0400, James Slagle wrote: > Recently, folks have been working on moving the rdo-manager based workflow > upstream into the TripleO project directly. > > This has been discussed on openstack-dev as part of this thread: > http://lists.openstack.org/pipermail/openstack-dev/2015-July/070140.html > Note that the thread spans into August as well. > > As part of this move, a patch has been proposed to move the following repos out > from under github.com/rdo-management to git.openstack.org: > > instack > instack-undercloud > python-rdomanager-oscplugin (will be renamed in the process, probably > to python-tripleoclient) > > The patch is here: https://review.openstack.org/#/c/215186 The above patch merged this morning, and the repos are now live in their new locations. Note that python-rdomanager-oscplugin has a new name: python-tripleoclient. 
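A rename like python-rdomanager-oscplugin to python-tripleoclient is handled in the spec the same way as the python-xxx to python2-xxx moves from the packaging heads-up earlier in this digest: the new package obsoletes and provides the old name so upgrades stay clean. A minimal sketch (the version bound is a placeholder, not the actual last version shipped under the old name):

  # in python-tripleoclient.spec (illustrative)
  Obsoletes: python-rdomanager-oscplugin < 0.0.9
  Provides:  python-rdomanager-oscplugin = %{version}-%{release}

Using a versioned Obsoletes, rather than an unversioned one, keeps the door open for a future package under the old name if one ever reappears.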
The web links for the new repos are: http://git.openstack.org/cgit/openstack/instack/ http://git.openstack.org/cgit/openstack/instack-undercloud/ http://git.openstack.org/cgit/openstack/python-tripleoclient/ http://git.openstack.org/cgit/openstack/tripleo-docs/ Later today I'll be updating the gerrithub acl's for instack/instack-undercloud so that no new patches can be submitted there, as it should all be done against the upstream gerrit now (review.openstack.org). There are still some open reviews on gerrithub for instack-undercloud. I'm going to individually comment on those and encourage folks to resubmit them at review.openstack.org. If they're not all moved over in a couple of days, I'll either resubmit them myself (preserving the original author), or abandon them if they're no longer relevant. -- -- James Slagle -- From jslagle at redhat.com Tue Sep 8 14:42:51 2015 From: jslagle at redhat.com (James Slagle) Date: Tue, 8 Sep 2015 10:42:51 -0400 Subject: [Rdo-list] packstack future In-Reply-To: <20150907130756.GA9590@t430slt.redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch> <20150907130756.GA9590@t430slt.redhat.com> Message-ID: <20150908144251.GJ12870@localhost.localdomain> On Mon, Sep 07, 2015 at 02:07:56PM +0100, Steven Hardy wrote: > Hi Tim, > > On Sun, Sep 06, 2015 at 07:35:30AM +0000, Tim Bell wrote: > > Reading the RDO September newsletter, I noticed a mail thread > > (https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html) on > > the future of packstack vs rdo-manager. > > > > We use packstack to spin up small OpenStack instances for development and > > testing. Typical cases are to have a look at the features of the latest > > releases or do some prototyping of an option we've not tried yet. > > > > It was not clear to me based on the mailing list thread as to how this > > could be done using rdo-manager unless you already have the undercloud > > configiured by RDO. > > > Has there been any further discussions around packstack future ? > > Thanks for raising this - I am aware that a number of folks have been > thinking about this topic (myself included), but I don't think we've yet > reached a definitive consensus re the path forward yet. > > Here's my view on the subject: > > 1. Packstack is clearly successful, useful to a lot of folks, and does > satisfy a use-case currently not well served via rdo-manager, so IMO we > absolutely should maintain it until that is no longer the case. > > 2. Many people are interested in easier ways to stand up PoC environments > via rdo-manager, so we do need to work on ways to make that easier (or even > possible at all in the single-node case). > > 3. It would be really great if we could figure out (2) in such a way as to > enable a simple migration path from packstack to whatever the PoC mode of > rdo-manager ends up being, for example perhaps we could have an rdo manager > interface which is capable of consuming a packstack answer file? > > Re the thread you reference, it raises a number of interesting questions, > particularly the similarities/differences between an all-in-one packstack > install and an all-in-one undercloud install; > > >From an abstract perspective, installing an all-in-one undercloud looks a > lot like installing an all-in-one packstack environment, both sets of tools > take a config file, and create a puppet-configured all-in-one OpenStack. 
> > But there's a lot of potential complexity related to providing a > flexible/configurable deployment (like packstack) vs an opinionated > bootstrap environment (e.g the current instack undercloud environment). Besides there being some TripleO related history (which I won't bore everyone with), the above is a big reason why we didn't just use packstack originally to install the all-in-one undercloud. As you point out, the undercloud installer is very opinionated by design. It's not meant to be a flexible all-in-one *OpenStack* installer, nor do I think we want to turn it into one. That would just end up in reimplementing packstack. > > There are a few possible approaches: > > - Do the work to enable a more flexibly configured undercloud, and just > have that as the "all in one" solution -1 :). > - Have some sort of transient undercloud (I'm thinking a container) which > exists only for the duration of deploying the all-in-one overcloud, on > the local (pre-provisioned, e.g not via Ironic) host. Some prototyping > of this approach has already happened [1] which I think James Slagle has > used to successfully deploy TripleO templates on pre-provisioned nodes. Right, so my thinking was to leverage the work (or some part of it) that Jeff Peeler has done on the standalone Heat container as a bootstrap mechanism. Once that container is up, you can use Heat to deploy to preprovisoned nodes that already have an OS installed. Not only would this be nice for POC's, there are also real use cases where dedicated provisioning networks are not available, or there's no access to ipmi/drac/whatever. It would also provide a solution on how to orchestrate an HA undercloud as well. Note that the node running the bootstrap Heat container itself could potentially be reused, providing for the true all-in-one. I do have some hacked on templates I was working with, and had made enough progress to where I was able to get the preprovisoned nodes to start applying the SoftwareDeployments from Heat after I manually configured os-collect-config on each node. I'll get those in order and push up a WIP patch. There are a lot of wrinkles here still, things like how to orchestrate the manual config you still have to do on each node (have to configure os-collect-config with a stack id), and assumptions on network setup, etc. > > The latter approach is quite interesting, because it potentially maintains > a greater degree of symmetry between the minimal PoC install and real > production deployments (e.g you'd use the same heat templates etc), it > could also potentially provide easier access to features as they are added > to overcloud templates (container integration, as an example), vs > integrating new features in two places. > > Overall at this point I think there are still many unanswered questions > around enabling the PoC use-case for rdo-manager (and, more generally > making TripleO upstream more easily consumable for these kinds of > use-cases). I hope/expect we'll have a TripleO session on this at the > forthcoming summit, where we refine the various ideas people have been > investigating, and define the path forward wrt PoC deployments. So I did just send out the etherpad link for our session planning for Tokyo this morning to openstack-dev :) https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions I'll add a bullet item about this point. > > Hopefully that is somewhat helpful, and thanks again for re-starting this > discussion! 
:) > > Steve > > [1] https://etherpad.openstack.org/p/noop-softwareconfig > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From ltoscano at redhat.com Tue Sep 8 14:53:45 2015 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 08 Sep 2015 16:53:45 +0200 Subject: [Rdo-list] [rdo-manager] Moving some rdo-manager components to git.openstack.org In-Reply-To: <20150908142406.GH12870@localhost.localdomain> References: <20150825190458.GG3271@localhost.localdomain> <20150908142406.GH12870@localhost.localdomain> Message-ID: <1557269.MzdAHn0T9U@whitebase.usersys.redhat.com> On Tuesday 08 of September 2015 10:24:06 James Slagle wrote: > > There are still some open reviews on gerrithub for instack-undercloud. I'm > going to individually comment on those and encourage folks to resubmit them > at review.openstack.org. If they're not all moved over in a couple of days, > I'll either resubmit them myself (preserving the original author), or > abandon them if they're no longer relevant. Trying to resubmit a patch, I think you need to fix the .gitreview file. Ciao -- Luigi From jslagle at redhat.com Tue Sep 8 17:33:26 2015 From: jslagle at redhat.com (James Slagle) Date: Tue, 8 Sep 2015 13:33:26 -0400 Subject: [Rdo-list] [rdo-manager] Moving some rdo-manager components to git.openstack.org In-Reply-To: <1557269.MzdAHn0T9U@whitebase.usersys.redhat.com> References: <20150825190458.GG3271@localhost.localdomain> <20150908142406.GH12870@localhost.localdomain> <1557269.MzdAHn0T9U@whitebase.usersys.redhat.com> Message-ID: <20150908173326.GM12870@localhost.localdomain> On Tue, Sep 08, 2015 at 04:53:45PM +0200, Luigi Toscano wrote: > On Tuesday 08 of September 2015 10:24:06 James Slagle wrote: > > > > There are still some open reviews on gerrithub for instack-undercloud. I'm > > going to individually comment on those and encourage folks to resubmit them > > at review.openstack.org. If they're not all moved over in a couple of days, > > I'll either resubmit them myself (preserving the original author), or > > abandon them if they're no longer relevant. > > Trying to resubmit a patch, I think you need to fix the .gitreview file. There's a patch out for that now: https://review.openstack.org/#/c/221412 > > Ciao > -- > Luigi > -- -- James Slagle -- From rbowen at redhat.com Tue Sep 8 18:42:06 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 8 Sep 2015 14:42:06 -0400 Subject: [Rdo-list] The week in RDO blogs: September 8 Message-ID: <55EF2BFE.4040005@redhat.com> Here's what RDO enthusiasts have been writing about over the past week. If you're writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you're not on my list, please let me know! How Red Hat?s OpenStack partner Networking Solutions Offer Choice and Performance, by Jonathan Gershater Successfully implementing an OpenStack cloud is more than just choosing an OpenStack distribution. With its community approach and rich ecosystem of vendors, OpenStack represents a viable option for cloud administrators who want to offer public-cloud-like infrastructure services in their own datacenter. Red Hat Enterprise Linux OpenStack Platform offers pluggable storage and networking options. 
This open approach is contrary to closed solutions such as VMware Integrated OpenStack (VIO), which only supports VMware NSX for L4-L7 networking or VMware Distributed switch for basic L2 networking ... read more at http://tm3.org/27 Analyzing awareness of MidoNet globally, by Sandro Mathys One thing that is hard to measure in open source projects is just how aware people are of your community or its software. It's even harder to find out where in the world people have heard about you so far. You might have a rough feeling, probably based on some facts like the country-specific TLDs of email addresses on your mailing lists. But that's very vague and only includes people actively participating (and there are more and less vocal people, too). ... read more at http://tm3.org/28 Kyle Mestery and the future of Neutron, by Rich Bowen At LinuxCon two weeks ago I had the privilege of chatting with Kyle about the future of Neutron. Kyle was a delight to interview, because he's obviously so passionate about his project. ... read (and listen) at http://tm3.org/29 RDO Juno DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1, by Boris Derzhavets Neutron DVR implements the fip-namespace on every Compute Node where the VMs are running. Thus VMs with FloatingIPs can forward the traffic to the External Network without routing it via the Network Node. (North-South Routing). ... read more at http://tm3.org/2a Managing OpenStack: Integration Matters! by Matt Hicks With so many advantages an Infrastructure-as-a-Service (IaaS) cloud provides businesses, it's great to see a transformation of IT happening across nearly all industries and markets. Nearly every enterprise is taking advantage of an "as-a-service" cloud in some form or another. And with this new infrastructure, it's now more important than ever to remember the critical role that management plays within this mix. Oddly enough, it is sometimes considered a second priority when customers begin investigating the benefits of an IaaS cloud, but quickly becomes your first priority when running one. ... read more at http://tm3.org/2b -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ichavero at redhat.com Tue Sep 8 19:37:20 2015 From: ichavero at redhat.com (Ivan Chavero) Date: Tue, 8 Sep 2015 15:37:20 -0400 (EDT) Subject: [Rdo-list] CentOS-RDO Remix Release In-Reply-To: References: Message-ID: <470618961.18992204.1441741040442.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "sad man" > To: rdo-list at redhat.com > Sent: Friday, August 28, 2015 2:21:19 PM > Subject: [Rdo-list] CentOS-RDO Remix Release > > Hi, I have been working on developing a CentOS-RDO remix as part of my GSoC > project with Rich Bowen. I have pushed the first release of the project and > the ISO is available at: > http://buildlogs.centos.org/gsoc2015/cloud-in-a-box/CentOS-7-x86_64-RDO-1503-2015082701.iso > . > The ISO allows the user to install RDO during CentOS installation via both > --allinone & --answer-file modes of packstack. cool i'm gonna test it! > Development involved writing an extension (addon) to the Anaconda Installer, > the source along with testing instructions is available at: > https://github.com/asadpiz/org_centos_cloud > > I would appreciate feedback/testing of this addon as the main motivation was > to provide a pre-configured, assumption-driven "cloud in a box" to the > users.
> > -- > Cheers, > > Asad > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From jpeeler at redhat.com Tue Sep 8 20:02:01 2015 From: jpeeler at redhat.com (Jeff Peeler) Date: Tue, 8 Sep 2015 16:02:01 -0400 Subject: [Rdo-list] packstack future In-Reply-To: <20150908144251.GJ12870@localhost.localdomain> References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch> <20150907130756.GA9590@t430slt.redhat.com> <20150908144251.GJ12870@localhost.localdomain> Message-ID: On Tue, Sep 8, 2015 at 10:42 AM, James Slagle wrote: > On Mon, Sep 07, 2015 at 02:07:56PM +0100, Steven Hardy wrote: > > Hi Tim, > > > > On Sun, Sep 06, 2015 at 07:35:30AM +0000, Tim Bell wrote: > > > Reading the RDO September newsletter, I noticed a mail thread > > > (https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html) > on > > > the future of packstack vs rdo-manager. > > > > > > We use packstack to spin up small OpenStack instances for > development and > > > testing. Typical cases are to have a look at the features of the > latest > > > releases or do some prototyping of an option we've not tried yet. > > > > > > It was not clear to me based on the mailing list thread as to how > this > > > could be done using rdo-manager unless you already have the > undercloud > > > configiured by RDO. > > > > > Has there been any further discussions around packstack future ? > > > > Thanks for raising this - I am aware that a number of folks have been > > thinking about this topic (myself included), but I don't think we've yet > > reached a definitive consensus re the path forward yet. > > > > Here's my view on the subject: > > > > 1. Packstack is clearly successful, useful to a lot of folks, and does > > satisfy a use-case currently not well served via rdo-manager, so IMO we > > absolutely should maintain it until that is no longer the case. > > > > 2. Many people are interested in easier ways to stand up PoC environments > > via rdo-manager, so we do need to work on ways to make that easier (or > even > > possible at all in the single-node case). > > > > 3. It would be really great if we could figure out (2) in such a way as > to > > enable a simple migration path from packstack to whatever the PoC mode of > > rdo-manager ends up being, for example perhaps we could have an rdo > manager > > interface which is capable of consuming a packstack answer file? > > > > Re the thread you reference, it raises a number of interesting questions, > > particularly the similarities/differences between an all-in-one packstack > > install and an all-in-one undercloud install; > > > > >From an abstract perspective, installing an all-in-one undercloud looks > a > > lot like installing an all-in-one packstack environment, both sets of > tools > > take a config file, and create a puppet-configured all-in-one OpenStack. > > > > But there's a lot of potential complexity related to providing a > > flexible/configurable deployment (like packstack) vs an opinionated > > bootstrap environment (e.g the current instack undercloud environment). > > Besides there being some TripleO related history (which I won't bore > everyone > with), the above is a big reason why we didn't just use packstack > originally to > install the all-in-one undercloud. > > As you point out, the undercloud installer is very opinionated by design. 
> It's > not meant to be a flexible all-in-one *OpenStack* installer, nor do I > think we > want to turn it into one. That would just end up in reimplementing > packstack. > > > > > There are a few possible approaches: > > > > - Do the work to enable a more flexibly configured undercloud, and just > > have that as the "all in one" solution > > -1 :). > > > - Have some sort of transient undercloud (I'm thinking a container) which > > exists only for the duration of deploying the all-in-one overcloud, on > > the local (pre-provisioned, e.g not via Ironic) host. Some prototyping > > of this approach has already happened [1] which I think James Slagle > has > > used to successfully deploy TripleO templates on pre-provisioned nodes. > > Right, so my thinking was to leverage the work (or some part of it) that > Jeff > Peeler has done on the standalone Heat container as a bootstrap mechanism. > Once > that container is up, you can use Heat to deploy to preprovisoned nodes > that > already have an OS installed. Not only would this be nice for POC's, there > are > also real use cases where dedicated provisioning networks are not > available, or > there's no access to ipmi/drac/whatever. > Perhaps off topic, but are people still interested in the Heat standalone container work? I never received any replies on the openstack-dev list when I asked for direction on how to best integrate with TripleO. I need to bring it up to date to work with recent changes of Kolla (and will do so if people are interested after getting the ironic containers completed). http://lists.openstack.org/pipermail/openstack-dev/2015-August/071613.html > > It would also provide a solution on how to orchestrate an HA undercloud as > well. > > Note that the node running the bootstrap Heat container itself could > potentially be reused, providing for the true all-in-one. > > I do have some hacked on templates I was working with, and had made enough > progress to where I was able to get the preprovisoned nodes to start > applying the > SoftwareDeployments from Heat after I manually configured > os-collect-config on > each node. > > I'll get those in order and push up a WIP patch. > > There are a lot of wrinkles here still, things like how to orchestrate the > manual config you still have to do on each node (have to configure > os-collect-config with a stack id), and assumptions on network setup, etc. > > > > > The latter approach is quite interesting, because it potentially > maintains > > a greater degree of symmetry between the minimal PoC install and real > > production deployments (e.g you'd use the same heat templates etc), it > > could also potentially provide easier access to features as they are > added > > to overcloud templates (container integration, as an example), vs > > integrating new features in two places. > > > > Overall at this point I think there are still many unanswered questions > > around enabling the PoC use-case for rdo-manager (and, more generally > > making TripleO upstream more easily consumable for these kinds of > > use-cases). I hope/expect we'll have a TripleO session on this at the > > forthcoming summit, where we refine the various ideas people have been > > investigating, and define the path forward wrt PoC deployments. > > So I did just send out the etherpad link for our session planning for Tokyo > this morning to openstack-dev :) > > https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions > > I'll add a bullet item about this point. 
> > > > > Hopefully that is somewhat helpful, and thanks again for re-starting this > > discussion! :) > > > > Steve > > > > [1] https://etherpad.openstack.org/p/noop-softwareconfig > > > -- > -- James Slagle > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslagle at redhat.com Tue Sep 8 20:52:55 2015 From: jslagle at redhat.com (James Slagle) Date: Tue, 8 Sep 2015 16:52:55 -0400 Subject: [Rdo-list] packstack future In-Reply-To: References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch> <20150907130756.GA9590@t430slt.redhat.com> <20150908144251.GJ12870@localhost.localdomain> Message-ID: <20150908205255.GN12870@localhost.localdomain> On Tue, Sep 08, 2015 at 04:02:01PM -0400, Jeff Peeler wrote: > On Tue, Sep 8, 2015 at 10:42 AM, James Slagle wrote: > > > On Mon, Sep 07, 2015 at 02:07:56PM +0100, Steven Hardy wrote: > > > Hi Tim, > > > > > > On Sun, Sep 06, 2015 at 07:35:30AM +0000, Tim Bell wrote: > > > > Reading the RDO September newsletter, I noticed a mail thread > > > > (https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html) > > on > > > > the future of packstack vs rdo-manager. > > > > > > > > We use packstack to spin up small OpenStack instances for > > development and > > > > testing. Typical cases are to have a look at the features of the > > latest > > > > releases or do some prototyping of an option we've not tried yet. > > > > > > > > It was not clear to me based on the mailing list thread as to how > > this > > > > could be done using rdo-manager unless you already have the > > undercloud > > > > configiured by RDO. > > > > > > > Has there been any further discussions around packstack future ? > > > > > > Thanks for raising this - I am aware that a number of folks have been > > > thinking about this topic (myself included), but I don't think we've yet > > > reached a definitive consensus re the path forward yet. > > > > > > Here's my view on the subject: > > > > > > 1. Packstack is clearly successful, useful to a lot of folks, and does > > > satisfy a use-case currently not well served via rdo-manager, so IMO we > > > absolutely should maintain it until that is no longer the case. > > > > > > 2. Many people are interested in easier ways to stand up PoC environments > > > via rdo-manager, so we do need to work on ways to make that easier (or > > even > > > possible at all in the single-node case). > > > > > > 3. It would be really great if we could figure out (2) in such a way as > > to > > > enable a simple migration path from packstack to whatever the PoC mode of > > > rdo-manager ends up being, for example perhaps we could have an rdo > > manager > > > interface which is capable of consuming a packstack answer file? > > > > > > Re the thread you reference, it raises a number of interesting questions, > > > particularly the similarities/differences between an all-in-one packstack > > > install and an all-in-one undercloud install; > > > > > > >From an abstract perspective, installing an all-in-one undercloud looks > > a > > > lot like installing an all-in-one packstack environment, both sets of > > tools > > > take a config file, and create a puppet-configured all-in-one OpenStack. > > > > > > But there's a lot of potential complexity related to providing a > > > flexible/configurable deployment (like packstack) vs an opinionated > > > bootstrap environment (e.g the current instack undercloud environment). 
> > > > Besides there being some TripleO related history (which I won't bore > > everyone > > with), the above is a big reason why we didn't just use packstack > > originally to > > install the all-in-one undercloud. > > > > As you point out, the undercloud installer is very opinionated by design. > > It's > > not meant to be a flexible all-in-one *OpenStack* installer, nor do I > > think we > > want to turn it into one. That would just end up in reimplementing > > packstack. > > > > > > > > There are a few possible approaches: > > > > > > - Do the work to enable a more flexibly configured undercloud, and just > > > have that as the "all in one" solution > > > > -1 :). > > > > > - Have some sort of transient undercloud (I'm thinking a container) which > > > exists only for the duration of deploying the all-in-one overcloud, on > > > the local (pre-provisioned, e.g not via Ironic) host. Some prototyping > > > of this approach has already happened [1] which I think James Slagle > > has > > > used to successfully deploy TripleO templates on pre-provisioned nodes. > > > > Right, so my thinking was to leverage the work (or some part of it) that > > Jeff > > Peeler has done on the standalone Heat container as a bootstrap mechanism. > > Once > > that container is up, you can use Heat to deploy to preprovisoned nodes > > that > > already have an OS installed. Not only would this be nice for POC's, there > > are > > also real use cases where dedicated provisioning networks are not > > available, or > > there's no access to ipmi/drac/whatever. > > > > Perhaps off topic, but are people still interested in the Heat standalone > container work? I never received any replies on the openstack-dev list when > I asked for direction on how to best integrate with TripleO. I need to > bring it up to date to work with recent changes of Kolla (and will do so if > people are interested after getting the ironic containers completed). > > http://lists.openstack.org/pipermail/openstack-dev/2015-August/071613.html Sorry for not replying there when you sent that. I think we're likely just getting to a point where we can start to have some time to think about how an approach like this might integrate with TripleO. I feel the idea has merit and is worth exploring. But, it's up for discussion as to what everyone feels is the best solution and how much interest there might actually be in such an approach. Sorry I can't be more definitive than that. I think solving for: - undercloud HA - easier POC's - containerization are all going to be areas with some focus over the next design cycle, and personally I like the approach you proposed in that it has the potential to address all 3 of those in some fashion. > > > > > > It would also provide a solution on how to orchestrate an HA undercloud as > > well. > > > > Note that the node running the bootstrap Heat container itself could > > potentially be reused, providing for the true all-in-one. > > > > I do have some hacked on templates I was working with, and had made enough > > progress to where I was able to get the preprovisoned nodes to start > > applying the > > SoftwareDeployments from Heat after I manually configured > > os-collect-config on > > each node. > > > > I'll get those in order and push up a WIP patch. > > > > There are a lot of wrinkles here still, things like how to orchestrate the > > manual config you still have to do on each node (have to configure > > os-collect-config with a stack id), and assumptions on network setup, etc. 
> > > > > > > > The latter approach is quite interesting, because it potentially > > maintains > > > a greater degree of symmetry between the minimal PoC install and real > > > production deployments (e.g you'd use the same heat templates etc), it > > > could also potentially provide easier access to features as they are > > added > > > to overcloud templates (container integration, as an example), vs > > > integrating new features in two places. > > > > > > Overall at this point I think there are still many unanswered questions > > > around enabling the PoC use-case for rdo-manager (and, more generally > > > making TripleO upstream more easily consumable for these kinds of > > > use-cases). I hope/expect we'll have a TripleO session on this at the > > > forthcoming summit, where we refine the various ideas people have been > > > investigating, and define the path forward wrt PoC deployments. > > > > So I did just send out the etherpad link for our session planning for Tokyo > > this morning to openstack-dev :) > > > > https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions > > > > I'll add a bullet item about this point. > > > > > > > > Hopefully that is somewhat helpful, and thanks again for re-starting this > > > discussion! :) > > > > > > Steve > > > > > > [1] https://etherpad.openstack.org/p/noop-softwareconfig > > > > > -- > > -- James Slagle > > -- -- -- James Slagle -- From ibravo at ltgfederal.com Tue Sep 8 21:07:48 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Tue, 8 Sep 2015 17:07:48 -0400 Subject: [Rdo-list] RDO Manager GUI install with Ceph Message-ID: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> All, I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) and the deployment was unsuccessful. It seems that there is a particular bug currently with deploying a Ceph based storage with the GUI, so I wanted to ask the list if 1. Indeed this was the case. 2. How to delete my configuration and redeploy using the CLI 3. Finally, if there is any scripted or explained way to perform an HA installation. I read the reference to github, but this seems to be more about the components but there was not a step by step instruction/ explanation. Thanks! IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Tue Sep 8 21:29:15 2015 From: marius at remote-lab.net (Marius Cornea) Date: Tue, 8 Sep 2015 23:29:15 +0200 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> Message-ID: Hi Ignacio, Yes, I believe ceph currently only works with using direct heat templates. You can check the instruction on how to get it deployed via cli here [1] Make sure you select the Ceph role on the environment specific content (left side column). To delete existing deployments run 'heat stack-delete overcloud' on the undercloud node with the credentials in the stackrc file loaded. In order to get a HA deployment you just need to deploy 3 controllers by passing '--control-scale 3' to the 'openstack overcloud deploy' command. [1] https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo wrote: > All, > > I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) > and the deployment was unsuccessful. 
It seems that there is a particular bug > currently with deploying a Ceph-based storage with the GUI, so I wanted to > ask the list if > > 1. Indeed this was the case. > 2. How to delete my configuration and redeploy using the CLI > 3. Finally, if there is any scripted or explained way to perform an HA > installation. I read the reference to github, but this seems to be more > about the components and there was not a step-by-step instruction/ > explanation. > > > Thanks! > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ichavero at redhat.com Tue Sep 8 21:31:45 2015 From: ichavero at redhat.com (Ivan Chavero) Date: Tue, 8 Sep 2015 17:31:45 -0400 (EDT) Subject: [Rdo-list] packstack future In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch> References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch> Message-ID: <1635431962.19038805.1441747905216.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Tim Bell" > To: rdo-list at redhat.com > Sent: Sunday, September 6, 2015 1:35:30 AM > Subject: [Rdo-list] packstack future > > > > > > Reading the RDO September newsletter, I noticed a mail thread ( > https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html ) on the > future of packstack vs rdo-manager. > > > > We use packstack to spin up small OpenStack instances for development and > testing. Typical cases are to have a look at the features of the latest > releases or do some prototyping of an option we've not tried yet. > > > > It was not clear to me based on the mailing list thread as to how this could > be done using rdo-manager unless you already have the undercloud configured > by RDO. > > > > Have there been any further discussions around packstack's future? > my understanding is that packstack will still be a PoC tool for at least two more years. > > Tim > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From yeylon at redhat.com Wed Sep 9 12:42:54 2015 From: yeylon at redhat.com (Yaniv Eylon) Date: Wed, 9 Sep 2015 08:42:54 -0400 (EDT) Subject: [Rdo-list] RDO test days In-Reply-To: <55E9C26D.3060608@redhat.com> References: <55E9C26D.3060608@redhat.com> Message-ID: <1117880054.27021889.1441802574927.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Rich Bowen" > To: rdo-list at redhat.com > Sent: Friday, September 4, 2015 7:10:21 PM > Subject: [Rdo-list] RDO test days > > We'd like to do a couple of test days leading up to the Liberty release, > to make sure we're in good shape. > > We were thinking maybe one in the latter half of September - say, Sep 23 > or 24? - to test a Delorean snapshot that has passed CI. Sep. 23rd is probably not a good date as we have a holiday in Israel and will not be able to join. > > And then another test day on October 15th/16th for GA. The GA release is > to be on the 15th, so we'd be testing with what will be in that release. > (I'm reluctant to do this the week after GA, simply because it's the > week before Summit, and people will be distracted and traveling.) > > Thoughts?
> > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Wed Sep 9 13:11:06 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 9 Sep 2015 09:11:06 -0400 Subject: [Rdo-list] RDO test days In-Reply-To: <1117880054.27021889.1441802574927.JavaMail.zimbra@redhat.com> References: <55E9C26D.3060608@redhat.com> <1117880054.27021889.1441802574927.JavaMail.zimbra@redhat.com> Message-ID: <55F02FEA.10705@redhat.com> On 09/09/2015 08:42 AM, Yaniv Eylon wrote: > > > ----- Original Message ----- >> From: "Rich Bowen" >> To: rdo-list at redhat.com >> Sent: Friday, September 4, 2015 7:10:21 PM >> Subject: [Rdo-list] RDO test days >> >> We'd like to do a couple of test days leading up to the Liberty release, >> to make sure we're in good shape. >> >> We were thinking maybe one in the latter half of September - say, Sep 23 >> or 24? - to test a Delorean snapshot that has passed CI. > > Sep. 23rd is probably not a good date as we have a holiday in Istael and will not be able to join. Do you have an alternate date to recommend? > >> >> And then another test day on October 15th/16th for GA. The GA release is >> to be on the 15th, so we'd be testing with what will be in that release. >> (I'm reluctant to do this the week after GA, simply because it's the >> week before Summit, and people will be distracted and traveling.) >> How about these dates? Do they work for you? -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From yeylon at redhat.com Wed Sep 9 13:13:29 2015 From: yeylon at redhat.com (Yaniv Eylon) Date: Wed, 9 Sep 2015 09:13:29 -0400 (EDT) Subject: [Rdo-list] RDO test days In-Reply-To: <55F02FEA.10705@redhat.com> References: <55E9C26D.3060608@redhat.com> <1117880054.27021889.1441802574927.JavaMail.zimbra@redhat.com> <55F02FEA.10705@redhat.com> Message-ID: <2066587884.27046215.1441804409813.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Rich Bowen" > To: "Yaniv Eylon" > Cc: rdo-list at redhat.com > Sent: Wednesday, September 9, 2015 4:11:06 PM > Subject: Re: [Rdo-list] RDO test days > > > > On 09/09/2015 08:42 AM, Yaniv Eylon wrote: > > > > > > ----- Original Message ----- > >> From: "Rich Bowen" > >> To: rdo-list at redhat.com > >> Sent: Friday, September 4, 2015 7:10:21 PM > >> Subject: [Rdo-list] RDO test days > >> > >> We'd like to do a couple of test days leading up to the Liberty release, > >> to make sure we're in good shape. > >> > >> We were thinking maybe one in the latter half of September - say, Sep 23 > >> or 24? - to test a Delorean snapshot that has passed CI. > > > > Sep. 23rd is probably not a good date as we have a holiday in Istael and > > will not be able to join. > > Do you have an alternate date to recommend? unfortunately due to a site shutdown on the following week the next best date would be Oct 7th-8th. > > > > >> > >> And then another test day on October 15th/16th for GA. The GA release is > >> to be on the 15th, so we'd be testing with what will be in that release. > >> (I'm reluctant to do this the week after GA, simply because it's the > >> week before Summit, and people will be distracted and traveling.) > >> > > How about these dates? Do they work for you? 
> > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > From chkumar246 at gmail.com Wed Sep 9 13:42:10 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 9 Sep 2015 19:12:10 +0530 Subject: [Rdo-list] bug statistics for 2015-09-09 Message-ID: # RDO Bugs on 2015-09-09 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 269 - Fixed (MODIFIED, POST, ON_QA): 178 ## Number of open bugs by component diskimage-builder [ 4] +++ distribution [ 18] ++++++++++++++++ dnsmasq [ 1] instack [ 4] +++ instack-undercloud [ 23] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 5] ++++ openstack-cinder [ 13] +++++++++++ openstack-foreman-inst... [ 3] ++ openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 5] ++++ openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] ++++++ openstack-neutron [ 6] +++++ openstack-nova [ 17] +++++++++++++++ openstack-packstack [ 45] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] +++++++++ openstack-selinux [ 13] +++++++++++ openstack-swift [ 2] + openstack-tripleo [ 24] +++++++++++++++++++++ openstack-tripleo-heat... [ 5] ++++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 3] ++ openvswitch [ 1] python-glanceclient [ 1] python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] ++++ python-oslo-config [ 1] rdo-manager [ 22] +++++++++++++++++++ rdo-manager-cli [ 6] +++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. 
(269 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1228761 ] http://bugzilla.redhat.com/1228761 (NEW) Component: diskimage-builder Last change: 2015-06-10 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos ### distribution (18 bugs) [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1116972 ] http://bugzilla.redhat.com/1116972 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO website: libffi-devel is required to run Tempest (at least on CentOS 6.5) [1116974 ] http://bugzilla.redhat.com/1116974 (NEW) Component: distribution Last change: 2015-06-04 Summary: Running Tempest according to the instructions @ RDO website fails with missing tox.ini error [1116975 ] http://bugzilla.redhat.com/1116975 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO website: configuring TestR according to website, breaks Tox completely [1117007 ] http://bugzilla.redhat.com/1117007 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO website: newer python-nose is required to run Tempest (at least on CentOS 6.5) [update to http://open stack.redhat.com/Testing_IceHouse_using_Tempest] [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1187309 ] http://bugzilla.redhat.com/1187309 (NEW) Component: distribution Last change: 2015-05-08 Summary: New package - python-cliff-tablib [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1212223 ] http://bugzilla.redhat.com/1212223 (NEW) Component: distribution Last change: 2015-06-04 Summary: mariadb requires Requires: mariadb-libs(x86-64) = 1:5.5.35-3.el7 [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1226795 ] http://bugzilla.redhat.com/1226795 (NEW) Component: distribution Last change: 2015-09-01 Summary: RFE: Manila-UI Plugin support in Horizon [1243533 ] http://bugzilla.redhat.com/1243533 
(NEW) Component: distribution Last change: 2015-09-08 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1249035 ] http://bugzilla.redhat.com/1249035 (ASSIGNED) Component: distribution Last change: 2015-07-31 Summary: liberty missing python module unicodecsv [1258560 ] http://bugzilla.redhat.com/1258560 (ASSIGNED) Component: distribution Last change: 2015-09-09 Summary: /usr/share/openstack-dashboard/openstack_dashboard/temp lates/_stylesheets.html: /bin/sh: horizon.utils.scss_filter.HorizonScssFilter: command not found ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### instack (4 bugs) [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (23 bugs) [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." 
[1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. 
See log for details.', 'os-refresh-config')" [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (5 bugs) [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library ### openstack-cinder (13 bugs) [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1229551 ] 
http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (5 bugs) [1210821 ] http://bugzilla.redhat.com/1210821 (NEW) Component: openstack-horizon Last change: 2015-04-10 Summary: horizon should be using rdo logo instead of openstack's [1218896 ] http://bugzilla.redhat.com/1218896 (NEW) Component: openstack-horizon Last change: 2015-05-13 Summary: Remaining Horizon issues for kilo release [1218897 ] http://bugzilla.redhat.com/1218897 (NEW) Component: openstack-horizon Last change: 2015-05-11 Summary: new launch instance does not work with webroot other than '/' [1220070 ] http://bugzilla.redhat.com/1220070 (NEW) Component: openstack-horizon Last change: 2015-05-13 Summary: horizon requires manage.py compress to be run [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. 
(HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf ### openstack-neutron (6 bugs) [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (17 bugs) [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime [1123298 ] http://bugzilla.redhat.com/1123298 (NEW) Component: openstack-nova Last change: 2015-04-26 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allow get object after date exires [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesnt write to log [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though its disabled. 
### openstack-packstack (45 bugs) [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 
2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. 
[1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-08-26 Summary: nss.load missing from packstack, httpd unable to start. [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error ### openstack-puppet-modules (11 bugs) [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication ### openstack-selinux (13 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux [1249685 ] http://bugzilla.redhat.com/1249685 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: libffi should not require execmem when selinux is enabled [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux [1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional ### openstack-swift (2 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 
2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (24 bugs) [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Node registration fails silently if instackenv.json is badly formatted 
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix ### openstack-tripleo-heat-templates (5 bugs) [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1232015 ] http://bugzilla.redhat.com/1232015 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: instack-undercloud: one controller deployment: running "pcs status" - Error: cluster is not currently running on this node [1235508 ] http://bugzilla.redhat.com/1235508 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-25 Summary: Package update does not take puppet managed packages into account [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates ### openstack-utils (3 bugs) [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 
2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### python-glanceclient (1 bug) [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (22 bugs) [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1216981 ] http://bugzilla.redhat.com/1216981 
(ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-05-20 Summary: Read bit set for others for Openstack services directories in /etc [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon ### rdo-manager-cli (6 bugs) [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, or ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended.
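If you want to pull the same list locally, here is a minimal sketch using the python-bugzilla library. It assumes the bugs are filed under a product named "RDO" on bugzilla.redhat.com and that an anonymous query is enough for these public bugs; this is an illustration only, not necessarily how the report above is generated.

    #!/usr/bin/env python
    # Sketch: list RDO bugs currently in a "fixed" state
    # (MODIFIED, POST, ON_QA), grouped by component.
    # Assumes python-bugzilla is installed and the Bugzilla
    # product name is "RDO" (an assumption for illustration).
    import bugzilla

    bzapi = bugzilla.Bugzilla("bugzilla.redhat.com")
    query = bzapi.build_query(product="RDO",
                              status=["MODIFIED", "POST", "ON_QA"])
    bugs = bzapi.query(query)
    for bug in sorted(bugs, key=lambda b: (b.component, b.id)):
        print("[%s ] http://bugzilla.redhat.com/%s (%s)" % (bug.id, bug.id, bug.status))
        print("   Component: %s" % bug.component)
        print("   Summary: %s" % bug.summary)

An anonymous connection is enough for public bugs; credentials are only needed if you want to see private ones, and the same query pattern works for any other set of statuses.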
(178 bugs) ### distribution (5 bugs) [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (1 bug) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (5 bugs) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] ### openstack-glance (3 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue ### openstack-heat (3 bugs) [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: 
missing dependency in Heat delorean build ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (13 bugs) [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service ### openstack-nova (5 bugs) [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. 
[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options ### openstack-packstack (58 bugs) [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1006534 ] http://bugzilla.redhat.com/1006534 
(MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack 
Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
### openstack-puppet-modules (18 bugs) [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance ### openstack-sahara (1 bug) [1184522 ] http://bugzilla.redhat.com/1184522 (MODIFIED) Component: openstack-sahara Last change: 2015-03-27 Summary: launch_command.py missing ### openstack-selinux (12 bugs) [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux 
policies set for neutron-dhcp-agent [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo (1 bug) [1162333 ] http://bugzilla.redhat.com/1162333 (ON_QA) Component: openstack-tripleo Last change: 2015-06-02 Summary: Instack fails to complete instack-virt-setup with syntax error near unexpected token `newline' ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py ### openstack-utils 
(2 bugs) [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager ### python-cinderclient (2 bugs) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run [1260154 ] http://bugzilla.redhat.com/1260154 (ON_QA) Component: python-cinderclient Last change: 2015-09-06 Summary: missing dependency on keystoneclient ### python-django-horizon (3 bugs) [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ ### python-django-openstack-auth (3 bugs) [1218894 ] http://bugzilla.redhat.com/1218894 (ON_QA) Component: python-django-openstack-auth Last change: 2015-06-24 Summary: Horizon: Re login failed after timeout [1218899 ] http://bugzilla.redhat.com/1218899 (ON_QA) Component: python-django-openstack-auth Last change: 2015-06-24 Summary: permission checks issue / not properly checking enabled services [1232683 ] http://bugzilla.redhat.com/1232683 (MODIFIED) Component: python-django-openstack-auth Last change: 2015-09-02 Summary: horizon manage.py syncdb errors on "App 'openstack_auth' doesn't have a 'user' model." 
### python-glanceclient (3 bugs) [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1244291 ] http://bugzilla.redhat.com/1244291 (MODIFIED) Component: python-glanceclient Last change: 2015-08-01 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion ### python-neutronclient (3 bugs) [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (5 bugs) [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: 
instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason ### rdo-manager-cli (8 bugs) [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pmyers at redhat.com Wed Sep 9 14:37:57 2015 From: pmyers at redhat.com (Perry Myers) Date: Wed, 9 Sep 2015 10:37:57 -0400 Subject: [Rdo-list] RDO test days In-Reply-To: <2066587884.27046215.1441804409813.JavaMail.zimbra@redhat.com> References: <55E9C26D.3060608@redhat.com> <1117880054.27021889.1441802574927.JavaMail.zimbra@redhat.com> <55F02FEA.10705@redhat.com> <2066587884.27046215.1441804409813.JavaMail.zimbra@redhat.com> Message-ID: <55F04445.5030502@redhat.com> On 09/09/2015 09:13 AM, Yaniv Eylon wrote: > > > ----- Original Message ----- >> From: "Rich Bowen" >> To: "Yaniv Eylon" >> Cc: rdo-list at redhat.com >> Sent: Wednesday, September 9, 2015 4:11:06 PM >> Subject: Re: [Rdo-list] RDO test days >> >> >> >> On 09/09/2015 08:42 AM, Yaniv Eylon wrote: >>> >>> >>> ----- Original Message ----- >>>> From: "Rich Bowen" >>>> To: rdo-list at redhat.com >>>> Sent: Friday, September 4, 2015 7:10:21 PM >>>> Subject: [Rdo-list] RDO test days >>>> >>>> We'd like to do a couple of test days leading up to the Liberty release, >>>> to make sure we're in good shape. >>>> >>>> We were thinking maybe one in the latter half of September - say, Sep 23 >>>> or 24? - to test a Delorean snapshot that has passed CI. >>> >>> Sep. 23rd is probably not a good date as we have a holiday in Istael and >>> will not be able to join. >> >> Do you have an alternate date to recommend? > > unfortunately due to a site shutdown on the following week the next best date would be Oct 7th-8th. I think that's pushing things too late. I suggest we proceed then with the originally planned dates with whoever can participate and then we can of course do another test day later where others can participate in October sometime >> >>> >>>> >>>> And then another test day on October 15th/16th for GA. The GA release is >>>> to be on the 15th, so we'd be testing with what will be in that release. >>>> (I'm reluctant to do this the week after GA, simply because it's the >>>> week before Summit, and people will be distracted and traveling.) >>>> >> >> How about these dates? Do they work for you? >> >> >> -- >> Rich Bowen - rbowen at redhat.com >> OpenStack Community Liaison >> http://rdoproject.org/ >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ibravo at ltgfederal.com Wed Sep 9 14:47:00 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 9 Sep 2015 10:47:00 -0400 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> Message-ID: <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> Thanks. I was able to delete the existing deployment, but when trying to deploy again using the CLI I got an authentication error. Ideas? [root at bl16 ~]# su stack [stack at bl16 root]$ cd [stack at bl16 ~]$ cd ~ [stack at bl16 ~]$ source stackrc [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 --ntp-server 192.168.10.1 --templates -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates ERROR: openstack ERROR: Authentication failed. 
Please try again with option --include-password or export HEAT_INCLUDE_PASSWORD=1 Authentication required __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: > > Hi Ignacio, > > Yes, I believe ceph currently only works with using direct heat > templates. You can check the instruction on how to get it deployed via > cli here [1] Make sure you select the Ceph role on the environment > specific content (left side column). > > To delete existing deployments run 'heat stack-delete overcloud' on > the undercloud node with the credentials in the stackrc file loaded. > > In order to get a HA deployment you just need to deploy 3 controllers > by passing '--control-scale 3' to the 'openstack overcloud deploy' > command. > > [1] https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html > > On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo wrote: >> All, >> >> I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) >> and the deployment was unsuccessful. It seems that there is a particular bug >> currently with deploying a Ceph based storage with the GUI, so I wanted to >> ask the list if >> >> 1. Indeed this was the case. >> 2. How to delete my configuration and redeploy using the CLI >> 3. Finally, if there is any scripted or explained way to perform an HA >> installation. I read the reference to github, but this seems to be more >> about the components but there was not a step by step instruction/ >> explanation. >> >> >> Thanks! >> IB >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> Office: (703) 951-7760 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From yeylon at redhat.com Wed Sep 9 14:55:47 2015 From: yeylon at redhat.com (yaniv eylon) Date: Wed, 9 Sep 2015 17:55:47 +0300 Subject: [Rdo-list] RDO test days In-Reply-To: <55F04445.5030502@redhat.com> References: <55E9C26D.3060608@redhat.com> <1117880054.27021889.1441802574927.JavaMail.zimbra@redhat.com> <55F02FEA.10705@redhat.com> <2066587884.27046215.1441804409813.JavaMail.zimbra@redhat.com> <55F04445.5030502@redhat.com> Message-ID: <499D8CA4-0CF5-49B1-A80A-F3E2B42CED47@redhat.com> > On Sep 9, 2015, at 5:37 PM, Perry Myers wrote: > > On 09/09/2015 09:13 AM, Yaniv Eylon wrote: >> >> >> ----- Original Message ----- >>> From: "Rich Bowen" >>> To: "Yaniv Eylon" >>> Cc: rdo-list at redhat.com >>> Sent: Wednesday, September 9, 2015 4:11:06 PM >>> Subject: Re: [Rdo-list] RDO test days >>> >>> >>> >>> On 09/09/2015 08:42 AM, Yaniv Eylon wrote: >>>> >>>> >>>> ----- Original Message ----- >>>>> From: "Rich Bowen" >>>>> To: rdo-list at redhat.com >>>>> Sent: Friday, September 4, 2015 7:10:21 PM >>>>> Subject: [Rdo-list] RDO test days >>>>> >>>>> We'd like to do a couple of test days leading up to the Liberty release, >>>>> to make sure we're in good shape. >>>>> >>>>> We were thinking maybe one in the latter half of September - say, Sep 23 >>>>> or 24? - to test a Delorean snapshot that has passed CI. >>>> >>>> Sep. 23rd is probably not a good date as we have a holiday in Istael and >>>> will not be able to join. >>> >>> Do you have an alternate date to recommend? 
>> >> unfortunately due to a site shutdown on the following week the next best date would be Oct 7th-8th. > > I think that's pushing things too late. > > I suggest we proceed then with the originally planned dates with whoever > can participate and then we can of course do another test day later > where others can participate in October sometime ACK. we?ll try and get everything ready from our side in regards to updating the test matrix on the web page with topologies and scenarios we would like to see being covered in this test day and assign people from our group out side TLV to them. > >>> >>>> >>>>> >>>>> And then another test day on October 15th/16th for GA. The GA release is >>>>> to be on the 15th, so we'd be testing with what will be in that release. >>>>> (I'm reluctant to do this the week after GA, simply because it's the >>>>> week before Summit, and people will be distracted and traveling.) >>>>> >>> >>> How about these dates? Do they work for you? >>> >>> >>> -- >>> Rich Bowen - rbowen at redhat.com >>> OpenStack Community Liaison >>> http://rdoproject.org/ >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com Yaniv. From marius at remote-lab.net Wed Sep 9 15:10:21 2015 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 9 Sep 2015 17:10:21 +0200 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> Message-ID: Never hit that. Did you try export HEAT_INCLUDE_PASSWORD=1 and rerun deploy? On Wed, Sep 9, 2015 at 4:47 PM, Ignacio Bravo wrote: > Thanks. I was able to delete the existing deployment, but when trying to > deploy again using the CLI I got an authentication error. Ideas? > > [root at bl16 ~]# su stack > [stack at bl16 root]$ cd > [stack at bl16 ~]$ cd ~ > [stack at bl16 ~]$ source stackrc > [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 > --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 --ntp-server > 192.168.10.1 --templates -e > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ERROR: openstack ERROR: Authentication failed. Please try again with option > --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required > > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: > > Hi Ignacio, > > Yes, I believe ceph currently only works with using direct heat > templates. You can check the instruction on how to get it deployed via > cli here [1] Make sure you select the Ceph role on the environment > specific content (left side column). > > To delete existing deployments run 'heat stack-delete overcloud' on > the undercloud node with the credentials in the stackrc file loaded. > > In order to get a HA deployment you just need to deploy 3 controllers > by passing '--control-scale 3' to the 'openstack overcloud deploy' > command. 
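For readers following along, the delete-and-redeploy sequence being discussed here might look roughly like this on the undercloud (a sketch only, reusing the stack name, flavor, scale counts and NTP server already quoted in this thread; adjust for your own environment):

# on the undercloud, with the stack user's credentials loaded
source ~/stackrc
export HEAT_INCLUDE_PASSWORD=1

# remove the failed overcloud stack, then check that it is gone
heat stack-delete overcloud
heat stack-list

# redeploy with 3 controllers (HA), 2 computes and 3 Ceph storage nodes
openstack overcloud deploy --templates \
  --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
  --compute-flavor Compute_24 --ntp-server 192.168.10.1 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml

Exporting HEAT_INCLUDE_PASSWORD=1 before the redeploy matches the workaround suggested in the error message quoted above.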
> > [1] > https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html > > On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo > wrote: > > All, > > I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) > and the deployment was unsuccessful. It seems that there is a particular bug > currently with deploying a Ceph based storage with the GUI, so I wanted to > ask the list if > > 1. Indeed this was the case. > 2. How to delete my configuration and redeploy using the CLI > 3. Finally, if there is any scripted or explained way to perform an HA > installation. I read the reference to github, but this seems to be more > about the components but there was not a step by step instruction/ > explanation. > > > Thanks! > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From ibravo at ltgfederal.com Wed Sep 9 15:33:44 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 9 Sep 2015 11:33:44 -0400 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> Message-ID: <7AB36331-2DA6-4791-BD18-2B9667EC93C0@ltgfederal.com> I?m currently doing just that. It takes some time for the process to run, so I will post as soon as I gen an answer. __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Sep 9, 2015, at 11:10 AM, Marius Cornea wrote: > > Never hit that. Did you try export HEAT_INCLUDE_PASSWORD=1 and rerun deploy? > > On Wed, Sep 9, 2015 at 4:47 PM, Ignacio Bravo wrote: >> Thanks. I was able to delete the existing deployment, but when trying to >> deploy again using the CLI I got an authentication error. Ideas? >> >> [root at bl16 ~]# su stack >> [stack at bl16 root]$ cd >> [stack at bl16 ~]$ cd ~ >> [stack at bl16 ~]$ source stackrc >> [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 >> --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 --ntp-server >> 192.168.10.1 --templates -e >> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml >> Deploying templates in the directory >> /usr/share/openstack-tripleo-heat-templates >> ERROR: openstack ERROR: Authentication failed. Please try again with option >> --include-password or export HEAT_INCLUDE_PASSWORD=1 >> Authentication required >> >> >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> Office: (703) 951-7760 >> >> On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: >> >> Hi Ignacio, >> >> Yes, I believe ceph currently only works with using direct heat >> templates. You can check the instruction on how to get it deployed via >> cli here [1] Make sure you select the Ceph role on the environment >> specific content (left side column). >> >> To delete existing deployments run 'heat stack-delete overcloud' on >> the undercloud node with the credentials in the stackrc file loaded. >> >> In order to get a HA deployment you just need to deploy 3 controllers >> by passing '--control-scale 3' to the 'openstack overcloud deploy' >> command. 
>> >> [1] >> https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html >> >> On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo >> wrote: >> >> All, >> >> I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) >> and the deployment was unsuccessful. It seems that there is a particular bug >> currently with deploying a Ceph based storage with the GUI, so I wanted to >> ask the list if >> >> 1. Indeed this was the case. >> 2. How to delete my configuration and redeploy using the CLI >> 3. Finally, if there is any scripted or explained way to perform an HA >> installation. I read the reference to github, but this seems to be more >> about the components but there was not a step by step instruction/ >> explanation. >> >> >> Thanks! >> IB >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> Office: (703) 951-7760 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Wed Sep 9 15:40:06 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 9 Sep 2015 11:40:06 -0400 (EDT) Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> Message-ID: <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> Did that error happen after a long time, like 4 hours? I have seen that error when the deploy was actually hung, and the token timeout gets reached and then every API call gets an authentication failed response. Unfortunately, you'll need to diagnose what part of the deployment is failing. Here's what I usually do: # Get state of all uncompleted resources heat resource-list overcloud -n 5 | grep -iv complete # Look closer at failed resources from above command heat resource-show nova list (then ssh as heat-admin to the nodes and check for network connectivity and errors in the logs) -Dan Sneddon ----- Original Message ----- > Never hit that. Did you try export HEAT_INCLUDE_PASSWORD=1 and rerun deploy? > > On Wed, Sep 9, 2015 at 4:47 PM, Ignacio Bravo wrote: > > Thanks. I was able to delete the existing deployment, but when trying to > > deploy again using the CLI I got an authentication error. Ideas? > > > > [root at bl16 ~]# su stack > > [stack at bl16 root]$ cd > > [stack at bl16 ~]$ cd ~ > > [stack at bl16 ~]$ source stackrc > > [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 > > --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 > > --ntp-server > > 192.168.10.1 --templates -e > > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > > Deploying templates in the directory > > /usr/share/openstack-tripleo-heat-templates > > ERROR: openstack ERROR: Authentication failed. Please try again with option > > --include-password or export HEAT_INCLUDE_PASSWORD=1 > > Authentication required > > > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > Office: (703) 951-7760 > > > > On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: > > > > Hi Ignacio, > > > > Yes, I believe ceph currently only works with using direct heat > > templates. 
You can check the instruction on how to get it deployed via > > cli here [1] Make sure you select the Ceph role on the environment > > specific content (left side column). > > > > To delete existing deployments run 'heat stack-delete overcloud' on > > the undercloud node with the credentials in the stackrc file loaded. > > > > In order to get a HA deployment you just need to deploy 3 controllers > > by passing '--control-scale 3' to the 'openstack overcloud deploy' > > command. > > > > [1] > > https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html > > > > On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo > > wrote: > > > > All, > > > > I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) > > and the deployment was unsuccessful. It seems that there is a particular > > bug > > currently with deploying a Ceph based storage with the GUI, so I wanted to > > ask the list if > > > > 1. Indeed this was the case. > > 2. How to delete my configuration and redeploy using the CLI > > 3. Finally, if there is any scripted or explained way to perform an HA > > installation. I read the reference to github, but this seems to be more > > about the components but there was not a step by step instruction/ > > explanation. > > > > > > Thanks! > > IB > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > Office: (703) 951-7760 > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From chkumar246 at gmail.com Wed Sep 9 16:07:18 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 9 Sep 2015 21:37:18 +0530 Subject: [Rdo-list] [meeting] RDO packaging meeting (2015-09-09) Message-ID: ======================================== #rdo: RDO packaging meeting (2015-09-09) ======================================== Meeting started by chandankumar at 15:01:03 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-09-09/rdo.2015-09-09-15.01.log.html . Meeting summary --------------- * roll call (chandankumar, 15:01:11) * status of python3 subpackages for python-oslo-* and python-* packages (chandankumar, 15:03:11) * LINK: https://etherpad.openstack.org/p/RDO-Packaging (number80, 15:03:17) * for now it py3 porting is in lower priority (chandankumar, 15:04:37) * Updating RDO docs from older openstack releases to latest openstack releases. 
(chandankumar, 15:06:59)
* ACTION: rbowen organize doc hack day (number80, 15:08:27)
* ACTION: jruzicka to update https://www.rdoproject.org/Clients (chandankumar, 15:08:50)
* New Package SCM requests (chandankumar, 15:19:06)
* I may steal some core reviews later (number80, 15:20:55)
* add apevec as owners in core deps and jruzicka for clients (chandankumar, 15:21:24)
* New packages under review (chandankumar, 15:22:47)
* LINK: python-django-formtools https://bugzilla.redhat.com/show_bug.cgi?id=1261134 (chandankumar, 15:23:28)
* LINK: python-tosca-parser https://bugzilla.redhat.com/show_bug.cgi?id=1261119 (chandankumar, 15:23:39)
* package rename (chandankumar, 15:29:43)
* ACTION: number80 work with trown on renamed packages (number80, 15:32:46)
* python-qpid status (chandankumar, 15:35:29)
* LINK: python-qpid https://fedorahosted.org/rel-eng/ticket/6218 (chandankumar, 15:36:32)
* python-qpid is still blocked in f24 (apevec, 15:37:19)
* ACTION: apevec to rebuild oslo.messaging without qpid dep (apevec, 15:42:29)
* ACTION: number80 to finish python-futurist review https://bugzilla.redhat.com/show_bug.cgi?id=1243052 (apevec, 15:45:17)
* delorean CI (chandankumar, 15:45:42)
* LINK: https://prod-rdojenkins.rhcloud.com/view/RDO-Liberty-Delorean-Trunk/ (chandankumar, 15:46:08)
* ACTION: apevec to switch Delorean Trunk to use http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/ + http://cbs.centos.org/repos/cloud7-openstack-common-testing/ (apevec, 15:48:28)
* xstatic bundles javascript libs which have a bundling exception (not fonts!) and all of them are missing a license file (please file tickets upstream) (chandankumar, 15:50:14)
* RDO Test day (chandankumar, 15:51:20)
* objective is to have no failing builds in delorean by the end of the week (number80, 15:51:56)
* RDO test day is on Sep 23/24, 2015 (chandankumar, 15:52:46)
* Oct 15/16, 2015 for testing release candidate (chandankumar, 15:54:20)
* ACTION: number80 reviewing midonet packages (number80, 15:56:15)
* Murano (server and client) and EC2-API RPMs (chandankumar, 15:58:08)
* LINK: openstack-dev mailing list http://lists.openstack.org/pipermail/openstack-dev/2015-September/073820.html (mflobo, 15:59:47)
* open floor (chandankumar, 16:01:38)

Meeting ended at 16:04:32 UTC.
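As a side note on the Delorean Trunk action item above: anyone who wants to point a host at those CBS repositories by hand can use a yum .repo stanza along these lines (a sketch only; the x86_64/os/ path suffix is the usual CBS layout but worth verifying, and gpgcheck is disabled here on the assumption that the testing repos are unsigned):

# /etc/yum.repos.d/cloud7-openstack-testing.repo (illustrative file name)
[cloud7-openstack-liberty-testing]
name=CentOS-7 - OpenStack Liberty (testing, CBS)
baseurl=http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/
enabled=1
gpgcheck=0

[cloud7-openstack-common-testing]
name=CentOS-7 - OpenStack common (testing, CBS)
baseurl=http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/
enabled=1
gpgcheck=0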
Action Items
------------
* rbowen organize doc hack day
* jruzicka to update https://www.rdoproject.org/Clients
* number80 work with trown on renamed packages
* apevec to rebuild oslo.messaging without qpid dep
* number80 to finish python-futurist review https://bugzilla.redhat.com/show_bug.cgi?id=1243052
* apevec to switch Delorean Trunk to use http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/ + http://cbs.centos.org/repos/cloud7-openstack-common-testing/
* number80 reviewing midonet packages

Action Items, by person
-----------------------
* apevec
  * apevec to rebuild oslo.messaging without qpid dep
  * apevec to switch Delorean Trunk to use http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/ + http://cbs.centos.org/repos/cloud7-openstack-common-testing/
* jruzicka
  * jruzicka to update https://www.rdoproject.org/Clients
* number80
  * number80 work with trown on renamed packages
  * number80 to finish python-futurist review https://bugzilla.redhat.com/show_bug.cgi?id=1243052
  * number80 reviewing midonet packages
* rbowen
  * rbowen organize doc hack day
* trown
  * number80 work with trown on renamed packages
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* chandankumar (87)
* number80 (46)
* jruzicka (31)
* apevec (29)
* rbowen (15)
* mflobo (15)
* trown (13)
* dtantsur (13)
* zodbot (9)
* dmsimard (8)
* jpena (7)
* derekh (6)
* mburned (1)
* xinwu_ (1)
* social (1)
* imcsk8 (1)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ibravo at ltgfederal.com Wed Sep 9 17:26:21 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 9 Sep 2015 13:26:21 -0400 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> Message-ID: <5EA2DA6D-B64F-460D-8CF8-B7757337FF3F@ltgfederal.com> Dan, Thanks for your tips. It seems like there is an issue with the networking piece, as that is where all the nodes are in building state. I have something similar to this for each one of the Controller nodes:

| NetworkDeployment | cac8c93b-b784-4a91-bc23-1a932bb1e62f | OS::TripleO::SoftwareDeployment | CREATE_IN_PROGRESS | 2015-09-09T15:22:10Z | 1 |
| UpdateDeployment | 97359c35-d2c7-4140-98ed-24525ee4be6b | OS::Heat::SoftwareDeployment | CREATE_IN_PROGRESS | 2015-09-09T15:22:10Z | 1 |

Following your advice, I was trying to ssh into the nodes, but didn't know what username/password combination to use. I tried root, heat-admin, stack with different password located in /home/stack/triple0-overcloud-passwords but none of the combinations seemed to work. BTW, I am using the instructions from https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html and installing in an HP c7000 Blade enclosure. Thanks, IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Sep 9, 2015, at 11:40 AM, Dan Sneddon wrote: > > Did that error happen after a long time, like 4 hours? I have seen that error > when the deploy was actually hung, and the token timeout gets reached and then > every API call gets an authentication failed response. Unfortunately, you'll > need to diagnose what part of the deployment is failing.
Here's what I usually > do: > > # Get state of all uncompleted resources > heat resource-list overcloud -n 5 | grep -iv complete > > # Look closer at failed resources from above command > heat resource-show > > nova list > (then ssh as heat-admin to the nodes and check for network connectivity and > errors in the logs) > > -Dan Sneddon > > ----- Original Message ----- >> Never hit that. Did you try export HEAT_INCLUDE_PASSWORD=1 and rerun deploy? >> >> On Wed, Sep 9, 2015 at 4:47 PM, Ignacio Bravo wrote: >>> Thanks. I was able to delete the existing deployment, but when trying to >>> deploy again using the CLI I got an authentication error. Ideas? >>> >>> [root at bl16 ~]# su stack >>> [stack at bl16 root]$ cd >>> [stack at bl16 ~]$ cd ~ >>> [stack at bl16 ~]$ source stackrc >>> [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 >>> --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 >>> --ntp-server >>> 192.168.10.1 --templates -e >>> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml >>> Deploying templates in the directory >>> /usr/share/openstack-tripleo-heat-templates >>> ERROR: openstack ERROR: Authentication failed. Please try again with option >>> --include-password or export HEAT_INCLUDE_PASSWORD=1 >>> Authentication required >>> >>> >>> >>> __ >>> Ignacio Bravo >>> LTG Federal, Inc >>> www.ltgfederal.com >>> Office: (703) 951-7760 >>> >>> On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: >>> >>> Hi Ignacio, >>> >>> Yes, I believe ceph currently only works with using direct heat >>> templates. You can check the instruction on how to get it deployed via >>> cli here [1] Make sure you select the Ceph role on the environment >>> specific content (left side column). >>> >>> To delete existing deployments run 'heat stack-delete overcloud' on >>> the undercloud node with the credentials in the stackrc file loaded. >>> >>> In order to get a HA deployment you just need to deploy 3 controllers >>> by passing '--control-scale 3' to the 'openstack overcloud deploy' >>> command. >>> >>> [1] >>> https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html >>> >>> On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo >>> wrote: >>> >>> All, >>> >>> I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) >>> and the deployment was unsuccessful. It seems that there is a particular >>> bug >>> currently with deploying a Ceph based storage with the GUI, so I wanted to >>> ask the list if >>> >>> 1. Indeed this was the case. >>> 2. How to delete my configuration and redeploy using the CLI >>> 3. Finally, if there is any scripted or explained way to perform an HA >>> installation. I read the reference to github, but this seems to be more >>> about the components but there was not a step by step instruction/ >>> explanation. >>> >>> >>> Thanks! >>> IB >>> >>> __ >>> Ignacio Bravo >>> LTG Federal, Inc >>> www.ltgfederal.com >>> Office: (703) 951-7760 >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jslagle at redhat.com Wed Sep 9 17:53:57 2015 From: jslagle at redhat.com (James Slagle) Date: Wed, 9 Sep 2015 13:53:57 -0400 Subject: [Rdo-list] openstack-puppet-modules master-patches branch update Message-ID: <20150909175357.GP12870@localhost.localdomain> Hi, I'm interested in seeing this patch from the master branch of openstack-puppet-modules: https://github.com/redhat-openstack/openstack-puppet-modules/commit/447059ae0ca4a69ab9171969a1f30962e886b1a9 make it into the master-patches branch so that we can get an updated build in delorean. This is needed to make upstream TripleO work with opm from delorean. Right now, we have to install the puppet modules from source. The specific puppet patch we need is: https://github.com/openstack/puppet-heat/commit/16b4eca4c95d7873ef510181f4a52592abeca24c Does anyone know the process to make this happen? -- -- James Slagle -- From dsneddon at redhat.com Wed Sep 9 18:03:09 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 9 Sep 2015 14:03:09 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Moving some rdo-manager components to git.openstack.org In-Reply-To: <20150908142406.GH12870@localhost.localdomain> References: <20150825190458.GG3271@localhost.localdomain> <20150908142406.GH12870@localhost.localdomain> Message-ID: <1776288708.23884752.1441821789017.JavaMail.zimbra@redhat.com> James, I'm having trouble submitting my review to git.openstack.org. I copied my changes to the new repo. When I run git-review, I end up with this error: """ No '.gitreview' file found in this repository. We don't know where your gerrit is. Please manually create a remote named "gerrit" and try again. """ I can set one up if you tell me what the content should be, but shouldn't this file be included in the repo? -Dan Sneddon ----- Original Message ----- > On Tue, Aug 25, 2015 at 03:04:58PM -0400, James Slagle wrote: > > Recently, folks have been working on moving the rdo-manager based workflow > > upstream into the TripleO project directly. > > > > This has been discussed on openstack-dev as part of this thread: > > http://lists.openstack.org/pipermail/openstack-dev/2015-July/070140.html > > Note that the thread spans into August as well. > > > > As part of this move, a patch has been proposed to move the following repos > > out > > from under github.com/rdo-management to git.openstack.org: > > > > instack > > instack-undercloud > > python-rdomanager-oscplugin (will be renamed in the process, probably > > to python-tripleoclient) > > > > The patch is here: https://review.openstack.org/#/c/215186 > > The above patch merged this morning, and the repos are now live in their new > locations. Note that python-rdomanager-oscplugin has a new name: > python-tripleoclient. > > The web links for the new repos are: > http://git.openstack.org/cgit/openstack/instack/ > http://git.openstack.org/cgit/openstack/instack-undercloud/ > http://git.openstack.org/cgit/openstack/python-tripleoclient/ > http://git.openstack.org/cgit/openstack/tripleo-docs/ > > Later today I'll be updating the gerrithub acl's for > instack/instack-undercloud > so that no new patches can be submitted there, as it should all be done > against > the upstream gerrit now (review.openstack.org). > > There are still some open reviews on gerrithub for instack-undercloud. I'm > going to individually comment on those and encourage folks to resubmit them > at review.openstack.org. 
If they're not all moved over in a couple of days, > I'll either resubmit them myself (preserving the original author), or abandon > them if they're no longer relevant. > > -- > -- James Slagle > -- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From marius at remote-lab.net Wed Sep 9 18:18:26 2015 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 9 Sep 2015 20:18:26 +0200 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: <5EA2DA6D-B64F-460D-8CF8-B7757337FF3F@ltgfederal.com> References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> <5EA2DA6D-B64F-460D-8CF8-B7757337FF3F@ltgfederal.com> Message-ID: For ssh access you can use the heat-admin user with the private key in /home/stack/.ssh/id_rsa On Wed, Sep 9, 2015 at 7:26 PM, Ignacio Bravo wrote: > Dan, > > Thanks for your tips. It seems like there is an issue with the networking > piece, as that is where all the nodes are in building state. I have > something similar to this for each one of the Controller nodes: > > | NetworkDeployment | cac8c93b-b784-4a91-bc23-1a932bb1e62f > | OS::TripleO::SoftwareDeployment | CREATE_IN_PROGRESS | > 2015-09-09T15:22:10Z | 1 | > | UpdateDeployment | 97359c35-d2c7-4140-98ed-24525ee4be6b > | OS::Heat::SoftwareDeployment | CREATE_IN_PROGRESS | > 2015-09-09T15:22:10Z | 1 | > > Following your advice, I was trying to ssh into the nodes, but didn?t know > what username/password combination to use. I tried root, heat-admin, stack > with different password located in /home/stack/triple0-overcloud-passwords > but none of the combinations seemed to work. > > BTW, I am using the instructions from > https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html > and installing in an HP c7000 Blade enclosure. > > Thanks, > IB > > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > On Sep 9, 2015, at 11:40 AM, Dan Sneddon wrote: > > Did that error happen after a long time, like 4 hours? I have seen that > error > when the deploy was actually hung, and the token timeout gets reached and > then > every API call gets an authentication failed response. Unfortunately, you'll > need to diagnose what part of the deployment is failing. Here's what I > usually > do: > > # Get state of all uncompleted resources > heat resource-list overcloud -n 5 | grep -iv complete > > # Look closer at failed resources from above command > heat resource-show > > nova list > (then ssh as heat-admin to the nodes and check for network connectivity and > errors in the logs) > > -Dan Sneddon > > ----- Original Message ----- > > Never hit that. Did you try export HEAT_INCLUDE_PASSWORD=1 and rerun deploy? > > On Wed, Sep 9, 2015 at 4:47 PM, Ignacio Bravo wrote: > > Thanks. I was able to delete the existing deployment, but when trying to > deploy again using the CLI I got an authentication error. Ideas? 
> > [root at bl16 ~]# su stack > [stack at bl16 root]$ cd > [stack at bl16 ~]$ cd ~ > [stack at bl16 ~]$ source stackrc > [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 > --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 > --ntp-server > 192.168.10.1 --templates -e > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ERROR: openstack ERROR: Authentication failed. Please try again with option > --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required > > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: > > Hi Ignacio, > > Yes, I believe ceph currently only works with using direct heat > templates. You can check the instruction on how to get it deployed via > cli here [1] Make sure you select the Ceph role on the environment > specific content (left side column). > > To delete existing deployments run 'heat stack-delete overcloud' on > the undercloud node with the credentials in the stackrc file loaded. > > In order to get a HA deployment you just need to deploy 3 controllers > by passing '--control-scale 3' to the 'openstack overcloud deploy' > command. > > [1] > https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html > > On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo > wrote: > > All, > > I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) > and the deployment was unsuccessful. It seems that there is a particular > bug > currently with deploying a Ceph based storage with the GUI, so I wanted to > ask the list if > > 1. Indeed this was the case. > 2. How to delete my configuration and redeploy using the CLI > 3. Finally, if there is any scripted or explained way to perform an HA > installation. I read the reference to github, but this seems to be more > about the components but there was not a step by step instruction/ > explanation. > > > Thanks! > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > From dsneddon at redhat.com Wed Sep 9 18:43:57 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 9 Sep 2015 14:43:57 -0400 (EDT) Subject: [Rdo-list] [rdo-manager] Moving some rdo-manager components to git.openstack.org In-Reply-To: <1776288708.23884752.1441821789017.JavaMail.zimbra@redhat.com> References: <20150825190458.GG3271@localhost.localdomain> <20150908142406.GH12870@localhost.localdomain> <1776288708.23884752.1441821789017.JavaMail.zimbra@redhat.com> Message-ID: <1132174724.23914842.1441824237029.JavaMail.zimbra@redhat.com> OK, I caught up on email and saw that the repo was updated with a new .gitreview file. 
The updated repo seems to work, but I had to run the following to enable this to work: ??git remote add gerrit ssh://dsneddon at review.openstack.org:29418/openstack/tripleo-docs (of course, edit the above command to contain your Launchpad username if you are running a similar command) -Dan Sneddon ----- Original Message ----- > James, > > I'm having trouble submitting my review to git.openstack.org. I copied my > changes to the new repo. When I run git-review, I end up with this error: > > """ > No '.gitreview' file found in this repository. We don't know where > your gerrit is. Please manually create a remote named "gerrit" and try > again. > """ > > I can set one up if you tell me what the content should be, but shouldn't > this file be included in the repo? > > -Dan Sneddon > > ----- Original Message ----- > > On Tue, Aug 25, 2015 at 03:04:58PM -0400, James Slagle wrote: > > > Recently, folks have been working on moving the rdo-manager based > > > workflow > > > upstream into the TripleO project directly. > > > > > > This has been discussed on openstack-dev as part of this thread: > > > http://lists.openstack.org/pipermail/openstack-dev/2015-July/070140.html > > > Note that the thread spans into August as well. > > > > > > As part of this move, a patch has been proposed to move the following > > > repos > > > out > > > from under github.com/rdo-management to git.openstack.org: > > > > > > instack > > > instack-undercloud > > > python-rdomanager-oscplugin (will be renamed in the process, probably > > > to python-tripleoclient) > > > > > > The patch is here: https://review.openstack.org/#/c/215186 > > > > The above patch merged this morning, and the repos are now live in their > > new > > locations. Note that python-rdomanager-oscplugin has a new name: > > python-tripleoclient. > > > > The web links for the new repos are: > > http://git.openstack.org/cgit/openstack/instack/ > > http://git.openstack.org/cgit/openstack/instack-undercloud/ > > http://git.openstack.org/cgit/openstack/python-tripleoclient/ > > http://git.openstack.org/cgit/openstack/tripleo-docs/ > > > > Later today I'll be updating the gerrithub acl's for > > instack/instack-undercloud > > so that no new patches can be submitted there, as it should all be done > > against > > the upstream gerrit now (review.openstack.org). > > > > There are still some open reviews on gerrithub for instack-undercloud. I'm > > going to individually comment on those and encourage folks to resubmit them > > at review.openstack.org. If they're not all moved over in a couple of days, > > I'll either resubmit them myself (preserving the original author), or > > abandon > > them if they're no longer relevant. 
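For reference, the usual way to avoid adding the gerrit remote by hand is a .gitreview file at the top of the repository. A minimal sketch for this repo, based on the remote URL above and the standard git-review format (the trailing .git on the project name is conventional), would be:

[gerrit]
host=review.openstack.org
port=29418
project=openstack/tripleo-docs.git

With that file in place, 'git review -s' can set up the gerrit remote automatically, as noted later in this thread.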
> > > > -- > > -- James Slagle > > -- > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From lars at redhat.com Wed Sep 9 19:00:35 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 9 Sep 2015 15:00:35 -0400 Subject: [Rdo-list] [rdo-manager] Moving some rdo-manager components to git.openstack.org In-Reply-To: <1132174724.23914842.1441824237029.JavaMail.zimbra@redhat.com> References: <20150825190458.GG3271@localhost.localdomain> <20150908142406.GH12870@localhost.localdomain> <1776288708.23884752.1441821789017.JavaMail.zimbra@redhat.com> <1132174724.23914842.1441824237029.JavaMail.zimbra@redhat.com> Message-ID: <20150909190035.GA20542@redhat.com> On Wed, Sep 09, 2015 at 02:43:57PM -0400, Dan Sneddon wrote: > OK, I caught up on email and saw that the repo was updated with a new > .gitreview file. The updated repo seems to work, but I had to run the > following to enable this to work: > > git remote add gerrit ssh://dsneddon at review.openstack.org:29418/openstack/tripleo-docs "git review -s" should do that for you. You may need "git config gitreview.username " if your remote username doesn't match your local username. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ibravo at ltgfederal.com Wed Sep 9 21:53:18 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 9 Sep 2015 17:53:18 -0400 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> <5EA2DA6D-B64F-460D-8CF8-B7757337FF3F@ltgfederal.com> Message-ID: <11FC8B09-058B-4257-BEEE-9F182CB2CF21@ltgfederal.com> I was able to move past the network issue (boot order of the servers have them booting from the 2nd NIC as well, causing them to register with Foreman, Ups!) 
Now the issue is deeper in the deployment: [stack at bl16 ~]$ heat resource-list --nested-depth 5 overcloud | grep FAILED | ComputeNodesPostDeployment | c1e5efe9-3a25-482c-91e2-dc443b334b92 | OS::TripleO::ComputePostDeployment | CREATE_FAILED | 2015-09-09T19:58:28Z | | | ControllerNodesPostDeployment | 702c7865-f7f5-4e23-bb89-0fda0fdbf290 | OS::TripleO::ControllerPostDeployment | CREATE_FAILED | 2015-09-09T19:58:28Z | | | ControllerOvercloudServicesDeployment_Step4 | 949468eb-920c-4970-9f99-d8c4baa077d1 | OS::Heat::StructuredDeployments | CREATE_FAILED | 2015-09-09T20:30:31Z | ControllerNodesPostDeployment | | 0 | 61528341-eb2d-471f-88fc-40b3f6ccbb1b | OS::Heat::StructuredDeployment | CREATE_FAILED | 2015-09-09T20:34:17Z | ControllerOvercloudServicesDeployment_Step4 | | 1 | 366fbfb8-933d-4122-b156-065f0a514dbb | OS::Heat::StructuredDeployment | CREATE_FAILED | 2015-09-09T20:34:17Z | ControllerOvercloudServicesDeployment_Step4 | | 2 | cc239277-50c2-4d94-952d-181ccf3199da | OS::Heat::StructuredDeployment | CREATE_FAILED | 2015-09-09T20:34:17Z | ControllerOvercloudServicesDeployment_Step4 | And looking at the error, it shows just one VERY LONG issue: [stack at bl16 ~]$ heat deployment-show cc239277-50c2-4d94-952d-181ccf3199da { "status": "FAILED", "server_id": "7fe469fa-34b9-4128-8651-2a6425ca8d06", "config_id": "96984e90-c1e7-4fe3-b89a-6576989df46c", "output_values": { "deploy_stdout": "\u001b[mNotice: Compiled catalog for overcloud-controller-2.localdomain in environment production in 13.70 seconds\u001b[0m\n\u001b[mNotice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}05503957e3796fbe6fddd756a7a102a0' to '{md5}f1a25c29dec68d565a05b5abe92a2f15'\u001b[0m\n\u001b[mNotice: /File[/etc/sysconfig/memcached]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Memcached/Service[memcached]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/debug]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/log_dir]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/amqp_durable_queues]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/notify_api_faults]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/memcached_servers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/notification_driver]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/verbose]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/File[/var/log/nova]/group: group changed 'root' to 'nova'\u001b[0m\n\u001b[mNotice: /File[/var/log/nova]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_ha_queues]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_auth_url]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/network_api_class]/ensure: 
created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_username]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_tenant_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/security_group_api]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url_timeout]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_chunk_size]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_virtual_host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/use_ssl]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_hosts]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/auth_region]/ensure: removed\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/kombu_reconnect_delay]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/File[/etc/neutron]/group: group changed 'root' to 'neutron'\u001b[0m\n\u001b[mNotice: /File[/etc/neutron]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/use_namespaces]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/admin_user]/value: value changed '%SERVICE_USER%' to 'neutron'\u001b[0m\n\u001b[mNotice: /File[/etc/neutron/neutron.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/use_syslog]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/auth_url]/value: value changed 'http://localhost:5000/v2.0' to 'http://192.168.10.11:35357/'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_backlog]/ensure: created\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Main/File[/etc/neutron/dnsmasq-neutron.conf]/ensure: defined content as '{md5}2c4080d983906582143a8bf302c46557'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/log_dir]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/rpc_backend]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/state_path]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_config_file]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/control_exchange]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_delete_namespaces]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/notify_on_state_change]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_domain]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Config/Nova_config[DEFAULT/default_floating_pool]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_virtual_host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/debug]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_use_ssl]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/verbose]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Scheduler/Nova_config[DEFAULT/scheduler_driver]/ensure: created\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/File[/etc/ceilometer/]/owner: owner changed 'root' to 'ceilometer'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/File[/etc/ceilometer/]/group: group changed 'root' to 'ceilometer'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/File[/etc/ceilometer/]/mode: mode changed '0755' to '0750'\u001b[0m\n\u001b[mNotice: /File[/etc/ceilometer/]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments.concat]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_use_ssl]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_sorting]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/mac_generation_retries]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_strategy]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_tenant_id]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/lock_path]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_ha_queues]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/lock_path]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_driver]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_bulk]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance/File[/etc/glance/]/owner: owner changed 'root' to 'glance'\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance/File[/etc/glance/]/mode: mode changed '0755' to '0770'\u001b[0m\n\u001b[mNotice: /File[/etc/glance/]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}f8abf835314a1775f891a7aa59842dcd'\u001b[0m\n\u001b[mNotice: /File[snmptrapd.sysconfig]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed 
'{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}b432e3530685b2b53034e4dc1be5193e'\u001b[0m\n\u001b[mNotice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'\u001b[0m\n\u001b[mNotice: /File[/etc/xinetd.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1b13c1ecc4c5ca6dfcca44ae60c2dc3a'\u001b[0m\n\u001b[mNotice: /File[snmpd.sysconfig]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}c07b9a377faea45b96b7d3bf8976004b' to '{md5}42ad956e1bf4ceddaa0c649cd705e28d'\u001b[0m\n\u001b[mNotice: /File[/etc/ntp.conf]/seltype: seltype changed 'etc_t' to 'net_conf_t'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/notification_topics]/ensure: created\u001b[0m\n\u001b[mNotice: /File[var-net-snmp]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Expirer/Cron[ceilometer-expirer]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ntp::Service/Service[ntp]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[osapi_v3/enabled]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_listen]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_user]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/volume_api_class]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_workers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_tenant_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/use_forwarded_for]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/auth_strategy]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_pagination]/ensure: created\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_lease_duration]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/cache_url]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/File[/srv/node]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.d/prefork.conf]/ensure: defined content as '{md5}109c4f51dac10fc1b39373855e566d01'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}2fa646fe615e44d137a5d629f868c107'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'\u001b[0m\n\u001b[mNotice: /File[/var/log/httpd]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}35506746efa82c4e203c8a724980bdc6'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}f5e7449c0f17bc856e86011cb5d152ba' to '{md5}d54157c1c91291b915633b4781af8fe1'\u001b[0m\n\u001b[mNotice: 
/File[/etc/httpd/conf/httpd.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'\u001b[0m\n\u001b[mNotice: /File[/var/log/horizon]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as 
'{md5}2996277c73b1cd684a9a3111c355e0d3'\u001b[0m\n\u001b[mNotice: /File[/var/www/]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /File[/var/www/html]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-file_footer]/File[/var/lib/puppet/concat/15-default.conf/fragments/999_default-file_footer]/ensure: defined content as '{md5}e27b2525783e590ca1820f1e2118285d'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-serversignature]/File[/var/lib/puppet/concat/15-default.conf/fragments/90_default-serversignature]/ensure: defined content as '{md5}9bf5a458783ab459e5043e1cdf671fa7'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-directories]/File[/var/lib/puppet/concat/15-default.conf/fragments/60_default-directories]/ensure: defined content as '{md5}5e2a84875965faa5e3df0e222301ba37'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-docroot]/File[/var/lib/puppet/concat/15-default.conf/fragments/10_default-docroot]/ensure: defined content as '{md5}6faaccbc7ca8bc885ebf139223885d52'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-scriptalias]/File[/var/lib/puppet/concat/15-default.conf/fragments/180_default-scriptalias]/ensure: defined content as '{md5}7fc65400381c3a010f38870f94f236f0'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-apache-header]/File[/var/lib/puppet/concat/15-default.conf/fragments/0_default-apache-header]/ensure: defined content as '{md5}5ca41370b5812e1b87dd74a4499c0192'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf/fragments.concat]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments.concat]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-directories]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/60_horizon_vhost-directories]/ensure: defined content as '{md5}18b24971675e526c8e3b1ea92849b1f4'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-file_footer]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/999_horizon_vhost-file_footer]/ensure: defined content as '{md5}e27b2525783e590ca1820f1e2118285d'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-logging]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/80_horizon_vhost-logging]/ensure: defined content as '{md5}c931b57a272eb434464ceab7ebeffaff'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-serversignature]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/90_horizon_vhost-serversignature]/ensure: defined content as '{md5}9bf5a458783ab459e5043e1cdf671fa7'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-docroot]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/10_horizon_vhost-docroot]/ensure: defined content as '{md5}bfbae283264d47e1c117f32689f18d79'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-apache-header]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/0_horizon_vhost-apache-header]/ensure: defined content as '{md5}c51c95185af4a0f8918c840eb2784d95'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-redirect]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/160_horizon_vhost-redirect]/ensure: defined content as '{md5}356ab43cb27ce407ef8cc6b3a88d9dad'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-access_log]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/100_horizon_vhost-access_log]/ensure: defined content as '{md5}591db222039ca00f58ba1c8457861856'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-aliases]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/20_horizon_vhost-aliases]/ensure: defined content as '{md5}4a3191f0ccf5c6f7d6d2189906d08624'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Expirer/Ceilometer_config[database/time_to_live]/ensure: created\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_username]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_tenant_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/rpc_backend]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/evaluation_service]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_hosts]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Agent::Central/Ceilometer_config[coordination/backend_url]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/notification_topics]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_auth_url]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/use_syslog]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Collector/Ceilometer_config[collector/udp_address]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/record_history]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/evaluation_interval]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_ha_queues]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_region_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/store_events]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/auth_uri]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Collector/Ceilometer_config[collector/udp_port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/identity_uri]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/log_dir]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/verbose]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Api/Ceilometer_config[api/host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/admin_user]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_virtual_host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/partition_rpc_topic]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/admin_tenant_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat::Fragment[local_settings.py]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments/50_local_settings.py]/ensure: defined content as '{md5}dbc9aa2747d9f33b4231806f1c27c69d'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/Exec[concat_/etc/openstack-dashboard/local_settings]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/Exec[concat_/etc/openstack-dashboard/local_settings]: Triggered 'refresh' from 3 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/enable_security_group]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/external_ids: external_ids changed '' to 'bridge-id=br-ex'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/identity_uri]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as '{md5}8077c34a71afcf41c8fc644830935915'\u001b[0m\n\u001b[mNotice: /File[/etc/xinetd.d]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_hosts]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/File[snmptrapd.conf]/mode: mode changed '0600' to '0644'\u001b[0m\n\u001b[mNotice: /File[snmptrapd.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/rpc_backend]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/admin_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}32d3f1a6681c9a8873975a7b756e0e2d' to '{md5}dbc9aa2747d9f33b4231806f1c27c69d'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'apache' to 'root'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/mode: mode changed '0640' to '0644'\u001b[0m\n\u001b[mNotice: /File[/etc/openstack-dashboard/local_settings]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/Swift::Storage::Filter::Recon[container]/Concat::Fragment[swift_recon_container]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments/35_swift_recon_container]/ensure: defined content as '{md5}e1a260602323a9e194999f76b55dc468'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments.concat]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments.concat]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat::Fragment[swift-account-6002]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments/00_swift-account-6002]/ensure: defined content as '{md5}03c4f9c5dcf21fd573cb050ac6d49bf8'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'\u001b[0m\n\u001b[mNotice: /Stage[main]/Rsync::Server/Concat[/etc/rsync.conf]/File[/var/lib/puppet/concat/_etc_rsync.conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Rsync::Server/Concat[/etc/rsync.conf]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Rsync::Server/Concat[/etc/rsync.conf]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Rsync::Server::Module[container]/Concat::Fragment[frag-container]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments/10_container_frag-container]/ensure: defined content as '{md5}8e648b0f3b538b6726216c98b72a7ab8'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Nova_config[DEFAULT/use_syslog]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron/Neutron_config[DEFAULT/base_mac]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db/Ceilometer_config[database/connection]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Triggered 'refresh' from 2 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat::Fragment[swift-object-6000]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments/00_swift-object-6000]/ensure: defined content as '{md5}f5042afb6f245bdefe90ca597385e5d1'\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/Swift::Storage::Filter::Healthcheck[object]/Concat::Fragment[swift_healthcheck_object]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments/25_swift_healthcheck_object]/ensure: defined content as '{md5}4c8cd2d18bcd82e4052642d0d45fe6f0'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Rsync::Server::Module[object]/Concat::Fragment[frag-object]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments/10_object_frag-object]/ensure: defined content as '{md5}6c5dcf4876e38ea0927a443b4533e955'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_use_ssl]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat::Fragment[swift-container-6001]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments/00_swift-container-6001]/ensure: defined content as '{md5}d0396b3be5998f9e318569460c77efc7'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/content: content changed '{md5}1d7d7dd9f1b4beef5a21688ededda355' to '{md5}2421a3c6df32c7e38c2a7a22afdf5728'\u001b[0m\n\u001b[mNotice: /File[autoindex.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/debug]/ensure: created\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Main/Swift::Storage::Filter::Healthcheck[account]/Concat::Fragment[swift_healthcheck_account]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments/25_swift_healthcheck_account]/ensure: defined content as '{md5}4c8cd2d18bcd82e4052642d0d45fe6f0'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/enable_tunneling]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[ovs-cleanup-service]/enable: enable changed 'false' to 'true'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/polling_interval]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_fuzzy_delay]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/use_namespaces]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/enable_metadata_proxy]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/router_delete_namespaces]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/handle_internal_only_routers]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/send_arp_for_ha]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_interval]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/external_network_bridge]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/metadata_port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments.concat.out]/ensure: created\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Concat::Fragment[Apache ports header]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments/10_Apache ports header]/ensure: defined content as '{md5}afe35cb5747574b700ebaa0f0b3a626e'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments.concat]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Api/Ceilometer_config[api/port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 5 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_uri]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/etc/xinetd.d/rsync]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Xinetd/Service[xinetd]: Triggered 'refresh' from 5 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-wsgi]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/260_horizon_vhost-wsgi]/ensure: defined content as '{md5}3f1e888993d05222c5a79ee4baa35cde'\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/Swift::Storage::Filter::Recon[account]/Concat::Fragment[swift_recon_account]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments/35_swift_recon_account]/ensure: defined content as '{md5}e1a260602323a9e194999f76b55dc468'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{md5}66a1e2064a140c3e7dca7ac33877700e'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-access_log]/File[/var/lib/puppet/concat/15-default.conf/fragments/100_default-access_log]/ensure: defined content as '{md5}65fb033baac888b4ab85c295e870cb8f'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer/File[/etc/ceilometer/ceilometer.conf]/owner: owner changed 'root' to 'ceilometer'\u001b[0m\n\u001b[mNotice: /File[/etc/ceilometer/ceilometer.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Main/Swift::Storage::Filter::Recon[object]/Concat::Fragment[swift_recon_object]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments/35_swift_recon_object]/ensure: defined content as '{md5}e1a260602323a9e194999f76b55dc468'\u001b[0m\n\u001b[mNotice: /Stage[main]/Rsync::Server/Concat::Fragment[rsyncd_conf_header]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments/00_header_rsyncd_conf_header]/ensure: defined content as '{md5}2ad07d2ccf85d8c10a886f8aed109abb'\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}229bae725885187f54da0726272321d9'\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/File[snmpd.conf]/mode: mode changed '0600' to '0644'\u001b[0m\n\u001b[mNotice: /File[snmpd.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/File[/etc/glance/glance-cache.conf]/owner: owner changed 'root' to 'glance'\u001b[0m\n\u001b[mNotice: /File[/etc/glance/glance-cache.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[database/idle_timeout]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/File[/etc/glance/glance-api-paste.ini]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/File[/etc/glance/glance-api.conf]/owner: owner changed 'root' to 'glance'\u001b[0m\n\u001b[mNotice: /File[/etc/glance/glance-api.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/log_dir]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_port]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/debug]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/admin_user]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Glance_api_config[keystone_authtoken/identity_uri]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Glance_registry_config[DEFAULT/bind_host]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/File[/etc/glance/glance-registry.conf]/owner: owner changed 'root' to 'glance'\u001b[0m\n\u001b[mNotice: /File[/etc/glance/glance-registry.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Glance_registry_config[DEFAULT/log_dir]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Glance_registry_config[keystone_authtoken/admin_password]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Db/Nova_config[database/idle_timeout]/ensure: created\u001b[0m\n\u001b[mNotice: 
[Puppet apply log from overcloud-controller-2, continued: Notice entries recording creation of the Glance API/registry/cache, Cinder (API, scheduler, volume and tripleo_ceph RBD backend), Neutron server, Heat (api, api-cfn, api-cloudwatch, engine), Keystone and Swift configuration values; the neutron keystone_authtoken endpoints were repointed at 192.168.10.11 (identity_uri :35357, auth_uri :5000/v2.0); SELinux contexts on /var/lib/cinder, /var/lib/keystone, /etc/keystone, /var/cache/swift and the Swift server config files were relabelled to system_u; the loopback cinder-volumes volume group on /dev/loop2, the Keystone PKI/SSL material under /etc/keystone/ssl, the Ceph monitor ceph-mon-overcloud-controller-2, the Swift proxy/account/container/object server configs, the Apache default and horizon_vhost fragments and /etc/rsync.conf were assembled; cinder-manage db_sync was triggered and the cinder-api, cinder-scheduler, cinder-volume, swift account/container/object, swift-proxy, ceilometer and ceph-mon services were started.]
[Puppet apply log, continued: /etc/httpd/conf/ports.conf was assembled (Listen 192.168.10.20:80); neutron l3_ha was enabled, neutron-db-sync was triggered from 64 events and the neutron-server, OVS, DHCP, L3 and metadata agent services were started; keystone-manage pki_setup was triggered from 37 events; glance-manage db_sync then began emitting oslo_db and migrate.versioning DEBUG output, logging the MySQL SQL mode and loading the Glance sqlalchemy migration repository starting with 001_add_images_table.py.]
[glance-manage db_sync DEBUG output, continued: migrate.versioning.script.base logs a "Loading script" / "loaded successfully" pair for each migration script under /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/, from 002_add_image_properties_table.py up to 027_checksum_index.py, including the sqlite/mysql upgrade and downgrade SQL variants and the 023-025 placeholder scripts.]
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.122 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.122 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_downgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.130 27081 DEBUG migrate.versioning.repository [-] Repository /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:82\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.130 27081 DEBUG migrate.versioning.repository [-] Config: OrderedDict([('db_settings', OrderedDict([('__name__', 'db_settings'), ('repository_id', 'Glance Migrations'), ('version_table', 'migrate_version'), ('required_dbs', '[]')]))]) __init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:83\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.156 27081 DEBUG migrate.versioning.repository [-] Loading repository /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:76\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.157 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/001_add_images_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.157 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/001_add_images_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/002_add_image_properties_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/002_add_image_properties_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_upgrade.sql... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_upgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_downgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_add_disk_format.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_add_disk_format.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/004_add_checksum.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/004_add_checksum.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/005_size_big_integer.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/005_size_big_integer.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_upgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_upgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_downgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_upgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_upgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_downgrade.sql... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/007_add_owner.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/007_add_owner.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/008_add_image_members_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/008_add_image_members_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/009_add_mindisk_and_minram.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/009_add_mindisk_and_minram.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/010_default_update_at.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/010_default_update_at.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_upgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_upgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_downgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_make_mindisk_and_minram_notnull.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_make_mindisk_and_minram_notnull.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_sqlite_downgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_sqlite_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_add_protected.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_add_protected.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/015_quote_swift_credentials.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/015_quote_swift_credentials.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_sqlite_downgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_sqlite_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_add_status_image_member.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_add_status_image_member.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/018_add_image_locations_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/018_add_image_locations_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/019_migrate_image_locations.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/019_migrate_image_locations.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/020_drop_images_table_location.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/020_drop_images_table_location.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/021_set_engine_mysql_innodb.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/021_set_engine_mysql_innodb.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/022_image_member_index.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/022_image_member_index.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/023_placeholder.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/023_placeholder.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/024_placeholder.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/024_placeholder.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/025_placeholder.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/025_placeholder.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/026_add_location_storage_information.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/026_add_location_storage_information.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.175 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.175 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.175 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.175 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.175 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.176 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.176 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.176 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.176 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.176 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.177 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_downgrade.sql... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.177 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_downgrade.sql loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.177 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.177 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.178 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.178 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.178 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py... 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.178 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.178 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.179 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.179 27081 DEBUG migrate.versioning.script.base [-] Loading script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py... __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.179 27081 DEBUG migrate.versioning.script.base [-] Script /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.179 27081 DEBUG migrate.versioning.repository [-] Repository /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo loaded successfully __init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:82\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.180 27081 DEBUG migrate.versioning.repository [-] Config: OrderedDict([('db_settings', OrderedDict([('__name__', 'db_settings'), ('repository_id', 'Glance Migrations'), ('version_table', 'migrate_version'), ('required_dbs', '[]')]))]) __init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:83\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.188 27081 INFO migrate.versioning.api [-] 38 -> 39... 
\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 CRITICAL glance [-] OperationalError: (OperationalError) (1091, \"Can't DROP 'ix_namespaces_namespace'; check that column/key exists\") '\\nDROP INDEX ix_namespaces_namespace ON metadef_namespaces' ()\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance Traceback (most recent call last):\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/bin/glance-manage\", line 10, in \u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance sys.exit(main())\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/glance/cmd/manage.py\", line 303, in main\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance return CONF.command.action_fn()\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/glance/cmd/manage.py\", line 171, in sync\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance CONF.command.current_version)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/glance/cmd/manage.py\", line 116, in sync\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance version)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py\", line 79, in db_sync\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance return versioning_api.upgrade(engine, repository, version)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/migrate/versioning/api.py\", line 186, in upgrade\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance return _migrate(url, repository, version, upgrade=True, err=err, **opts)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"\", line 2, in _migrate\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/migrate/versioning/util/__init__.py\", line 160, in with_engine\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance return f(*a, **kw)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File 
\"/usr/lib/python2.7/site-packages/migrate/versioning/api.py\", line 366, in _migrate\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance schema.runchange(ver, change, changeset.step)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/migrate/versioning/schema.py\", line 93, in runchange\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance change.run(self.engine, step)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/migrate/versioning/script/py.py\", line 148, in run\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance script_func(engine)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py\", line 34, in upgrade\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance Index('ix_namespaces_namespace', metadef_namespaces.c.namespace).drop()\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/schema.py\", line 2975, in drop\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance bind._run_visitor(ddl.SchemaDropper, self)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 1616, in _run_visitor\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance conn._run_visitor(visitorcallable, element, **kwargs)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 1245, in _run_visitor\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance **kwargs).traverse_single(element)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/visitors.py\", line 120, in traverse_single\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance return meth(obj, **kw)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py\", line 813, in visit_index\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance 
self.connection.execute(DropIndex(index))\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 729, in execute\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance return meth(self, multiparams, params)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py\", line 69, in _execute_on_connection\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance return connection._execute_ddl(self, multiparams, params)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 783, in _execute_ddl\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance compiled\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 958, in _execute_context\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance context)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py\", line 261, in _handle_dbapi_exception\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance e, statement, parameters, cursor, context)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 1155, in _handle_dbapi_exception\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance util.raise_from_cause(newraise, exc_info)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py\", line 199, in raise_from_cause\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance reraise(type(exception), exception, tb=exc_tb)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 951, in _execute_context\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance context)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py\", line 436, in 
do_execute\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance cursor.execute(statement, parameters)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py\", line 174, in execute\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance self.errorhandler(self, exc, value)\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance File \"/usr/lib64/python2.7/site-packages/MySQLdb/connections.py\", line 36, in defaulterrorhandler\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance raise errorclass, errorvalue\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance OperationalError: (OperationalError) (1091, \"Can't DROP 'ix_namespaces_namespace'; check that column/key exists\") '\\nDROP INDEX ix_namespaces_namespace ON metadef_namespaces' ()\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: 2015-09-09 20:36:07.216 27081 TRACE glance \u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Registry/Service[glance-registry]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Api/Service[glance-api]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/File[/etc/nova/nova.conf]/owner: owner changed 'root' to 'nova'\u001b[0m\n\u001b[mNotice: /File[/etc/nova/nova.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova/Exec[post-nova_config]: Triggered 'refresh' from 67 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Exec[nova-db-sync]/returns: Command failed, please check log for more info\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Service[nova-vncproxy]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Service[nova-consoleauth]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Triggered 'refresh' from 38 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Service/Service[keystone]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'\u001b[0m\n\u001b[mNotice: /File[/etc/httpd/conf.d]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[15-default.conf]/ensure: defined content as '{md5}c1bc833d02e055dc8a87f9cb360fc799'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[15-horizon_vhost.conf]/ensure: defined content as '{md5}ad571245d8a5dad18b3108bcf129389d'\u001b[0m\n\u001b[mNotice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'\u001b[0m\n\u001b[mNotice: /File[/etc/httpd/conf.d/openstack-dashboard.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Apache::Service/Service[httpd]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat/User[heat]/groups: groups changed '' to 'heat'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat/Exec[heat-dbsync]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat/File[/etc/heat/]/group: group changed 'root' to 'heat'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat/File[/etc/heat/]/mode: mode changed '0755' to '0750'\u001b[0m\n\u001b[mNotice: /File[/etc/heat/]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat/File[/etc/heat/heat.conf]/owner: owner changed 'root' to 'heat'\u001b[0m\n\u001b[mNotice: /File[/etc/heat/heat.conf]/seluser: seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Engine/Service[heat-engine]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Api/Service[heat-api]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Api_cloudwatch/Service[heat-api-cloudwatch]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: Finished catalog run in 103.96 seconds\u001b[0m\n", "deploy_stderr": "\u001b[1;31mWarning: Variable access via 'notification_email_to' is deprecated. Use '@notification_email_to' instead. 
template[/etc/puppet/modules/keepalived/templates/global_config.erb]:3\n (at /etc/puppet/modules/keepalived/templates/global_config.erb:3:in `block in result')\u001b[0m\n\u001b[1;31mWarning: notify is a metaparam; this value will inherit to all contained resources in the keepalived::instance definition\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_host'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_protocol'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_port'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_path'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Concat::Setup]): concat::setup is deprecated as a public API of the concat module and should no longer be directly included in the manifest.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6002]): The default incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6002]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6001]): The default incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6001]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6000]): The default incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6000]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mError: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: Failed to call refresh: glance-manage --config-file=/etc/glance/glance-registry.conf db_sync returned 1 instead of one of [0]\u001b[0m\n\u001b[1;31mError: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: glance-manage --config-file=/etc/glance/glance-registry.conf db_sync returned 1 instead of one of [0]\u001b[0m\n\u001b[1;31mError: /Stage[main]/Nova::Api/Exec[nova-db-sync]: Failed to call refresh: /usr/bin/nova-manage db sync returned 1 instead of one of [0]\u001b[0m\n\u001b[1;31mError: /Stage[main]/Nova::Api/Exec[nova-db-sync]: /usr/bin/nova-manage db sync returned 1 instead of one of [0]\u001b[0m\n", "deploy_status_code": 6 }, "creation_time": "2015-09-09T20:34:20Z", "updated_time": "2015-09-09T20:36:50Z", "input_values": {}, "action": "CREATE", "status_reason": "deploy_status_code : Deployment exited with non-zero status code: 6", "id": "cc239277-50c2-4d94-952d-181ccf3199da" } __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Sep 9, 2015, at 2:18 PM, Marius Cornea wrote: > > For ssh access you can use the heat-admin user with the private key in > 
/home/stack/.ssh/id_rsa > > On Wed, Sep 9, 2015 at 7:26 PM, Ignacio Bravo wrote: >> Dan, >> >> Thanks for your tips. It seems like there is an issue with the networking >> piece, as that is where all the nodes are in building state. I have >> something similar to this for each one of the Controller nodes: >> >> | NetworkDeployment | cac8c93b-b784-4a91-bc23-1a932bb1e62f >> | OS::TripleO::SoftwareDeployment | CREATE_IN_PROGRESS | >> 2015-09-09T15:22:10Z | 1 | >> | UpdateDeployment | 97359c35-d2c7-4140-98ed-24525ee4be6b >> | OS::Heat::SoftwareDeployment | CREATE_IN_PROGRESS | >> 2015-09-09T15:22:10Z | 1 | >> >> Following your advice, I was trying to ssh into the nodes, but didn?t know >> what username/password combination to use. I tried root, heat-admin, stack >> with different password located in /home/stack/triple0-overcloud-passwords >> but none of the combinations seemed to work. >> >> BTW, I am using the instructions from >> https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html >> and installing in an HP c7000 Blade enclosure. >> >> Thanks, >> IB >> >> >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> Office: (703) 951-7760 >> >> On Sep 9, 2015, at 11:40 AM, Dan Sneddon wrote: >> >> Did that error happen after a long time, like 4 hours? I have seen that >> error >> when the deploy was actually hung, and the token timeout gets reached and >> then >> every API call gets an authentication failed response. Unfortunately, you'll >> need to diagnose what part of the deployment is failing. Here's what I >> usually >> do: >> >> # Get state of all uncompleted resources >> heat resource-list overcloud -n 5 | grep -iv complete >> >> # Look closer at failed resources from above command >> heat resource-show >> >> nova list >> (then ssh as heat-admin to the nodes and check for network connectivity and >> errors in the logs) >> >> -Dan Sneddon >> >> ----- Original Message ----- >> >> Never hit that. Did you try export HEAT_INCLUDE_PASSWORD=1 and rerun deploy? >> >> On Wed, Sep 9, 2015 at 4:47 PM, Ignacio Bravo wrote: >> >> Thanks. I was able to delete the existing deployment, but when trying to >> deploy again using the CLI I got an authentication error. Ideas? >> >> [root at bl16 ~]# su stack >> [stack at bl16 root]$ cd >> [stack at bl16 ~]$ cd ~ >> [stack at bl16 ~]$ source stackrc >> [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 >> --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 >> --ntp-server >> 192.168.10.1 --templates -e >> /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml >> Deploying templates in the directory >> /usr/share/openstack-tripleo-heat-templates >> ERROR: openstack ERROR: Authentication failed. Please try again with option >> --include-password or export HEAT_INCLUDE_PASSWORD=1 >> Authentication required >> >> >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> Office: (703) 951-7760 >> >> On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: >> >> Hi Ignacio, >> >> Yes, I believe ceph currently only works with using direct heat >> templates. You can check the instruction on how to get it deployed via >> cli here [1] Make sure you select the Ceph role on the environment >> specific content (left side column). >> >> To delete existing deployments run 'heat stack-delete overcloud' on >> the undercloud node with the credentials in the stackrc file loaded. 
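A minimal sketch of the debugging and cleanup flow described in this thread, run as the stack user on the undercloud. The stack name 'overcloud', the heat-admin user, the key path and the HEAT_INCLUDE_PASSWORD workaround are all taken from the messages above; the controller IP is only a placeholder to be read from 'nova list':

    source ~/stackrc
    export HEAT_INCLUDE_PASSWORD=1                        # avoids the "Authentication required" error on long-running operations
    heat resource-list overcloud -n 5 | grep -iv complete # list resources that are still in progress or failed
    heat resource-show overcloud <resource>               # inspect a failed resource by name
    nova list                                             # find the node IPs
    ssh -i /home/stack/.ssh/id_rsa heat-admin@192.0.2.10  # placeholder IP; check network connectivity and logs on the node
    heat stack-delete overcloud                           # only when giving up on the current deployment before redeploying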
>> >> In order to get a HA deployment you just need to deploy 3 controllers >> by passing '--control-scale 3' to the 'openstack overcloud deploy' >> command. >> >> [1] >> https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html >> >> On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo >> wrote: >> >> All, >> >> I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) >> and the deployment was unsuccessful. It seems that there is a particular >> bug >> currently with deploying a Ceph based storage with the GUI, so I wanted to >> ask the list if >> >> 1. Indeed this was the case. >> 2. How to delete my configuration and redeploy using the CLI >> 3. Finally, if there is any scripted or explained way to perform an HA >> installation. I read the reference to github, but this seems to be more >> about the components but there was not a step by step instruction/ >> explanation. >> >> >> Thanks! >> IB >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> Office: (703) 951-7760 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Thu Sep 10 01:25:27 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 10 Sep 2015 03:25:27 +0200 Subject: [Rdo-list] bug statistics for 2015-09-09 In-Reply-To: References: Message-ID: Cleaned up 8 tickets in distributions. I suspect a lot of them could be closed as EOL Regards, H. From stdake at cisco.com Thu Sep 10 05:11:36 2015 From: stdake at cisco.com (Steven Dake (stdake)) Date: Thu, 10 Sep 2015 05:11:36 +0000 Subject: [Rdo-list] 404 on http://trunk.rdoproject.org/centos7/current/delorean.repo Message-ID: Could someone address this problem please? Thanks! -steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From jruzicka at redhat.com Thu Sep 10 14:04:11 2015 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Thu, 10 Sep 2015 16:04:11 +0200 Subject: [Rdo-list] packstack future In-Reply-To: <1635431962.19038805.1441747905216.JavaMail.zimbra@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch> <1635431962.19038805.1441747905216.JavaMail.zimbra@redhat.com> Message-ID: <55F18DDB.4080807@redhat.com> On 8.9.2015 23:31, Ivan Chavero wrote: > > > ----- Original Message ----- >> From: "Tim Bell" >> To: rdo-list at redhat.com >> Sent: Sunday, September 6, 2015 1:35:30 AM >> Subject: [Rdo-list] packstack future >> >> >> >> >> >> Reading the RDO September newsletter, I noticed a mail thread ( >> https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html ) on the >> future of packstack vs rdo-manager. >> >> >> >> We use packstack to spin up small OpenStack instances for development and >> testing. Typical cases are to have a look at the features of the latest >> releases or do some prototyping of an option we?ve not tried yet. >> >> >> >> It was not clear to me based on the mailing list thread as to how this could >> be done using rdo-manager unless you already have the undercloud configiured >> by RDO. 
>> >> >> >> Has there been any further discussions around packstack future ? >> > > my understanding is that packstack will still be a PoC tool for at least two > more years That's a reasonable assumption. >> >> Tim >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From jruzicka at redhat.com Thu Sep 10 14:20:35 2015 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Thu, 10 Sep 2015 16:20:35 +0200 Subject: [Rdo-list] packstack future In-Reply-To: <20150908144251.GJ12870@localhost.localdomain> References: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3E1C09@CERNXCHG44.cern.ch> <20150907130756.GA9590@t430slt.redhat.com> <20150908144251.GJ12870@localhost.localdomain> Message-ID: <55F191B3.4000000@redhat.com> On 8.9.2015 16:42, James Slagle wrote: > On Mon, Sep 07, 2015 at 02:07:56PM +0100, Steven Hardy wrote: >> Hi Tim, >> >> On Sun, Sep 06, 2015 at 07:35:30AM +0000, Tim Bell wrote: >>> Reading the RDO September newsletter, I noticed a mail thread >>> (https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html) on >>> the future of packstack vs rdo-manager. >>> >>> We use packstack to spin up small OpenStack instances for development and >>> testing. Typical cases are to have a look at the features of the latest >>> releases or do some prototyping of an option we've not tried yet. >>> >>> It was not clear to me based on the mailing list thread as to how this >>> could be done using rdo-manager unless you already have the undercloud >>> configiured by RDO. >> >>> Has there been any further discussions around packstack future ? >> >> Thanks for raising this - I am aware that a number of folks have been >> thinking about this topic (myself included), but I don't think we've yet >> reached a definitive consensus re the path forward yet. >> >> Here's my view on the subject: >> >> 1. Packstack is clearly successful, useful to a lot of folks, and does >> satisfy a use-case currently not well served via rdo-manager, so IMO we >> absolutely should maintain it until that is no longer the case. >> >> 2. Many people are interested in easier ways to stand up PoC environments >> via rdo-manager, so we do need to work on ways to make that easier (or even >> possible at all in the single-node case). >> >> 3. It would be really great if we could figure out (2) in such a way as to >> enable a simple migration path from packstack to whatever the PoC mode of >> rdo-manager ends up being, for example perhaps we could have an rdo manager >> interface which is capable of consuming a packstack answer file? >> >> Re the thread you reference, it raises a number of interesting questions, >> particularly the similarities/differences between an all-in-one packstack >> install and an all-in-one undercloud install; >> >> >From an abstract perspective, installing an all-in-one undercloud looks a >> lot like installing an all-in-one packstack environment, both sets of tools >> take a config file, and create a puppet-configured all-in-one OpenStack. 
>> >> But there's a lot of potential complexity related to providing a >> flexible/configurable deployment (like packstack) vs an opinionated >> bootstrap environment (e.g the current instack undercloud environment). > > Besides there being some TripleO related history (which I won't bore everyone > with), the above is a big reason why we didn't just use packstack originally to > install the all-in-one undercloud. > > As you point out, the undercloud installer is very opinionated by design. It's > not meant to be a flexible all-in-one *OpenStack* installer, nor do I think we > want to turn it into one. That would just end up in reimplementing packstack. Great, I like to see this clearly stated. So if I understand correctly rdo-manager's use case is quite different from packstack's and thus it seems we want to have separate PoC/AiO installer. Ideally as simple as possible ;) >> There are a few possible approaches: >> >> - Do the work to enable a more flexibly configured undercloud, and just >> have that as the "all in one" solution > > -1 :). > >> - Have some sort of transient undercloud (I'm thinking a container) which >> exists only for the duration of deploying the all-in-one overcloud, on >> the local (pre-provisioned, e.g not via Ironic) host. Some prototyping >> of this approach has already happened [1] which I think James Slagle has >> used to successfully deploy TripleO templates on pre-provisioned nodes. > > Right, so my thinking was to leverage the work (or some part of it) that Jeff > Peeler has done on the standalone Heat container as a bootstrap mechanism. Once > that container is up, you can use Heat to deploy to preprovisoned nodes that > already have an OS installed. Not only would this be nice for POC's, there are > also real use cases where dedicated provisioning networks are not available, or > there's no access to ipmi/drac/whatever. > > It would also provide a solution on how to orchestrate an HA undercloud as > well. > > Note that the node running the bootstrap Heat container itself could > potentially be reused, providing for the true all-in-one. > > I do have some hacked on templates I was working with, and had made enough > progress to where I was able to get the preprovisoned nodes to start applying the > SoftwareDeployments from Heat after I manually configured os-collect-config on > each node. > > I'll get those in order and push up a WIP patch. > > There are a lot of wrinkles here still, things like how to orchestrate the > manual config you still have to do on each node (have to configure > os-collect-config with a stack id), and assumptions on network setup, etc. > > > >> >> The latter approach is quite interesting, because it potentially maintains >> a greater degree of symmetry between the minimal PoC install and real >> production deployments (e.g you'd use the same heat templates etc), it >> could also potentially provide easier access to features as they are added >> to overcloud templates (container integration, as an example), vs >> integrating new features in two places. >> >> Overall at this point I think there are still many unanswered questions >> around enabling the PoC use-case for rdo-manager (and, more generally >> making TripleO upstream more easily consumable for these kinds of >> use-cases). I hope/expect we'll have a TripleO session on this at the >> forthcoming summit, where we refine the various ideas people have been >> investigating, and define the path forward wrt PoC deployments. 
Yeah so that sounds like "long live packstack, our PoC/AiO overlord" to me. Until something better for the job is written, that is :) > So I did just send out the etherpad link for our session planning for Tokyo > this morning to openstack-dev :) > > https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions > > I'll add a bullet item about this point. > >> >> Hopefully that is somewhat helpful, and thanks again for re-starting this >> discussion! :) >> >> Steve >> >> [1] https://etherpad.openstack.org/p/noop-softwareconfig >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > -- > -- James Slagle > -- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From slinaber at redhat.com Thu Sep 10 14:53:34 2015 From: slinaber at redhat.com (Steve Linabery) Date: Thu, 10 Sep 2015 09:53:34 -0500 Subject: [Rdo-list] openstack-rally in delorean Message-ID: <20150910145334.GD16387@redhat.com> I have started investigating packaging rally. One of the problems I'm hitting is something we also hit (and never, afaik, resolved) when packaging openstack-tempest. Upstream does not provide a tarball for rally via launchpad, so building a rally RPM doesn't work quite the same as, say, nova. In particular, the PKG-INFO file which you get from e.g. nova via launchpad is not present in the rally 0.0.4 tarball you get from github. This causes pbr to complain that it's not being run in a git repo and it can't figure out versioning. I've worked around this in my poc spec/rpm by patching in a 'dummy' PKG-INFO file (which I cut-n-pasted from nova). Two questions for this group. 1) What do we think of this approach of including a fabricated PKG-INFO file to define the version info for pbr et al? 2) If we like this idea, how does this fit in with delorean? IIUC, going forward we want to avoid using midstream repos. What about using https://github.com/openstack/rally as the upstream and something in github.com/redhat-openstack as a patches branch containing the PKG-INFO file alone? I guess what I'm asking is where should we maintain a PKG-INFO file of our own creation. Maybe where we keep the spec? Thanks in advance for your thoughts. Steve Linabery (freenode: eggmaster) From Kevin.Fox at pnnl.gov Thu Sep 10 15:36:00 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 10 Sep 2015 15:36:00 +0000 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <20150910145334.GD16387@redhat.com> References: <20150910145334.GD16387@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F3BA6@EX10MBOX03.pnnl.gov> did you try asking the rally folks about it? Maybe it was a simple oversight? Thanks, Kevin ________________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of Steve Linabery [slinaber at redhat.com] Sent: Thursday, September 10, 2015 7:53 AM To: rdo-list at redhat.com Cc: apevec at redhat.com Subject: [Rdo-list] openstack-rally in delorean I have started investigating packaging rally. One of the problems I'm hitting is something we also hit (and never, afaik, resolved) when packaging openstack-tempest. Upstream does not provide a tarball for rally via launchpad, so building a rally RPM doesn't work quite the same as, say, nova. 
In particular, the PKG-INFO file which you get from e.g. nova via launchpad is not present in the rally 0.0.4 tarball you get from github. This causes pbr to complain that it's not being run in a git repo and it can't figure out versioning. I've worked around this in my poc spec/rpm by patching in a 'dummy' PKG-INFO file (which I cut-n-pasted from nova). Two questions for this group. 1) What do we think of this approach of including a fabricated PKG-INFO file to define the version info for pbr et al? 2) If we like this idea, how does this fit in with delorean? IIUC, going forward we want to avoid using midstream repos. What about using https://github.com/openstack/rally as the upstream and something in github.com/redhat-openstack as a patches branch containing the PKG-INFO file alone? I guess what I'm asking is where should we maintain a PKG-INFO file of our own creation. Maybe where we keep the spec? Thanks in advance for your thoughts. Steve Linabery (freenode: eggmaster) _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From ihrachys at redhat.com Thu Sep 10 15:42:53 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 10 Sep 2015 17:42:53 +0200 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <20150910145334.GD16387@redhat.com> References: <20150910145334.GD16387@redhat.com> Message-ID: <16EA8AF4-580E-490A-ADCE-1D630AAAABC5@redhat.com> > On 10 Sep 2015, at 16:53, Steve Linabery wrote: > > I have started investigating packaging rally. One of the problems I'm hitting is something we also hit (and never, afaik, resolved) when packaging openstack-tempest. Upstream does not provide a tarball for rally via launchpad, so building a rally RPM doesn't work quite the same as, say, nova. > > In particular, the PKG-INFO file which you get from e.g. nova via launchpad is not present in the rally 0.0.4 tarball you get from github. This causes pbr to complain that it's not being run in a git repo and it can't figure out versioning. > > I've worked around this in my poc spec/rpm by patching in a 'dummy' PKG-INFO file (which I cut-n-pasted from nova). > delorean does not run from those tarballs anyway. It runs from tarballs generated from git using python setup.py sdist, and those include proper PKG-INFO. If you want to have it in RDO, you can just build the tarball yourself and publish it somewhere. Or better, make the project publish correct tarballs on LP or pypi. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From javier.pena at redhat.com Thu Sep 10 15:50:54 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 10 Sep 2015 11:50:54 -0400 (EDT) Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: <1721804168.45444414.1441899943655.JavaMail.zimbra@redhat.com> Message-ID: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> Dear all, Due to a planned maintenance of the infrastructure supporting the Delorean instance (trunk.rdoproject.org), it is expected to be offline between September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT). We will be sending updates to the list if there is any additional information or change in the plans, and keep you updated on the status. 
Please let us know if you have any questions or concerns. Regards, Javier ---- Javier Pe?a, RHCA email: javier.pena at redhat.com Senior Software Engineer phone: +34 914148872 EMEA OpenStack Engineering From slinaber at redhat.com Thu Sep 10 15:50:58 2015 From: slinaber at redhat.com (Steve Linabery) Date: Thu, 10 Sep 2015 10:50:58 -0500 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <16EA8AF4-580E-490A-ADCE-1D630AAAABC5@redhat.com> References: <20150910145334.GD16387@redhat.com> <16EA8AF4-580E-490A-ADCE-1D630AAAABC5@redhat.com> Message-ID: <20150910155058.GE16387@redhat.com> On Thu, Sep 10, 2015 at 05:42:53PM +0200, Ihar Hrachyshka wrote: > > On 10 Sep 2015, at 16:53, Steve Linabery wrote: > > > > I have started investigating packaging rally. One of the problems I'm hitting is something we also hit (and never, afaik, resolved) when packaging openstack-tempest. Upstream does not provide a tarball for rally via launchpad, so building a rally RPM doesn't work quite the same as, say, nova. > > > > In particular, the PKG-INFO file which you get from e.g. nova via launchpad is not present in the rally 0.0.4 tarball you get from github. This causes pbr to complain that it's not being run in a git repo and it can't figure out versioning. > > > > I've worked around this in my poc spec/rpm by patching in a 'dummy' PKG-INFO file (which I cut-n-pasted from nova). > > > > delorean does not run from those tarballs anyway. It runs from tarballs generated from git using python setup.py sdist, and those include proper PKG-INFO. Thanks for explaining that. I see that openstack-nova.spec refers to launchpad for Source0. So, putting aside delorean for a moment, what would be the upstream source for an RPM in either RDO release or downstream? > > If you want to have it in RDO, you can just build the tarball yourself and publish it somewhere. Or better, make the project publish correct tarballs on LP or pypi. > On that latter point ('make the project...'), any advice on how to make that happen? Thanks! s|e > Ihar From slinaber at redhat.com Thu Sep 10 15:52:01 2015 From: slinaber at redhat.com (Steve Linabery) Date: Thu, 10 Sep 2015 10:52:01 -0500 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A2F3BA6@EX10MBOX03.pnnl.gov> References: <20150910145334.GD16387@redhat.com> <1A3C52DFCD06494D8528644858247BF01A2F3BA6@EX10MBOX03.pnnl.gov> Message-ID: <20150910155200.GF16387@redhat.com> On Thu, Sep 10, 2015 at 03:36:00PM +0000, Fox, Kevin M wrote: > did you try asking the rally folks about it? Maybe it was a simple oversight? > > Thanks, > Kevin My understanding was that upstream did not intend for tempest or rally to be standalone projects and thus the diff. But I can't back that up with any actual references, so maybe my understanding is completely off. s|e > ________________________________________ > From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of Steve Linabery [slinaber at redhat.com] > Sent: Thursday, September 10, 2015 7:53 AM > To: rdo-list at redhat.com > Cc: apevec at redhat.com > Subject: [Rdo-list] openstack-rally in delorean > > I have started investigating packaging rally. One of the problems I'm hitting is something we also hit (and never, afaik, resolved) when packaging openstack-tempest. Upstream does not provide a tarball for rally via launchpad, so building a rally RPM doesn't work quite the same as, say, nova. > > In particular, the PKG-INFO file which you get from e.g. 
nova via launchpad is not present in the rally 0.0.4 tarball you get from github. This causes pbr to complain that it's not being run in a git repo and it can't figure out versioning. > > I've worked around this in my poc spec/rpm by patching in a 'dummy' PKG-INFO file (which I cut-n-pasted from nova). > > Two questions for this group. > > 1) What do we think of this approach of including a fabricated PKG-INFO file to define the version info for pbr et al? > > 2) If we like this idea, how does this fit in with delorean? IIUC, going forward we want to avoid using midstream repos. What about using https://github.com/openstack/rally as the upstream and something in github.com/redhat-openstack as a patches branch containing the PKG-INFO file alone? I guess what I'm asking is where should we maintain a PKG-INFO file of our own creation. Maybe where we keep the spec? > > Thanks in advance for your thoughts. > > Steve Linabery (freenode: eggmaster) > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From alan.pevec at redhat.com Thu Sep 10 15:58:50 2015 From: alan.pevec at redhat.com (Alan Pevec) Date: Thu, 10 Sep 2015 17:58:50 +0200 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <20150910155058.GE16387@redhat.com> References: <20150910145334.GD16387@redhat.com> <16EA8AF4-580E-490A-ADCE-1D630AAAABC5@redhat.com> <20150910155058.GE16387@redhat.com> Message-ID: <55F1A8BA.7050607@redhat.com> >> If you want to have it in RDO, you can just build the tarball yourself and publish it somewhere. Or better, make the project publish correct tarballs on LP or pypi. > > On that latter point ('make the project...'), any advice on how to make that happen? Ask them nicely? :) Or just do the former, you don't even need to publish it somewhere, just provide reproducible steps how to generate it: https://fedoraproject.org/wiki/Packaging:SourceURL#Using_Revision_Control From hguemar at fedoraproject.org Thu Sep 10 19:20:23 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 10 Sep 2015 21:20:23 +0200 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <20150910145334.GD16387@redhat.com> References: <20150910145334.GD16387@redhat.com> Message-ID: Hi, just for the record, Victoria started working a while ago on rally packaging https://bugzilla.redhat.com/show_bug.cgi?id=1193986 It didn't get on top of my TODO for a while, though. Regards, H. From slinaber at redhat.com Thu Sep 10 19:27:34 2015 From: slinaber at redhat.com (Steve Linabery) Date: Thu, 10 Sep 2015 14:27:34 -0500 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: References: <20150910145334.GD16387@redhat.com> Message-ID: <20150910192733.GA24947@redhat.com> On Thu, Sep 10, 2015 at 09:20:23PM +0200, Ha?kel wrote: > Hi, > > just for the record, Victoria started working a while ago on rally packaging > https://bugzilla.redhat.com/show_bug.cgi?id=1193986 > It didn't get on top of my TODO for a while, though. > > Regards, > H. Splendid! I am glad I emailed; didn't know the packaging effort was this far along. Victoria, any reason not to use 0.0.4 (current upstream rally release)? 
s|e From rbowen at redhat.com Thu Sep 10 19:54:45 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 10 Sep 2015 15:54:45 -0400 Subject: [Rdo-list] RDO BoF at OpenStack Summit Message-ID: <55F1E005.1080706@redhat.com> We have an opportunity to sign up for a BoF (Birds of a Feather) session at OpenStack Summit in Tokyo. The following dates/times are available: http://doodle.com/poll/bvd8w85kuqngeb7x The rooms are large enough for 30. We obviously can't please everyone, but if you can express your preference, I will sign up for a slot early next week. Note that these slots will likely go very quickly, and the spaces are VERY limited, so please express your opinion as soon as possible. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From marius at remote-lab.net Thu Sep 10 21:01:40 2015 From: marius at remote-lab.net (Marius Cornea) Date: Thu, 10 Sep 2015 23:01:40 +0200 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: <11FC8B09-058B-4257-BEEE-9F182CB2CF21@ltgfederal.com> References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> <5EA2DA6D-B64F-460D-8CF8-B7757337FF3F@ltgfederal.com> <11FC8B09-058B-4257-BEEE-9F182CB2CF21@ltgfederal.com> Message-ID: Hi, Can you ssh to the overcloud-controller-2 node and try to manually run the commands it failed: /usr/bin/nova-manage db sync and glance-manage --config-file=/etc/glance/glance-registry.conf db_sync? Also check the nova and glance logs for some indications on why they might have failed. /var/log/messages and journalctl -l -u os-collect-config could also indicate more detailed errors. Thanks, Marius On Wed, Sep 9, 2015 at 11:53 PM, Ignacio Bravo wrote: > I was able to move past the network issue (boot order of the servers have > them booting from the 2nd NIC as well, causing them to register with > Foreman, Ups!) 
> > Now the issue is deeper in the deployment: > > [stack at bl16 ~]$ heat resource-list --nested-depth 5 overcloud | grep FAILED > | ComputeNodesPostDeployment | > c1e5efe9-3a25-482c-91e2-dc443b334b92 | > OS::TripleO::ComputePostDeployment | CREATE_FAILED | > 2015-09-09T19:58:28Z | | > | ControllerNodesPostDeployment | > 702c7865-f7f5-4e23-bb89-0fda0fdbf290 | > OS::TripleO::ControllerPostDeployment | CREATE_FAILED | > 2015-09-09T19:58:28Z | | > | ControllerOvercloudServicesDeployment_Step4 | > 949468eb-920c-4970-9f99-d8c4baa077d1 | > OS::Heat::StructuredDeployments | CREATE_FAILED | > 2015-09-09T20:30:31Z | ControllerNodesPostDeployment | > | 0 | > 61528341-eb2d-471f-88fc-40b3f6ccbb1b | > OS::Heat::StructuredDeployment | CREATE_FAILED | > 2015-09-09T20:34:17Z | ControllerOvercloudServicesDeployment_Step4 | > | 1 | > 366fbfb8-933d-4122-b156-065f0a514dbb | > OS::Heat::StructuredDeployment | CREATE_FAILED | > 2015-09-09T20:34:17Z | ControllerOvercloudServicesDeployment_Step4 | > | 2 | > cc239277-50c2-4d94-952d-181ccf3199da | > OS::Heat::StructuredDeployment | CREATE_FAILED | > 2015-09-09T20:34:17Z | ControllerOvercloudServicesDeployment_Step4 | > > > > And looking at the error, it shows just one VERY LONG issue: > > [stack at bl16 ~]$ heat deployment-show cc239277-50c2-4d94-952d-181ccf3199da > { > "status": "FAILED", > "server_id": "7fe469fa-34b9-4128-8651-2a6425ca8d06", > "config_id": "96984e90-c1e7-4fe3-b89a-6576989df46c", > "output_values": { > "deploy_stdout": "\u001b[mNotice: Compiled catalog for > overcloud-controller-2.localdomain in environment production in 13.70 > seconds\u001b[0m\n\u001b[mNotice: > /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content > changed '{md5}05503957e3796fbe6fddd756a7a102a0' to > '{md5}f1a25c29dec68d565a05b5abe92a2f15'\u001b[0m\n\u001b[mNotice: > /File[/etc/sysconfig/memcached]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Memcached/Service[memcached]/ensure: ensure changed 'stopped' > to 'running'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/debug]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/log_dir]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/amqp_durable_queues]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/notify_api_faults]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/memcached_servers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_userid]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/notification_driver]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/verbose]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/File[/var/log/nova]/group: group changed 'root' to > 'nova'\u001b[0m\n\u001b[mNotice: /File[/var/log/nova]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_ha_queues]/ensure: > created\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_auth_url]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/network_api_class]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_username]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_tenant_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/security_group_api]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url_timeout]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_chunk_size]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_virtual_host]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/use_ssl]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_hosts]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/auth_region]/ensure: > removed\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/kombu_reconnect_delay]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/File[/etc/neutron]/group: group changed 'root' to > 'neutron'\u001b[0m\n\u001b[mNotice: /File[/etc/neutron]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/use_namespaces]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/admin_user]/value: > value changed '%SERVICE_USER%' to 'neutron'\u001b[0m\n\u001b[mNotice: > /File[/etc/neutron/neutron.conf]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/use_syslog]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/auth_url]/value: > value changed 'http://localhost:5000/v2.0' to > 
'http://192.168.10.11:35357/'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_backlog]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Main/File[/etc/neutron/dnsmasq-neutron.conf]/ensure: defined > content as '{md5}2c4080d983906582143a8bf302c46557'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/log_dir]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/rpc_backend]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/state_path]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_config_file]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/control_exchange]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_delete_namespaces]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_host]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/notify_on_state_change]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Vncproxy/Nova_config[DEFAULT/novncproxy_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/admin_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_domain]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Config/Nova_config[DEFAULT/default_floating_pool]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_virtual_host]/ensure: > created\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Neutron/Neutron_config[DEFAULT/debug]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_use_ssl]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/verbose]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Scheduler/Nova_config[DEFAULT/scheduler_driver]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/File[/etc/ceilometer/]/owner: owner changed 'root' > to 'ceilometer'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/File[/etc/ceilometer/]/group: group changed 'root' > to 'ceilometer'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/File[/etc/ceilometer/]/mode: mode changed '0755' to > '0750'\u001b[0m\n\u001b[mNotice: /File[/etc/ceilometer/]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments.concat]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_use_ssl]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_sorting]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/mac_generation_retries]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_strategy]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_tenant_id]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/lock_path]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[oslo_messaging_rabbit/rabbit_ha_queues]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/lock_path]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_driver]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_bulk]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance/File[/etc/glance/]/owner: owner changed 'root' to > 'glance'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance/File[/etc/glance/]/mode: mode changed '0755' to > '0770'\u001b[0m\n\u001b[mNotice: /File[/etc/glance/]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: 
content changed > '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to > '{md5}f8abf835314a1775f891a7aa59842dcd'\u001b[0m\n\u001b[mNotice: > /File[snmptrapd.sysconfig]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed > '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to > '{md5}b432e3530685b2b53034e4dc1be5193e'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to > '0644'\u001b[0m\n\u001b[mNotice: /File[/etc/xinetd.conf]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed > '{md5}e914149a715dc82812a989314c026305' to > '{md5}1b13c1ecc4c5ca6dfcca44ae60c2dc3a'\u001b[0m\n\u001b[mNotice: > /File[snmpd.sysconfig]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed > '{md5}c07b9a377faea45b96b7d3bf8976004b' to > '{md5}42ad956e1bf4ceddaa0c649cd705e28d'\u001b[0m\n\u001b[mNotice: > /File[/etc/ntp.conf]/seltype: seltype changed 'etc_t' to > 'net_conf_t'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/notification_topics]/ensure: > created\u001b[0m\n\u001b[mNotice: /File[var-net-snmp]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Main/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: > executed successfully\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Expirer/Cron[ceilometer-expirer]/ensure: > created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ntp::Service/Service[ntp]: > Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[osapi_v3/enabled]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_listen]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_user]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/volume_api_class]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/ec2_workers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/admin_tenant_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Nova::Api/Nova_config[DEFAULT/use_forwarded_for]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/auth_strategy]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_pagination]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_lease_duration]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/cache_url]/ensure: > created\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/File[/srv/node]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: > defined content as > '{md5}e36257b9efab01459141d423cae57c7c'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: > defined content as > '{md5}f0825bad1e470de86ffabeb86dcc5d95'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: > defined content as > '{md5}588e496251838c4840c14b28b5aa7881'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: > defined content as > '{md5}f30a9be1016df87f195449d9e02d1857'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.d/prefork.conf]/ensure: > defined content as > '{md5}109c4f51dac10fc1b39373855e566d01'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: > defined content as > '{md5}ae005a36b3ac8c20af36c434561c8a75'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: > defined content as > '{md5}90ee8f8ef1a017cacadfda4225e10651'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: > defined content as > '{md5}704d6e8b02b0eca0eba4083960d16c52'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: > defined content as > '{md5}63594303ee808423679b1ea13dd5a784'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: > defined content as > '{md5}785d35cb285e190d589163b45263ca89'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: > defined content as > '{md5}084533c7a44e9129d0e6df952e2472b6'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as > '{md5}2fa646fe615e44d137a5d629f868c107'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: > defined content as > '{md5}d5feb88bec4570e2dbc41cce7e0de003'\u001b[0m\n\u001b[mNotice: > /File[/var/log/httpd]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: > defined 
content as > '{md5}1c9243de22ace4dc8266442c48ae0c92'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined > content as '{md5}c7ede4173da1915b7ec088201f030c28'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: > defined content as > '{md5}599866dfaf734f60f7e2d41ee8235515'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content > as '{md5}35506746efa82c4e203c8a724980bdc6'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content > changed '{md5}f5e7449c0f17bc856e86011cb5d152ba' to > '{md5}d54157c1c91291b915633b4781af8fe1'\u001b[0m\n\u001b[mNotice: > /File[/etc/httpd/conf/httpd.conf]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: > defined content as > '{md5}39942569bff2abdb259f9a347c7246bc'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined > content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: > defined content as > '{md5}3cf2fa309ccae4c29a4b875d0894cd79'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[env]/File[env.load]/ensure: > defined content as > '{md5}d74184d40d0ee24ba02626a188ee7e1a'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: > defined content as > '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: > defined content as > '{md5}c1363277984d22f99b70f7dce8753b60'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as > '{md5}c741d8ea840e6eb999d739eed47c69d7'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: > defined content as > '{md5}e95fbbf030fabec98b948f8dc217775c'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: > defined content as > '{md5}eca907865997d50d5130497665c3f82e'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: > defined content as > '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: > defined content as > '{md5}0e8468ecc1265f8947b8725f4d1be9c0'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed > '0750' to '0751'\u001b[0m\n\u001b[mNotice: /File[/var/log/horizon]/seluser: > seluser changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: > defined content as > '{md5}e1795e051e7aae1f865fde0d3b86a507'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: > defined content as > '{md5}494bcf4b843f7908675d663d8dc1bdc8'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: > defined content as > 
'{md5}d41656680003d7b890267bb73621c60b'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: > defined content as > '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as > '{md5}8b3feb3fc2563de439920bb2c52cbd11'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: > defined content as > '{md5}f82e9e6b871a276c324c9eeffcec8a61'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: > defined content as > '{md5}1bfb1c2a46d7351fc9eb47c659dee068'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: > defined content as > '{md5}2996277c73b1cd684a9a3111c355e0d3'\u001b[0m\n\u001b[mNotice: > /File[/var/www/]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: /File[/var/www/html]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-file_footer]/File[/var/lib/puppet/concat/15-default.conf/fragments/999_default-file_footer]/ensure: > defined content as > '{md5}e27b2525783e590ca1820f1e2118285d'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-serversignature]/File[/var/lib/puppet/concat/15-default.conf/fragments/90_default-serversignature]/ensure: > defined content as > '{md5}9bf5a458783ab459e5043e1cdf671fa7'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-directories]/File[/var/lib/puppet/concat/15-default.conf/fragments/60_default-directories]/ensure: > defined content as > '{md5}5e2a84875965faa5e3df0e222301ba37'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-docroot]/File[/var/lib/puppet/concat/15-default.conf/fragments/10_default-docroot]/ensure: > defined content as > '{md5}6faaccbc7ca8bc885ebf139223885d52'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-scriptalias]/File[/var/lib/puppet/concat/15-default.conf/fragments/180_default-scriptalias]/ensure: > defined content as > '{md5}7fc65400381c3a010f38870f94f236f0'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-apache-header]/File[/var/lib/puppet/concat/15-default.conf/fragments/0_default-apache-header]/ensure: > defined content as > '{md5}5ca41370b5812e1b87dd74a4499c0192'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf/fragments.concat]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: 
> /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments.concat]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[15-horizon_vhost.conf]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-directories]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/60_horizon_vhost-directories]/ensure: > defined content as > '{md5}18b24971675e526c8e3b1ea92849b1f4'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-file_footer]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/999_horizon_vhost-file_footer]/ensure: > defined content as > '{md5}e27b2525783e590ca1820f1e2118285d'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-logging]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/80_horizon_vhost-logging]/ensure: > defined content as > '{md5}c931b57a272eb434464ceab7ebeffaff'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-serversignature]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/90_horizon_vhost-serversignature]/ensure: > defined content as > '{md5}9bf5a458783ab459e5043e1cdf671fa7'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-docroot]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/10_horizon_vhost-docroot]/ensure: > defined content as > '{md5}bfbae283264d47e1c117f32689f18d79'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-apache-header]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/0_horizon_vhost-apache-header]/ensure: > defined content as > '{md5}c51c95185af4a0f8918c840eb2784d95'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-redirect]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/160_horizon_vhost-redirect]/ensure: > defined content as > '{md5}356ab43cb27ce407ef8cc6b3a88d9dad'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: > defined content as > '{md5}88095a914eedc3c2c184dd5d74c3954c'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: > defined content as > '{md5}26e5d44aae258b3e9d821cbbbd3e2826'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as > '{md5}983e865be85f5e0daaed7433db82995e'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-access_log]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/100_horizon_vhost-access_log]/ensure: > defined content as > '{md5}591db222039ca00f58ba1c8457861856'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat[15-default.conf]/File[/var/lib/puppet/concat/15-default.conf/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.d/prefork.load]/ensure: > 
defined content as > '{md5}157529aafcf03fa491bc924103e4608e'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-aliases]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/20_horizon_vhost-aliases]/ensure: > defined content as > '{md5}4a3191f0ccf5c6f7d6d2189906d08624'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: > defined content as > '{md5}c7d5c61c534ba423a79b0ae78ff9be35'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Expirer/Ceilometer_config[database/time_to_live]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_username]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_tenant_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/rpc_backend]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/evaluation_service]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_hosts]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Central/Ceilometer_config[coordination/backend_url]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/notification_topics]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_auth_url]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/use_syslog]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Collector/Ceilometer_config[collector/udp_address]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/record_history]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/evaluation_interval]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_ha_queues]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_region_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/store_events]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/auth_uri]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Collector/Ceilometer_config[collector/udp_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/identity_uri]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/log_dir]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/verbose]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Api/Ceilometer_config[api/host]/ensure: > created\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/admin_user]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_virtual_host]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Alarm::Evaluator/Ceilometer_config[alarm/partition_rpc_topic]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/admin_tenant_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_userid]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat::Fragment[local_settings.py]/File[/var/lib/puppet/concat/_etc_openstack-dashboard_local_settings/fragments/50_local_settings.py]/ensure: > defined content as > '{md5}dbc9aa2747d9f33b4231806f1c27c69d'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/Exec[concat_/etc/openstack-dashboard/local_settings]/returns: > executed successfully\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/Exec[concat_/etc/openstack-dashboard/local_settings]: > Triggered 'refresh' from 3 events\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/enable_security_group]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/external_ids: > external_ids changed '' to 'bridge-id=br-ex'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/identity_uri]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: > defined content as > '{md5}8077c34a71afcf41c8fc644830935915'\u001b[0m\n\u001b[mNotice: > /File[/etc/xinetd.d]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/os_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[oslo_messaging_rabbit/rabbit_hosts]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed > '{md5}913e2613413a45daa402d0fbdbaba676' to > 
'{md5}0f92e52f70b5c64864657201eb9581bb'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Snmp/File[snmptrapd.conf]/mode: mode changed '0600' to > '0644'\u001b[0m\n\u001b[mNotice: /File[snmptrapd.conf]/seluser: seluser > changed 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/rpc_backend]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: > defined content as > '{md5}df9e85f8da0b239fe8e698ae7ead4f60'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Api/Ceilometer_config[keystone_authtoken/admin_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: > content changed '{md5}32d3f1a6681c9a8873975a7b756e0e2d' to > '{md5}dbc9aa2747d9f33b4231806f1c27c69d'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: > group changed 'apache' to 'root'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/mode: > mode changed '0640' to '0644'\u001b[0m\n\u001b[mNotice: > /File[/etc/openstack-dashboard/local_settings]/seluser: seluser changed > 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Horizon/Exec[refresh_horizon_django_cache]: Triggered 'refresh' > from 1 events\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: > defined content as > '{md5}2d1a1afcae0c70557251829a8586eeaf'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Main/Swift::Storage::Filter::Recon[container]/Concat::Fragment[swift_recon_container]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments/35_swift_recon_container]/ensure: > defined content as > '{md5}e1a260602323a9e194999f76b55dc468'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments.concat]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments.concat]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat::Fragment[swift-account-6002]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments/00_swift-account-6002]/ensure: > defined content as > '{md5}03c4f9c5dcf21fd573cb050ac6d49bf8'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: > defined content as > '{md5}515cdf5b573e961a60d2931d39248648'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Rsync::Server/Concat[/etc/rsync.conf]/File[/var/lib/puppet/concat/_etc_rsync.conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Rsync::Server/Concat[/etc/rsync.conf]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Rsync::Server/Concat[/etc/rsync.conf]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Rsync::Server::Module[container]/Concat::Fragment[frag-container]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments/10_container_frag-container]/ensure: > defined content as > '{md5}8e648b0f3b538b6726216c98b72a7ab8'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova/Nova_config[DEFAULT/use_syslog]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron/Neutron_config[DEFAULT/base_mac]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Db/Ceilometer_config[database/connection]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Triggered 'refresh' > from 2 events\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat::Fragment[swift-object-6000]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments/00_swift-object-6000]/ensure: > defined content as > '{md5}f5042afb6f245bdefe90ca597385e5d1'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Main/Swift::Storage::Filter::Healthcheck[object]/Concat::Fragment[swift_healthcheck_object]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments/25_swift_healthcheck_object]/ensure: > defined content as > 
'{md5}4c8cd2d18bcd82e4052642d0d45fe6f0'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Rsync::Server::Module[object]/Concat::Fragment[frag-object]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments/10_object_frag-object]/ensure: > defined content as > '{md5}6c5dcf4876e38ea0927a443b4533e955'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_use_ssl]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: > defined content as > '{md5}bf57b94b5aec35476fc2a2dc3861f132'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat::Fragment[swift-container-6001]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments/00_swift-container-6001]/ensure: > defined content as > '{md5}d0396b3be5998f9e318569460c77efc7'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/content: content > changed '{md5}1d7d7dd9f1b4beef5a21688ededda355' to > '{md5}2421a3c6df32c7e38c2a7a22afdf5728'\u001b[0m\n\u001b[mNotice: > /File[autoindex.conf]/seluser: seluser changed 'unconfined_u' to > 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/debug]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Main/Swift::Storage::Filter::Healthcheck[account]/Concat::Fragment[swift_healthcheck_account]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments/25_swift_healthcheck_account]/ensure: > defined content as > '{md5}4c8cd2d18bcd82e4052642d0d45fe6f0'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/enable_tunneling]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[ovs-cleanup-service]/enable: > enable changed 'false' to 'true'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/polling_interval]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_fuzzy_delay]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/use_namespaces]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/enable_metadata_proxy]/ensure: > created\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/router_delete_namespaces]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/handle_internal_only_routers]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/send_arp_for_ha]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/periodic_interval]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/external_network_bridge]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/metadata_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: > defined content as > '{md5}01e4d392225b518a65b0f7d6c4e21d29'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments.concat.out]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Concat::Fragment[Apache ports > header]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments/10_Apache > ports header]/ensure: defined content as > '{md5}afe35cb5747574b700ebaa0f0b3a626e'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/var/lib/puppet/concat/_etc_httpd_conf_ports.conf/fragments.concat]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer::Api/Ceilometer_config[api/port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: > created\u001b[0m\n\u001b[mNotice: /Stage[main]/Snmp/Service[snmptrapd]: > Triggered 'refresh' from 5 events\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[keystone_authtoken/auth_uri]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: > defined content as > '{md5}26e2683352fc1599f29573ff0d934e79'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/etc/xinetd.d/rsync]/ensure: > created\u001b[0m\n\u001b[mNotice: /Stage[main]/Xinetd/Service[xinetd]: > Triggered 'refresh' from 5 events\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: > defined content as > '{md5}d1045f54d2798499ca0f030ca0eef920'\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat::Fragment[horizon_vhost-wsgi]/File[/var/lib/puppet/concat/15-horizon_vhost.conf/fragments/260_horizon_vhost-wsgi]/ensure: > defined content as > '{md5}3f1e888993d05222c5a79ee4baa35cde'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Main/Swift::Storage::Filter::Recon[account]/Concat::Fragment[swift_recon_account]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments/35_swift_recon_account]/ensure: > defined content as > '{md5}e1a260602323a9e194999f76b55dc468'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: > defined content as > '{md5}66a1e2064a140c3e7dca7ac33877700e'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Apache/Apache::Vhost[default]/Concat::Fragment[default-access_log]/File[/var/lib/puppet/concat/15-default.conf/fragments/100_default-access_log]/ensure: > defined content as > '{md5}65fb033baac888b4ab85c295e870cb8f'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Ceilometer/File[/etc/ceilometer/ceilometer.conf]/owner: owner > changed 'root' to 'ceilometer'\u001b[0m\n\u001b[mNotice: > /File[/etc/ceilometer/ceilometer.conf]/seluser: seluser changed > 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Main/Swift::Storage::Filter::Recon[object]/Concat::Fragment[swift_recon_object]/File[/var/lib/puppet/concat/_etc_swift_object-server.conf/fragments/35_swift_recon_object]/ensure: > defined content as > '{md5}e1a260602323a9e194999f76b55dc468'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Rsync::Server/Concat::Fragment[rsyncd_conf_header]/File[/var/lib/puppet/concat/_etc_rsync.conf/fragments/00_header_rsyncd_conf_header]/ensure: > defined content as > '{md5}2ad07d2ccf85d8c10a886f8aed109abb'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Snmp/File[snmpd.conf]/content: content changed > '{md5}8307434bc8ed4e2a7df4928fb4232778' to > '{md5}229bae725885187f54da0726272321d9'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Snmp/File[snmpd.conf]/mode: mode changed '0600' to > '0644'\u001b[0m\n\u001b[mNotice: /File[snmpd.conf]/seluser: seluser changed > 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to > 'running'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/File[/etc/glance/glance-cache.conf]/owner: owner > changed 'root' to 'glance'\u001b[0m\n\u001b[mNotice: > /File[/etc/glance/glance-cache.conf]/seluser: seluser changed 'unconfined_u' > to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[database/idle_timeout]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/File[/etc/glance/glance-api-paste.ini]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/File[/etc/glance/glance-api.conf]/owner: owner > changed 'root' to 'glance'\u001b[0m\n\u001b[mNotice: > /File[/etc/glance/glance-api.conf]/seluser: seluser changed 'unconfined_u' > to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > 
/Stage[main]/Glance::Api/Glance_api_config[DEFAULT/log_dir]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/debug]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/admin_user]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[keystone_authtoken/identity_uri]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Glance_registry_config[DEFAULT/bind_host]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/File[/etc/glance/glance-registry.conf]/owner: > owner changed 'root' to 'glance'\u001b[0m\n\u001b[mNotice: > /File[/etc/glance/glance-registry.conf]/seluser: seluser changed > 'unconfined_u' to 'system_u'\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Glance_registry_config[DEFAULT/log_dir]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Glance_registry_config[keystone_authtoken/admin_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Nova::Db/Nova_config[database/idle_timeout]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Glance_registry_config[DEFAULT/bind_port]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Glance_registry_config[DEFAULT/use_syslog]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/verbose]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[keystone_authtoken/admin_password]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/admin_tenant_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[keystone_authtoken/admin_tenant_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Glance_registry_config[database/idle_timeout]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[database/connection]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/backlog]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Cinder::Api/Cinder_api_paste_ini[filter:authtoken/admin_tenant_name]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Cinder/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v2_api]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Cinder/Cinder_config[database/idle_timeout]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Cinder/Cinder_config[DEFAULT/use_syslog]/ensure: > created\u001b[0m\n\u001b[mNotice: > /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: > 
created.
Cinder (Cinder_config, cinder.conf): created DEFAULT/glance_num_retries, glance_api_servers, glance_api_version, glance_api_ssl_compression, glance_api_insecure, enabled_backends, debug, verbose, storage_availability_zone, default_availability_zone, log_dir, auth_strategy, osapi_volume_listen, osapi_volume_workers, control_exchange, enable_v1_api, amqp_durable_queues, rpc_backend, scheduler_driver; database/connection, max_retries, retry_interval, min_pool_size; oslo_messaging_rabbit/rabbit_hosts, rabbit_use_ssl, rabbit_virtual_host, rabbit_userid, rabbit_password; tripleo_ceph/host
Cinder (Cinder_api_paste_ini, filter:authtoken): created auth_uri, identity_uri, admin_user
Cinder test volume (Cinder::Setup_test_volume): /var/lib/cinder seluser changed 'unconfined_u' to 'system_u'; Exec[create_/var/lib/cinder/cinder-volumes] executed successfully; losetup /dev/loop2 /var/lib/cinder/cinder-volumes, pvcreate /dev/loop2 and vgcreate cinder-volumes /dev/loop2 each triggered a 'refresh'
Nova: created Nova_config[database/connection]
Glance API (Glance_api_config): created DEFAULT/log_file, registry_client_protocol, registry_host, bind_port, workers, use_syslog, verbose; glance_store/stores; keystone_authtoken/auth_uri, admin_user
Glance API cache (Glance_cache_config): created DEFAULT/admin_password, registry_host, debug
Glance Registry (Glance_registry_config): created database/connection, paste_deploy/flavor, DEFAULT/log_file, debug, verbose; keystone_authtoken/auth_uri, identity_uri, admin_tenant_name, admin_user; /etc/glance/glance-registry-paste.ini created
Neutron (Neutron_config, keystone_authtoken): admin_user changed '%SERVICE_USER%' to 'neutron'; admin_tenant_name changed '%SERVICE_TENANT_NAME%' to 'service'; identity_uri changed 'http://127.0.0.1:5000' to 'http://192.168.10.11:35357/'; auth_uri changed 'http://127.0.0.1:35357/v2.0/' to 'http://192.168.10.11:5000/v2.0/'; admin_password changed '[old secret redacted]' to '[new secret redacted]'
Neutron (Neutron_config): created DEFAULT/router_scheduler_driver, router_distributed, l3_ha_net_cidr, min_l3_agents_per_router, max_l3_agents_per_router, allow_automatic_l3agent_failover, agent_down_time, rpc_workers, api_workers; database/connection, min_pool_size, max_pool_size, max_overflow, max_retries, retry_interval
Neutron (Neutron_api_config, filter:authtoken): created auth_uri, identity_uri, admin_user, admin_password
Heat (Heat_config): created DEFAULT/log_dir, debug, verbose, use_syslog, rpc_backend, rpc_response_timeout, amqp_durable_queues, deferred_auth_method, heat_metadata_server_url, heat_waitcondition_server_url, heat_watch_server_url, heat_stack_user_role, engine_life_check_timeout, default_deployment_signal_transport, default_software_config_transport; heat_api/bind_host, bind_port, workers; heat_api_cfn/bind_host, bind_port; heat_api_cloudwatch/bind_port, workers; keystone_authtoken/admin_user, admin_password, admin_tenant_name, identity_uri, auth_uri; ec2authtoken/auth_uri; database/connection, idle_timeout; oslo_messaging_rabbit/rabbit_hosts, rabbit_use_ssl, rabbit_ha_queues, rabbit_virtual_host, rabbit_userid
Heat (Heat_config, continued): created DEFAULT/instance_user, auth_encryption_key; heat_api_cloudwatch/bind_host; heat_api_cfn/workers; oslo_messaging_rabbit/rabbit_password
Neutron: created Neutron_config[database/idle_timeout]
Cinder: created Cinder_api_paste_ini[filter:authtoken/admin_password]
Apache: mime_magic.conf and mime_magic.load defined
Ceph monitor (overcloud-controller-2): /tmp/ceph-mon-keyring-overcloud-controller-2 defined; Exec[ceph-mon-mkfs-overcloud-controller-2] resolved mon_data=/var/lib/ceph/mon/ceph-overcloud-controller-2 and executed successfully; Exec[rm-keyring-overcloud-controller-2] executed successfully
Rsync: /var/lib/puppet/concat/_etc_rsync.conf/fragments.concat created
RabbitMQ: Rabbitmq_policy[ha-all@/] pattern changed '^(?!amq\\.).*' to '^(?!amq\.).*'
Neutron ML2/OVS: created Neutron_plugin_ml2[ml2_type_flat/flat_networks] and Neutron_agent_ovs[agent/tunnel_types]
Glance RBD backend: created Glance_api_config[glance_store/rbd_store_user]
Keystone files: /var/cache/keystone created; /var/lib/keystone mode changed 0755 to 0750; /etc/keystone owner changed root to keystone; keystone.conf owner changed root to keystone, mode 0640 to 0600; seluser set to system_u on /var/lib/keystone, /var/log/keystone, /etc/keystone and keystone.conf
Keystone (Keystone_config): created DEFAULT/rabbit_hosts, rabbit_host, rabbit_port, rabbit_userid, rabbit_password, rabbit_virtual_host, rabbit_use_ssl, rabbit_ha_queues, admin_token, admin_port, admin_bind_host, admin_workers, public_port, public_bind_host, public_workers, log_dir, debug, use_syslog; fernet_tokens/key_repository; signing/keyfile, certfile, ca_certs, ca_key, key_size, cert_subject; catalog/driver, template_file; ssl/enable; token/provider, driver, expiration; revoke/driver; paste_deploy/config_file; database/connection, idle_timeout; ec2/driver
Keystone SSL material: /etc/keystone/ssl, ssl/private and ssl/certs created; signing_key.pem, ca.pem and signing_cert.pem written
Swift: /etc/swift/object-server/ owner and group changed root to swift, seluser changed
to system_u
Swift storage: /etc/swift/account-server/ and /etc/swift/container-server/ owner and group changed root to swift, seluser set to system_u; services swift-account-reaper, swift-account-auditor, swift-container-auditor, swift-object-auditor and swift-object-updater changed 'stopped' to 'running'
Swift proxy: /etc/swift/proxy-server.conf assembled from concat fragments (proxy, authtoken, cache, catch_errors, healthcheck, ratelimit, proxy-logging, tempurl, formpost, staticweb, keystone); /var/cache/swift group changed to swift, mode 0755 to 0700; proxy-server.conf content updated, owner root to swift, mode 0640 to 0660; Service[swift-proxy] changed 'stopped' to 'running'
Neutron metadata agent: DEFAULT/admin_password changed '[old secret redacted]' to '[new secret redacted]'; DEFAULT/auth_insecure created; Neutron_config[oslo_messaging_rabbit/rabbit_userid] and Neutron_api_config[filter:authtoken/admin_tenant_name] created
Swift account/container servers: /etc/swift/account-server.conf and container-server.conf rebuilt from concat fragments, owner changed root to swift, seluser system_u; services swift-account, swift-account-replicator, swift-container and swift-container-replicator changed 'stopped' to 'running'
Apache: default vhost logging fragment defined and 15-default.conf concat refreshed; dav_fs.conf defined; horizon_vhost ServerAlias fragment defined and 15-horizon_vhost.conf concat refreshed
Ceph monitor: Exec[ceph-mon-ceph.client.admin.keyring-overcloud-controller-2] executed successfully; Service[ceph-mon-overcloud-controller-2] changed 'stopped' to 'running'
Cinder RBD backend (tripleo_ceph): /etc/sysconfig/openstack-cinder-volume created and initscript env set via File_line; Cinder_config created for volume_driver, volume_backend_name, rbd_pool, rbd_user, rbd_secret_uuid, rbd_ceph_conf, rbd_flatten_volume_from_snapshot, rbd_max_clone_depth
Cinder: Exec[cinder-manage db_sync] triggered 'refresh' from 42 events; services cinder-api, cinder-volume and cinder-scheduler changed 'stopped' to 'running'
Nova: Nova_config[DEFAULT/novncproxy_base_url] created
Rsync: account module fragment defined; /etc/rsync.conf concat executed and /etc/rsync.conf written
Ceilometer: Ceilometer_config[publisher/metering_secret] created; services ceilometer-agent-notification, ceilometer-collector, ceilometer-agent-central, ceilometer-alarm-notifier, ceilometer-api and ceilometer-alarm-evaluator changed 'stopped' to 'running'
Swift object server: /etc/swift/object-server.conf rebuilt from concat fragments, owner changed root to swift, seluser system_u; services swift-object and swift-object-replicator changed 'stopped' to 'running'
Apache: Listen 192.168.10.20:80 fragment defined; /etc/httpd/conf/ports.conf concat executed and ports.conf written
Neutron metadata agent: DEFAULT/admin_tenant_name changed '%SERVICE_TENANT_NAME%' to 'service'
Swift: Service[swift-container-updater] changed 'stopped' to 'running'
Neutron: Neutron_config[DEFAULT/l3_ha] created; Exec[neutron-db-sync] triggered 'refresh' from 64 events; services neutron-server, neutron-ovs-agent-service, neutron-dhcp-service, neutron-l3 and neutron-metadata changed 'stopped' to 'running'
Keystone: Keystone_config[DEFAULT/verbose] created; Exec[keystone-manage pki_setup] triggered 'refresh' from 37 events
Glance: Glance_cache_config[DEFAULT/auth_url] created
Glance Registry, Exec[glance-manage db_sync] (2015-09-09 20:36:07, pid 27081): DEBUG oslo_db.sqlalchemy.session reports MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION; DEBUG migrate.versioning.repository loads the repository /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo, then migrate.versioning.script.base loads each migration script in turn and reports it loaded successfully: 001_add_images_table.py, 002_add_image_properties_table.py, 003_sqlite_upgrade.sql, 003_sqlite_downgrade.sql, 003_add_disk_format.py, 004_add_checksum.py, 005_size_big_integer.py, 006_sqlite_upgrade.sql, 006_sqlite_downgrade.sql, 006_mysql_upgrade.sql, 006_mysql_downgrade.sql, 006_key_to_name.py, 007_add_owner.py, 008_add_image_members_table.py, 009_add_mindisk_and_minram.py, 010_default_update_at.py, 011_sqlite_upgrade.sql, 011_sqlite_downgrade.sql, 011_make_mindisk_and_minram_notnull.py, 012_id_to_uuid.py, 013_sqlite_downgrade.sql, 013_add_protected.py, 014_add_image_tags_table.py, 015_quote_swift_credentials.py, 016_sqlite_downgrade.sql, 016_add_status_image_member.py, 017_quote_encrypted_swift_credentials.py, 018_add_image_locations_table.py, 019_migrate_image_locations.py, 020_drop_images_table_location.py, 021_set_engine_mysql_innodb.py, 022_image_member_index.py, 023_placeholder.py, 024_placeholder.py, 025_placeholder.py, 026_add_location_storage_information.py, and then begins loading 027_checksum_index.py
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.122 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.122 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.123 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.124 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.125 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.126 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_downgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_downgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.127 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.128 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.129 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.130 27081 DEBUG migrate.versioning.repository [-] > Repository > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo loaded > successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:82\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.130 27081 DEBUG migrate.versioning.repository [-] > Config: OrderedDict([('db_settings', OrderedDict([('__name__', > 'db_settings'), ('repository_id', 'Glance Migrations'), ('version_table', > 'migrate_version'), ('required_dbs', '[]')]))]) __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:83\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.156 27081 DEBUG migrate.versioning.repository [-] > Loading repository > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:76\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.157 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/001_add_images_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.157 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/001_add_images_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/002_add_image_properties_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/002_add_image_properties_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_upgrade.sql... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_upgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.158 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_downgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_downgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_add_disk_format.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/003_add_disk_format.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/004_add_checksum.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.159 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/004_add_checksum.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/005_size_big_integer.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/005_size_big_integer.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_upgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_upgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.160 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_downgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_downgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_upgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_upgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_downgrade.sql... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.161 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_downgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/007_add_owner.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.162 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/007_add_owner.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/008_add_image_members_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/008_add_image_members_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/009_add_mindisk_and_minram.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/009_add_mindisk_and_minram.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.163 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/010_default_update_at.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/010_default_update_at.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_upgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_upgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_downgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.164 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_downgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_make_mindisk_and_minram_notnull.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/011_make_mindisk_and_minram_notnull.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.165 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_sqlite_downgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_sqlite_downgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_add_protected.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/013_add_protected.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.166 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/015_quote_swift_credentials.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/015_quote_swift_credentials.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_sqlite_downgrade.sql... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_sqlite_downgrade.sql > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.167 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_add_status_image_member.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/016_add_status_image_member.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/018_add_image_locations_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.168 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/018_add_image_locations_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/019_migrate_image_locations.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/019_migrate_image_locations.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/020_drop_images_table_location.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/020_drop_images_table_location.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.169 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/021_set_engine_mysql_innodb.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/021_set_engine_mysql_innodb.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/022_image_member_index.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/022_image_member_index.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.170 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/023_placeholder.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/023_placeholder.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/024_placeholder.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/024_placeholder.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/025_placeholder.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.171 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/025_placeholder.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/026_add_location_storage_information.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/026_add_location_storage_information.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.172 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py... 
> __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.173 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py... > __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:27\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.174 27081 DEBUG migrate.versioning.script.base [-] > Script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py > loaded successfully __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/script/base.py:30\u001b[0m\n\u001b[mNotice: > /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: > 2015-09-09 20:36:07.175 27081 DEBUG migrate.versioning.script.base [-] > Loading script > /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py... 
> [... repeated DEBUG "Loading script ..." / "... loaded successfully"
> messages from migrate.versioning.script.base, one pair for each Glance
> migration script up to 041_add_artifact_tables.py, trimmed ...]
> 2015-09-09 20:36:07.179 27081 DEBUG migrate.versioning.repository [-]
> Repository /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo
> loaded successfully
> 2015-09-09 20:36:07.188 27081 INFO migrate.versioning.api [-] 38 -> 39...
> 2015-09-09 20:36:07.216 27081 CRITICAL glance [-] OperationalError:
> (OperationalError) (1091, "Can't DROP 'ix_namespaces_namespace'; check that
> column/key exists") '\nDROP INDEX ix_namespaces_namespace ON
> metadef_namespaces' ()
> 2015-09-09 20:36:07.216 27081 TRACE glance Traceback (most recent call last):
>   File "/usr/bin/glance-manage", line 10, in <module>
>     sys.exit(main())
>   [... glance/cmd/manage.py, oslo_db and migrate frames trimmed ...]
>   File "/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py", line 34, in upgrade
>     Index('ix_namespaces_namespace', metadef_namespaces.c.namespace).drop()
>   [... sqlalchemy frames trimmed ...]
>   File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
>     raise errorclass, errorvalue
> OperationalError: (OperationalError) (1091, "Can't DROP
> 'ix_namespaces_namespace'; check that column/key exists") '\nDROP INDEX
> ix_namespaces_namespace ON metadef_namespaces' ()
> [... remaining Puppet notices trimmed: glance-registry, glance-api, the
> nova services, keystone, httpd/horizon and the heat services are started,
> nova.conf and heat.conf ownership is fixed up, and nova-db-sync returns
> "Command failed, please check log for more info" ...]
> Notice: Finished catalog run in 103.96 seconds",
> "deploy_stderr": "[... deprecation warnings trimmed: keepalived
> notification_email_to, concat::setup, the swift incoming_chmod/outgoing_chmod
> defaults and the not-yet-evaluated nova::compute vncproxy variables ...]
> Error: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]: Failed to
> call refresh: glance-manage --config-file=/etc/glance/glance-registry.conf
> db_sync returned 1 instead of one of [0]
> Error: /Stage[main]/Nova::Api/Exec[nova-db-sync]: Failed to call refresh:
> /usr/bin/nova-manage db sync returned 1 instead of one of [0]",
> "deploy_status_code": 6
> },
> "creation_time": "2015-09-09T20:34:20Z",
> "updated_time": "2015-09-09T20:36:50Z",
> "input_values": {},
> "action": "CREATE",
> "status_reason": "deploy_status_code : Deployment exited with non-zero
> status code: 6",
> "id": "cc239277-50c2-4d94-952d-181ccf3199da"
> }
>
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
> Office: (703) 951-7760
>
> On Sep 9, 2015, at 2:18
PM, Marius Cornea wrote: > > For ssh access you can use the heat-admin user with the private key in > /home/stack/.ssh/id_rsa > > On Wed, Sep 9, 2015 at 7:26 PM, Ignacio Bravo wrote: > > Dan, > > Thanks for your tips. It seems like there is an issue with the networking > piece, as that is where all the nodes are in building state. I have > something similar to this for each one of the Controller nodes: > > | NetworkDeployment | cac8c93b-b784-4a91-bc23-1a932bb1e62f > | OS::TripleO::SoftwareDeployment | CREATE_IN_PROGRESS | > 2015-09-09T15:22:10Z | 1 | > | UpdateDeployment | 97359c35-d2c7-4140-98ed-24525ee4be6b > | OS::Heat::SoftwareDeployment | CREATE_IN_PROGRESS | > 2015-09-09T15:22:10Z | 1 | > > Following your advice, I was trying to ssh into the nodes, but didn?t know > what username/password combination to use. I tried root, heat-admin, stack > with different password located in /home/stack/triple0-overcloud-passwords > but none of the combinations seemed to work. > > BTW, I am using the instructions from > https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html > and installing in an HP c7000 Blade enclosure. > > Thanks, > IB > > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > On Sep 9, 2015, at 11:40 AM, Dan Sneddon wrote: > > Did that error happen after a long time, like 4 hours? I have seen that > error > when the deploy was actually hung, and the token timeout gets reached and > then > every API call gets an authentication failed response. Unfortunately, you'll > need to diagnose what part of the deployment is failing. Here's what I > usually > do: > > # Get state of all uncompleted resources > heat resource-list overcloud -n 5 | grep -iv complete > > # Look closer at failed resources from above command > heat resource-show > > nova list > (then ssh as heat-admin to the nodes and check for network connectivity and > errors in the logs) > > -Dan Sneddon > > ----- Original Message ----- > > Never hit that. Did you try export HEAT_INCLUDE_PASSWORD=1 and rerun deploy? > > On Wed, Sep 9, 2015 at 4:47 PM, Ignacio Bravo wrote: > > Thanks. I was able to delete the existing deployment, but when trying to > deploy again using the CLI I got an authentication error. Ideas? > > [root at bl16 ~]# su stack > [stack at bl16 root]$ cd > [stack at bl16 ~]$ cd ~ > [stack at bl16 ~]$ source stackrc > [stack at bl16 ~]$ openstack overcloud deploy --ceph-storage-scale 3 > --control-scale 3 --compute-scale 2 --compute-flavor Compute_24 > --ntp-server > 192.168.10.1 --templates -e > /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml > Deploying templates in the directory > /usr/share/openstack-tripleo-heat-templates > ERROR: openstack ERROR: Authentication failed. Please try again with option > --include-password or export HEAT_INCLUDE_PASSWORD=1 > Authentication required > > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > On Sep 8, 2015, at 5:29 PM, Marius Cornea wrote: > > Hi Ignacio, > > Yes, I believe ceph currently only works with using direct heat > templates. You can check the instruction on how to get it deployed via > cli here [1] Make sure you select the Ceph role on the environment > specific content (left side column). > > To delete existing deployments run 'heat stack-delete overcloud' on > the undercloud node with the credentials in the stackrc file loaded. 
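(Pulling the advice from this thread together, the diagnose-and-retry loop
looks roughly like the following when run on the undercloud as the stack
user. The resource name and node IP are placeholders, and the deploy flags
simply echo the ones used earlier in the thread; adjust both to your
environment.)

# load the undercloud credentials
source ~/stackrc

# list everything that is not yet complete, then inspect a failed resource
heat resource-list overcloud -n 5 | grep -iv complete
heat resource-show overcloud <failed_resource_name>

# map the failed resource to a node and log in as heat-admin with the stack user's key
nova list
ssh -i /home/stack/.ssh/id_rsa heat-admin@<node_ip>

# if the stack is beyond repair, delete it and redeploy from the CLI
heat stack-delete overcloud
export HEAT_INCLUDE_PASSWORD=1
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
  --ntp-server 192.168.10.1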
> > In order to get a HA deployment you just need to deploy 3 controllers > by passing '--control-scale 3' to the 'openstack overcloud deploy' > command. > > [1] > https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html > > On Tue, Sep 8, 2015 at 11:07 PM, Ignacio Bravo > wrote: > > All, > > I was trying to deploy an overcloud using the RDO Manager GUI (Tuskar UI) > and the deployment was unsuccessful. It seems that there is a particular > bug > currently with deploying a Ceph based storage with the GUI, so I wanted to > ask the list if > > 1. Indeed this was the case. > 2. How to delete my configuration and redeploy using the CLI > 3. Finally, if there is any scripted or explained way to perform an HA > installation. I read the reference to github, but this seems to be more > about the components but there was not a step by step instruction/ > explanation. > > > Thanks! > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > From flavio at redhat.com Fri Sep 11 07:52:33 2015 From: flavio at redhat.com (Flavio Percoco) Date: Fri, 11 Sep 2015 09:52:33 +0200 Subject: [Rdo-list] qpid support In-Reply-To: References: <55ED9F09.1010504@redhat.com> <88964CAC-ED4F-4804-97EB-B84C43312586@redhat.com> Message-ID: <20150911075233.GP6373@redhat.com> On 07/09/15 16:40 +0200, Ha?kel wrote: >For the record, the package has been unretired >https://bugzilla.redhat.com/show_bug.cgi?id=1248100 > >AFAIK, nobody uses qpid support nowadays with RDO. This is correct but, as long as the driver is in python-oslo-messaging, I think we should keep python-qpid around. Cheers, Flavio > >Regards, >H. -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From apevec at gmail.com Fri Sep 11 09:35:19 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 11 Sep 2015 11:35:19 +0200 Subject: [Rdo-list] 404 on http://trunk.rdoproject.org/centos7/current/delorean.repo In-Reply-To: References: Message-ID: 2015-09-10 7:11 GMT+02:00 Steven Dake (stdake) : > Could someone address this problem please? Works now for me, do you have exact time 404 happened so we can check logs? Cheers, Alan From vimartin at redhat.com Fri Sep 11 14:36:33 2015 From: vimartin at redhat.com (Victoria Martinez de la Cruz) Date: Fri, 11 Sep 2015 11:36:33 -0300 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <20150910192733.GA24947@redhat.com> References: <20150910145334.GD16387@redhat.com> <20150910192733.GA24947@redhat.com> Message-ID: <55F2E6F1.1030501@redhat.com> On 09/10/2015 04:27 PM, Steve Linabery wrote: > On Thu, Sep 10, 2015 at 09:20:23PM +0200, Ha?kel wrote: >> Hi, >> >> just for the record, Victoria started working a while ago on rally packaging >> https://bugzilla.redhat.com/show_bug.cgi?id=1193986 >> It didn't get on top of my TODO for a while, though. >> >> Regards, >> H. > Splendid! I am glad I emailed; didn't know the packaging effort was this far along. 
> > Victoria, any reason not to use 0.0.4 (current upstream rally release)? > > s|e Hi Steve! No reason, it just needs an update. I honestly didn't have enough time to update it, but if there is interest on having Rally soon I'll try to get it working in the short term. There were some issues related to dependencies, but Alex Drahon submitted a doc that explains how he managed to get it working on Centos 7 [0]. Thanks, Victoria [0] https://mojo.redhat.com/docs/DOC-1041457?sr=inbox&ru=3418 From slinaber at redhat.com Fri Sep 11 15:07:59 2015 From: slinaber at redhat.com (Steve Linabery) Date: Fri, 11 Sep 2015 10:07:59 -0500 Subject: [Rdo-list] openstack-rally in delorean In-Reply-To: <55F2E6F1.1030501@redhat.com> References: <20150910145334.GD16387@redhat.com> <20150910192733.GA24947@redhat.com> <55F2E6F1.1030501@redhat.com> Message-ID: <20150911150759.GA6368@redhat.com> On Fri, Sep 11, 2015 at 11:36:33AM -0300, Victoria Martinez de la Cruz wrote: > > > On 09/10/2015 04:27 PM, Steve Linabery wrote: > >On Thu, Sep 10, 2015 at 09:20:23PM +0200, Ha?kel wrote: > >>Hi, > >> > >>just for the record, Victoria started working a while ago on rally packaging > >>https://bugzilla.redhat.com/show_bug.cgi?id=1193986 > >>It didn't get on top of my TODO for a while, though. > >> > >>Regards, > >>H. > >Splendid! I am glad I emailed; didn't know the packaging effort was this far along. > > > >Victoria, any reason not to use 0.0.4 (current upstream rally release)? > > > >s|e > > Hi Steve! > > No reason, it just needs an update. I honestly didn't have enough time to > update it, but if there is interest on having Rally soon I'll try to get it > working in the short term. There were some issues related to dependencies, > but Alex Drahon submitted a doc that explains how he managed to get it > working on Centos 7 [0]. > > Thanks, > > Victoria > > [0] https://mojo.redhat.com/docs/DOC-1041457?sr=inbox&ru=3418 I updated the spec to use 0.0.4 and changed a path in the %files section for the bash completion config (not sure about that use of macros but it works). Here [1], if it's helpful. Ran mock and rpmlint gives only warnings, no errors. 
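(A rough sketch of that rebuild-and-lint check; the epel-7-x86_64 chroot,
the result directory and the 0.0.4-1 NVR below are assumptions, not details
taken from Steve's setup.)

# build a source RPM from the updated spec, rebuild it in mock, then lint the results
mock -r epel-7-x86_64 --buildsrpm --spec openstack-rally.spec --sources ~/rpmbuild/SOURCES
mock -r epel-7-x86_64 --rebuild /var/lib/mock/epel-7-x86_64/result/openstack-rally-0.0.4-1.el7.src.rpm
rpmlint /var/lib/mock/epel-7-x86_64/result/*.rpm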
hth s|e [1] https://slinabery.fedorapeople.org/openstack-rally.spec From ibravo at ltgfederal.com Fri Sep 11 17:39:44 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Fri, 11 Sep 2015 13:39:44 -0400 Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <0D2CA8A2-C988-4B2D-82C9-7CEC2F122323@ltgfederal.com> <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> <5EA2DA6D-B64F-460D-8CF8-B7757337FF3F@ltgfederal.com> <11FC8B09-058B-4257-BEEE-9F182CB2CF21@ltgfederal.com> Message-ID: <08B04E1A-44C7-4728-9482-4C1039BB3635@ltgfederal.com> Marius, I am getting a permission error when trying to run nova-manage [heat-admin at overcloud-controller-2 ~]$ /usr/bin/nova-manage db sync Traceback (most recent call last): File "/usr/bin/nova-manage", line 10, in sys.exit(main()) File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1330, in main config.parse_args(sys.argv) File "/usr/lib/python2.7/site-packages/nova/config.py", line 56, in parse_args default_config_files=default_config_files) File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1860, in __call__ self._namespace._files_permission_denied) oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some config files: /usr/share/nova/nova-dist.conf,/etc/nova/nova.conf These files are owned by nova:nova [heat-admin at overcloud-controller-2 nova]$ ll total 148 -rw-r-----. 1 root nova 4130 Jul 28 19:53 api-paste.ini -rw-r-----. 1 nova nova 111964 Sep 11 16:49 nova.conf -rw-r-----. 1 root nova 20086 Jul 28 19:53 policy.json -rw-r--r--. 1 root root 72 Aug 21 14:16 release -rw-r-----. 1 root nova 936 Jul 28 19:53 rootwrap.conf Also, nova-manage.log seems to be going fine until it gets this error: 2015-09-11 16:49:40.904 30583 DEBUG migrate.versioning.repository [-] Config: OrderedDict([('db_settings', OrderedDict([('__name__', 'db_settings'), ('repository_id', 'nova'), ('version_table', 'migrate_version'), ('required_dbs', '[]')]))]) __init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:83 2015-09-11 16:49:40.924 30583 DEBUG oslo_db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py:513 2015-09-11 16:49:40.956 30583 INFO migrate.versioning.api [-] 266 -> 267... 2015-09-11 16:49:41.294 30583 CRITICAL nova [-] OperationalError: (OperationalError) (1061, "Duplicate key name 'uniq_instances0uuid'") 'ALTER TABLE instances ADD CONSTRAINT uniq_instances0uuid UNIQUE (uuid)' () __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Sep 10, 2015, at 5:01 PM, Marius Cornea wrote: > > Hi, > > Can you ssh to the overcloud-controller-2 node and try to manually run > the commands it failed: /usr/bin/nova-manage db sync and glance-manage > --config-file=/etc/glance/glance-registry.conf db_sync? Also check the > nova and glance logs for some indications on why they might have > failed. /var/log/messages and journalctl -l -u os-collect-config could > also indicate more detailed errors. > > Thanks, > Marius > > On Wed, Sep 9, 2015 at 11:53 PM, Ignacio Bravo wrote: >> I was able to move past the network issue (boot order of the servers have >> them booting from the 2nd NIC as well, causing them to register with >> Foreman, Ups!) 
>> >> Now the issue is deeper in the deployment: >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sasha at redhat.com Fri Sep 11 19:51:35 2015 From: sasha at redhat.com (Sasha Chuzhoy) Date: Fri, 11 Sep 2015 15:51:35 -0400 (EDT) Subject: [Rdo-list] RDO Manager GUI install with Ceph In-Reply-To: <08B04E1A-44C7-4728-9482-4C1039BB3635@ltgfederal.com> References: <7C80FE5F-ACC2-43F2-BC98-6B452336A7C6@ltgfederal.com> <292404844.23738911.1441813206554.JavaMail.zimbra@redhat.com> <5EA2DA6D-B64F-460D-8CF8-B7757337FF3F@ltgfederal.com> <11FC8B09-058B-4257-BEEE-9F182CB2CF21@ltgfederal.com> <08B04E1A-44C7-4728-9482-4C1039BB3635@ltgfederal.com> Message-ID: <924796415.34340929.1442001095041.JavaMail.zimbra@redhat.com> Hi Ignacio, try to prepend the command with "sudo". Best regards, Sasha Chuzhoy. ----- Original Message ----- > From: "Ignacio Bravo" > To: "Marius Cornea" > Cc: "rdo-list" > Sent: Friday, September 11, 2015 1:39:44 PM > Subject: Re: [Rdo-list] RDO Manager GUI install with Ceph > > Marius, > > I am getting a permission error when trying to run nova-manage > [heat-admin at overcloud-controller-2 ~]$ /usr/bin/nova-manage db sync > Traceback (most recent call last): > File "/usr/bin/nova-manage", line 10, in > sys.exit(main()) > File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1330, in > main > config.parse_args(sys.argv) > File "/usr/lib/python2.7/site-packages/nova/config.py", line 56, in > parse_args > default_config_files=default_config_files) > File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1860, in > __call__ > self._namespace._files_permission_denied) > oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some config > files: /usr/share/nova/nova-dist.conf,/etc/nova/nova.conf > > These files are owned by nova:nova > > [heat-admin at overcloud-controller-2 nova]$ ll > total 148 > -rw-r-----. 1 root nova 4130 Jul 28 19:53 api-paste.ini > -rw-r-----. 1 nova nova 111964 Sep 11 16:49 nova.conf > -rw-r-----. 1 root nova 20086 Jul 28 19:53 policy.json > -rw-r--r--. 1 root root 72 Aug 21 14:16 release > -rw-r-----. 1 root nova 936 Jul 28 19:53 rootwrap.conf > > > Also, nova-manage.log seems to be going fine until it gets this error: > > 2015-09-11 16:49:40.904 30583 DEBUG migrate.versioning.repository [-] Config: > OrderedDict([('db_settings', OrderedDict([('__name__', 'db_settings'), > ('repository_id', 'nova'), ('version_table', 'migrate_version'), > ('required_dbs', '[]')]))]) __init__ > /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:83 > 2015-09-11 16:49:40.924 30583 DEBUG oslo_db.sqlalchemy.session [-] MySQL > server mode set to > STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION > _check_effective_sql_mode > /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py:513 > 2015-09-11 16:49:40.956 30583 INFO migrate.versioning.api [-] 266 -> 267... 
> 2015-09-11 16:49:41.294 30583 CRITICAL nova [-] OperationalError: > (OperationalError) (1061, "Duplicate key name 'uniq_instances0uuid'") 'ALTER > TABLE instances ADD CONSTRAINT uniq_instances0uuid UNIQUE (uuid)' () > > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > > > > On Sep 10, 2015, at 5:01 PM, Marius Cornea < marius at remote-lab.net > wrote: > > Hi, > > Can you ssh to the overcloud-controller-2 node and try to manually run > the commands it failed: /usr/bin/nova-manage db sync and glance-manage > --config-file=/etc/glance/glance-registry.conf db_sync? Also check the > nova and glance logs for some indications on why they might have > failed. /var/log/messages and journalctl -l -u os-collect-config could > also indicate more detailed errors. > > Thanks, > Marius > > On Wed, Sep 9, 2015 at 11:53 PM, Ignacio Bravo < ibravo at ltgfederal.com > > wrote: > > > I was able to move past the network issue (boot order of the servers have > them booting from the 2nd NIC as well, causing them to register with > Foreman, Ups!) > > Now the issue is deeper in the deployment: > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mattdm at mattdm.org Fri Sep 11 15:32:14 2015 From: mattdm at mattdm.org (Matthew Miller) Date: Fri, 11 Sep 2015 11:32:14 -0400 Subject: [Rdo-list] Question about Restart=always in systemd In-Reply-To: <20150904151716.GA5011@mattdm.org> References: <316691786CCAEE44AE90482264E3AB8213A9812C@xmb-rcd-x08.cisco.com> <936474675.323251.1440168150920.JavaMail.zimbra@speichert.pl> <20150904151716.GA5011@mattdm.org> Message-ID: <20150911153214.GA21461@mattdm.org> On Fri, Sep 04, 2015 at 11:17:16AM -0400, Matthew Miller wrote: > > I'd think that "Restart=always" is a good setting for all services. > > What it really brings up is maybe the issue of streamlining the unit > > config files. > > For several releases, we've had packaging guidelines in Fedora > encouraging Restart=on-failure or Restart=on-abnormal: > > https://fedoraproject.org/wiki/Packaging:Systemd#Automatic_restarting > > We never, however, had an effort to bring existing packages into a > consistent state. I'd love for that effort to happen ? anyone > interesting in helping out? And then there were crickets. :) -- Matthew Miller mattdm at mattdm.org Fedora Project Leader mattdm at fedoraproject.org From pgsousa at gmail.com Mon Sep 14 09:28:46 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Mon, 14 Sep 2015 10:28:46 +0100 Subject: [Rdo-list] Undercloud update broke my environment Message-ID: Hi all, I've updated my undercloud according with http://docs.openstack.org/developer/tripleo-docs/index.html docs, and now I cannot login on GUI or use ironic (see below). Anyone had this issues? *Ironic error:* *[stack at instack ~]$ openstack baremetal import --json instackenv.json* *WARNING: ironicclient.common.http Request returned failure status.* *ERROR: openstack No valid host was found. Reason: No conductor service registered which supports driver pxe_ipmitool. 
(HTTP 400)* *Sep 14 10:27:01 instack ironic-conductor: File "/usr/lib/python2.7/site-packages/ironic/common/driver_factory.py", line 117, in _catch_driver_not_found* *Sep 14 10:27:01 instack ironic-conductor: raise exc* *Sep 14 10:27:01 instack ironic-conductor: DriverLoadError: python-ironic-inspector-client Python module not found* *Horizon error:* *2015-09-11 18:07:01,494 10615 ERROR django.request Internal Server Error: /dashboard/auth/login/* *Traceback (most recent call last):* * File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response* * response = wrapped_callback(request, *callback_args, **callback_kwargs)* * File "/usr/lib/python2.7/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper* * return view(request, *args, **kwargs)* * File "/usr/lib/python2.7/site-packages/django/utils/decorators.py", line 110, in _wrapped_view* * response = view_func(request, *args, **kwargs)* * File "/usr/lib/python2.7/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func* * response = view_func(request, *args, **kwargs)* * File "/usr/lib/python2.7/site-packages/openstack_auth/views.py", line 112, in login* * **kwargs)* * File "/usr/lib/python2.7/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper* * return view(request, *args, **kwargs)* * File "/usr/lib/python2.7/site-packages/django/utils/decorators.py", line 110, in _wrapped_view* * response = view_func(request, *args, **kwargs)* * File "/usr/lib/python2.7/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func* * response = view_func(request, *args, **kwargs)* * File "/usr/lib/python2.7/site-packages/django/contrib/auth/views.py", line 51, in login* * auth_login(request, form.get_user())* * File "/usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line 102, in login* * if _get_user_session_key(request) != user.pk or (* * File "/usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line 59, in _get_user_session_key* * return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])* * File "/usr/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 969, in to_python* * params={'value': value},* Regards, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Mon Sep 14 10:03:30 2015 From: mcornea at redhat.com (Marius Cornea) Date: Mon, 14 Sep 2015 06:03:30 -0400 (EDT) Subject: [Rdo-list] Undercloud update broke my environment In-Reply-To: References: Message-ID: <1662345802.25883914.1442225010418.JavaMail.zimbra@redhat.com> Hi Pedro, I believe something similar was reported in https://bugzilla.redhat.com/show_bug.cgi?id=1260736 Thanks, Marius ----- Original Message ----- > From: "Pedro Sousa" > To: "rdo-list" > Sent: Monday, September 14, 2015 11:28:46 AM > Subject: [Rdo-list] Undercloud update broke my environment > > Hi all, > > I've updated my undercloud according with > http://docs.openstack.org/developer/tripleo-docs/index.html docs, and now I > cannot login on GUI or use ironic (see below). Anyone had this issues? > > Ironic error: > > [stack at instack ~]$ openstack baremetal import --json instackenv.json > WARNING: ironicclient.common.http Request returned failure status. > ERROR: openstack No valid host was found. Reason: No conductor service > registered which supports driver pxe_ipmitool. 
(HTTP 400) > > > Sep 14 10:27:01 instack ironic-conductor: File > "/usr/lib/python2.7/site-packages/ironic/common/driver_factory.py", line > 117, in _catch_driver_not_found > Sep 14 10:27:01 instack ironic-conductor: raise exc > Sep 14 10:27:01 instack ironic-conductor: DriverLoadError: > python-ironic-inspector-client Python module not found > > > Horizon error: > > 2015-09-11 18:07:01,494 10615 ERROR django.request Internal Server Error: > /dashboard/auth/login/ > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line > 132, in get_response > response = wrapped_callback(request, *callback_args, **callback_kwargs) > File "/usr/lib/python2.7/site-packages/django/views/decorators/debug.py", > line 76, in sensitive_post_parameters_wrapper > return view(request, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/django/utils/decorators.py", line 110, > in _wrapped_view > response = view_func(request, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/django/views/decorators/cache.py", > line 57, in _wrapped_view_func > response = view_func(request, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/openstack_auth/views.py", line 112, in > login > **kwargs) > File "/usr/lib/python2.7/site-packages/django/views/decorators/debug.py", > line 76, in sensitive_post_parameters_wrapper > return view(request, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/django/utils/decorators.py", line 110, > in _wrapped_view > response = view_func(request, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/django/views/decorators/cache.py", > line 57, in _wrapped_view_func > response = view_func(request, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/django/contrib/auth/views.py", line > 51, in login > auth_login(request, form.get_user()) > File "/usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line > 102, in login > if _get_user_session_key(request) != user.pk or ( > File "/usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line > 59, in _get_user_session_key > return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY]) > File "/usr/lib/python2.7/site-packages/django/db/models/fields/__init__.py", > line 969, in to_python > params={'value': value}, > > > > Regards, > Pedro Sousa > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Mon Sep 14 14:25:27 2015 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 14 Sep 2015 10:25:27 -0400 (EDT) Subject: [Rdo-list] openstack-puppet-modules master-patches branch update In-Reply-To: <20150909175357.GP12870@localhost.localdomain> References: <20150909175357.GP12870@localhost.localdomain> Message-ID: <1869120236.49586745.1442240727674.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Hi, I'm interested in seeing this patch from the master branch of > openstack-puppet-modules: > https://github.com/redhat-openstack/openstack-puppet-modules/commit/447059ae0ca4a69ab9171969a1f30962e886b1a9 > make it into the master-patches branch so that we can get an updated build in > delorean. > > This is needed to make upstream TripleO work with opm from delorean. Right > now, > we have to install the puppet modules from source. 
> > The specific puppet patch we need is: > https://github.com/openstack/puppet-heat/commit/16b4eca4c95d7873ef510181f4a52592abeca24c > > Does anyone know the process to make this happen? Hi James, You need to push the commit to the master-patches branch in the openstack-puppet-modules repo. It's just been done by Lukas. Regards, Javier > > -- > -- James Slagle > -- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Mon Sep 14 14:32:08 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 14 Sep 2015 10:32:08 -0400 Subject: [Rdo-list] beta.rdoproject.org Message-ID: <55F6DA68.4040806@redhat.com> I am very very pleased to announce that the promised website migration that I mentioned several months ago is finally moving. You can see the new website at http://beta.rdoproject.org/ - it should look almost exactly the same as the old one. (The yellow information box at the top of each page will go away once we go to production, and is just there to help during the migration process.) The source for the site is now on Github, at https://github.com/redhat-openstack/website/ and you can start sending pull requests, or opening issues, immediately. We are planning to have a docs hack day prior to the Liberty release, to get the site whipped into shape for Liberty and Summit. I'll be sending details and suggested dates for that in the coming days, once things settle out a little on this new site rollout. Thanks to everyone that helped make this happen. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From hguemar at fedoraproject.org Mon Sep 14 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 14 Sep 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150914150003.4C63360A4003@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-09-16 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From hguemar at fedoraproject.org Mon Sep 14 15:32:51 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 14 Sep 2015 17:32:51 +0200 Subject: [Rdo-list] beta.rdoproject.org In-Reply-To: <55F6DA68.4040806@redhat.com> References: <55F6DA68.4040806@redhat.com> Message-ID: My hat's off to you Mr. Bowen! H. From rbowen at redhat.com Mon Sep 14 15:42:13 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 14 Sep 2015 11:42:13 -0400 Subject: [Rdo-list] beta.rdoproject.org In-Reply-To: <55F6DA68.4040806@redhat.com> References: <55F6DA68.4040806@redhat.com> Message-ID: <55F6EAD5.4010108@redhat.com> On 09/14/2015 10:32 AM, Rich Bowen wrote: > I am very very pleased to announce that the promised website migration > that I mentioned several months ago is finally moving. > > You can see the new website at http://beta.rdoproject.org/ - it should > look almost exactly the same as the old one. (The yellow information box > at the top of each page will go away once we go to production, and is > just there to help during the migration process.) 
> > The source for the site is now on Github, at > https://github.com/redhat-openstack/website/ and you can start sending > pull requests, or opening issues, immediately. > > We are planning to have a docs hack day prior to the Liberty release, to > get the site whipped into shape for Liberty and Summit. I'll be sending > details and suggested dates for that in the coming days, once things > settle out a little on this new site rollout. > > Thanks to everyone that helped make this happen. A side-effect of this change is that, now that the export has been done to the new site, further changes in the wiki will *not* be reflected on the new site. So if you're making edits to the wiki, please consider making them to the above Github repo as well (or instead). We hope to switch over very soon, although I don't have an exact timeline for that. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From javier.pena at redhat.com Mon Sep 14 17:23:11 2015 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 14 Sep 2015 13:23:11 -0400 (EDT) Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> Message-ID: <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Dear all, > > Due to a planned maintenance of the infrastructure supporting the Delorean > instance (trunk.rdoproject.org), it is expected to be offline between > September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT). > > We will be sending updates to the list if there is any additional information > or change in the plans, and keep you updated on the status. Dear all, We have set up a temporary server to host the Delorean repos, to avoid any outage. If you are consuming the Delorean Trunk repos using the DNS name (trunk.rdoproject.org), there is no action required from your side. Plese note the temporary server is not processing new packages yet. If you find any issue, please let us know. Regards, Javier From rbowen at redhat.com Mon Sep 14 17:24:09 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 14 Sep 2015 13:24:09 -0400 Subject: [Rdo-list] Fwd: Student project In-Reply-To: <55F70132.5060001@redhat.com> References: <55F70132.5060001@redhat.com> Message-ID: <55F702B9.5040003@redhat.com> The following message is from the University outreach program at Red Hat. I wonder if anybody has any ideas for something that we could recommend to a university student to work on for a semester. I expect that we would need to do some hand-holding to help them along (depending on the idea). Please let me know if you have any ideas I could pass along. --Rich -------- Forwarded Message -------- Subject: Student project Hi! If your community has an idea for a university student project that can be completed in a semester (or slightly less), please let me know. One of our partner schools is looking for ideas. Thanks, From rbowen at redhat.com Mon Sep 14 19:32:01 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 14 Sep 2015 15:32:01 -0400 Subject: [Rdo-list] This week's OpenStack Meetups Message-ID: <55F720B1.8090001@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. 
If you know of others, please let me know, and/or add them to http://rdoproject.org/Events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Mon Sep 14 in Mountain View, CA, US: GlusterFS Meetup is Back! - http://www.meetup.com/GlusterFS-Silicon-Valley/events/224932563/ * Mon Sep 14 in Washington, DC, US: Extensibility in OpenStack Swift and what you can do with it! (#25) - http://www.meetup.com/OpenStackDC/events/223978615/ * Tue Sep 15 in Antwerpen, BE: Open Source Strategist & OpenStack Project Leader at HP visiting Belgium - http://www.meetup.com/OpenStack-Belgium-Meetup/events/224736832/ * Wed Sep 16 in New York, NY, US: OpenStack and ACI - http://www.meetup.com/nycnetworkers/events/225239982/ * Wed Sep 16 in New York, NY, US: An Evening Of OpenStack With Swift PTL (SwiftStack), Arista Networks & Coho Data - http://www.meetup.com/OpenStack-New-York-Meetup/events/224928160/ * Wed Sep 16 in Austin, TX, US: Special OpenStack Meetup: CHECK this out and BE THERE! - http://www.meetup.com/OpenStack-Austin/events/224238339/ * Wed Sep 16 in Porto Alegre, BR: 6? Hangout OpenStack Brasil - http://www.meetup.com/Openstack-Brasil/events/225010609/ * Thu Sep 17 in Amsterdam, NL: OpenStack Benelux Conference 2015 - http://www.meetup.com/Openstack-Amsterdam/events/223038288/ * Thu Sep 17 in Pittsburgh, PA, US: OpenStack Open Source Solutions - http://www.meetup.com/Arista-Networks-Pittsburgh-Meetup/events/223834208/ * Thu Sep 17 in Portland, OR, US: OpenStack PDX Meetup - Open SDN Panel - http://www.meetup.com/openstack-pdx/events/224572482/ * Thu Sep 17 in Philadelphia, PA, US: An Evening Of OpenStack With Arista Networks and Coho Data - http://www.meetup.com/Philly-OpenStack-Meetup-Group/events/224928227/ * Thu Sep 17 in Philadelphia, PA, US: OpenStack Solutions - Date Changed to 9/17 Co-Hosting w OpenStack Philly - http://www.meetup.com/Arista-Warriors-Independence-Philadelphia-PA/events/223834085/ * Thu Sep 17 in Boston, MA, US: September OpenStack Meetup - Storage Double Header Event!! - http://www.meetup.com/Openstack-Boston/events/225057400/ * Thu Sep 17 in Athens, GR: Deploying OpenStack with Ansible - http://www.meetup.com/Athens-OpenStack-User-Group/events/225038590/ * Thu Sep 17 in San Francisco, CA, US: SFBay OpenStack Advanced Meetup: OpenStack Networking for Multi-Hypervisor DCs - http://www.meetup.com/openstack/events/223231183/ * Thu Sep 17 in Atlanta, GA, US: OpenStack Meetup (Topic TBD) - http://www.meetup.com/openstack-atlanta/events/224885995/ * Mon Sep 21 in Guadalajara, MX: OpenStack Development Process - Monty Taylor - http://www.meetup.com/OpenStack-GDL/events/224087006/ * Mon Sep 21 in San Jose, CA, US: Come learn about OpenStack Operations from Platform9 - http://www.meetup.com/Silicon-Valley-OpenStack-Ops-Meetup/events/225127012/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From pgsousa at gmail.com Tue Sep 15 11:04:24 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 15 Sep 2015 12:04:24 +0100 Subject: [Rdo-list] undercloud reinstall error Message-ID: Hi all, whenever I try to reinstall undercloud I get the error below, is this a bug? Thanks. 
*[2015-09-15 12:02:20,829] (os-refresh-config) [INFO] Completed phase pre-configure* *[2015-09-15 12:02:20,829] (os-refresh-config) [INFO] Starting phase configure* *dib-run-parts Tue Sep 15 12:02:20 WEST 2015 Running /usr/libexec/os-refresh-config/configure.d/00-apply-selinux-policy* *+ set -o pipefail* *+ '[' -x /usr/sbin/semanage ']'* *+ semodule -i /opt/stack/selinux-policy/ipxe.pp* *Full path required for exclude: net:[4026532328].* *Full path required for exclude: net:[4026532328].* *dib-run-parts Tue Sep 15 12:02:42 WEST 2015 00-apply-selinux-policy completed* *dib-run-parts Tue Sep 15 12:02:42 WEST 2015 Running /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies* *+ set -o pipefail* *++ mktemp -d* *+ TMPDIR=/tmp/tmp.aOSfXS3n5a* *+ '[' -x /usr/sbin/semanage ']'* *+ cd /tmp/tmp.aOSfXS3n5a* *++ ls '/opt/stack/selinux-policy/*.te'* *ls: cannot access /opt/stack/selinux-policy/*.te: No such file or directory* *+ semodule -i '/tmp/tmp.aOSfXS3n5a/*.pp'* *semodule: Failed on /tmp/tmp.aOSfXS3n5a/*.pp!* *[2015-09-15 12:02:42,360] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]* *[2015-09-15 12:02:42,361] (os-refresh-config) [ERROR] Aborting...* *Traceback (most recent call last):* * File "", line 1, in * * File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 551, in install* * _run_orc(instack_env)* * File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 483, in _run_orc* * _run_live_command(args, instack_env, 'os-refresh-config')* * File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 314, in _run_live_command* * raise RuntimeError('%s failed. See log for details.' % name)* *RuntimeError: os-refresh-config failed. See log for details.* *ERROR: openstack Command 'instack-install-undercloud' returned non-zero exit status 1* Regards, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcornea at redhat.com Tue Sep 15 14:03:42 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 15 Sep 2015 10:03:42 -0400 (EDT) Subject: [Rdo-list] undercloud reinstall error In-Reply-To: References: Message-ID: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> Hi Pedro, Can you provide the steps to reproduce this? Thanks, Marius ----- Original Message ----- > From: "Pedro Sousa" > To: "rdo-list" > Sent: Tuesday, September 15, 2015 1:04:24 PM > Subject: [Rdo-list] undercloud reinstall error > > Hi all, > > whenever I try to reinstall undercloud I get the error below, is this a bug? > Thanks. > > [2015-09-15 12:02:20,829] (os-refresh-config) [INFO] Completed phase > pre-configure > [2015-09-15 12:02:20,829] (os-refresh-config) [INFO] Starting phase configure > dib-run-parts Tue Sep 15 12:02:20 WEST 2015 Running > /usr/libexec/os-refresh-config/configure.d/00-apply-selinux-policy > + set -o pipefail > + '[' -x /usr/sbin/semanage ']' > + semodule -i /opt/stack/selinux-policy/ipxe.pp > Full path required for exclude: net:[4026532328]. > Full path required for exclude: net:[4026532328]. 
> dib-run-parts Tue Sep 15 12:02:42 WEST 2015 00-apply-selinux-policy completed > dib-run-parts Tue Sep 15 12:02:42 WEST 2015 Running > /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies > + set -o pipefail > ++ mktemp -d > + TMPDIR=/tmp/tmp.aOSfXS3n5a > + '[' -x /usr/sbin/semanage ']' > + cd /tmp/tmp.aOSfXS3n5a > ++ ls '/opt/stack/selinux-policy/*.te' > ls: cannot access /opt/stack/selinux-policy/*.te: No such file or directory > + semodule -i '/tmp/tmp.aOSfXS3n5a/*.pp' > semodule: Failed on /tmp/tmp.aOSfXS3n5a/*.pp! > [2015-09-15 12:02:42,360] (os-refresh-config) [ERROR] during configure phase. > [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' > returned non-zero exit status 1] > > [2015-09-15 12:02:42,361] (os-refresh-config) [ERROR] Aborting... > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 551, in install > _run_orc(instack_env) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 483, in _run_orc > _run_live_command(args, instack_env, 'os-refresh-config') > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 314, in _run_live_command > raise RuntimeError('%s failed. See log for details.' % name) > RuntimeError: os-refresh-config failed. See log for details. > ERROR: openstack Command 'instack-install-undercloud' returned non-zero exit > status 1 > > Regards, > Pedro Sousa > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From pgsousa at gmail.com Tue Sep 15 14:09:42 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 15 Sep 2015 15:09:42 +0100 Subject: [Rdo-list] undercloud reinstall error In-Reply-To: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> Message-ID: Hi Marius, if I run: # openstack undercloud install the second time, this error appears. I had to reinstall Operating System to advance this step but now I have another issue (see below). After last friday rdo manager update seems pretty unstable, UI doesn't work, ironic had issues discovering nodes and now this. I'm following this howto http://docs.openstack.org/developer/tripleo-docs/installation/installing.html *dib-run-parts Tue Sep 15 10:05:12 EDT 2015 20-os-net-config completed* *dib-run-parts Tue Sep 15 10:05:12 EDT 2015 Running /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles* *[2015/09/15 10:05:12 AM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json* *dib-run-parts Tue Sep 15 10:05:12 EDT 2015 40-hiera-datafiles completed* *dib-run-parts Tue Sep 15 10:05:12 EDT 2015 Running /usr/libexec/os-refresh-config/configure.d/50-puppet-stack-config* *+ set -o pipefail* *+ set +e* *+ puppet apply --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp* *Warning: Scope(Class[Keystone]): Keystone under Eventlet has been drepecated during the Kilo cycle. Support for deploying under eventlet will be dropped as of the M-release of OpenStack.* *Warning: Scope(Class[Glance::Registry]): The auth_host parameter is deprecated. Please use auth_uri and identity_uri instead.* *Warning: Scope(Class[Glance::Registry]): The auth_port parameter is deprecated. 
Please use auth_uri and identity_uri instead.* *Warning: Scope(Class[Glance::Registry]): The auth_protocol parameter is deprecated. Please use auth_uri and identity_uri instead.* *Warning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_host'; class ::nova::compute has not been evaluated* *Warning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_protocol'; class ::nova::compute has not been evaluated* *Warning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_port'; class ::nova::compute has not been evaluated* *Warning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_path'; class ::nova::compute has not been evaluated* */etc/puppet/modules/vswitch/lib/puppet/provider/vs_port/ovs_redhat.rb:3: warning: already initialized constant BASE* */etc/puppet/modules/vswitch/lib/puppet/provider/vs_port/ovs_redhat.rb:3: warning: previous definition of BASE was here* */etc/puppet/modules/vswitch/lib/puppet/provider/vs_port/ovs_redhat.rb:6: warning: already initialized constant DEFAULT* */etc/puppet/modules/vswitch/lib/puppet/provider/vs_port/ovs_redhat.rb:6: warning: previous definition of DEFAULT was here* *Warning: Scope(Class[Concat::Setup]): concat::setup is deprecated as a public API of the concat module and should no longer be directly included in the manifest.* *Warning: Scope(Class[Heat::Keystone::Domain]): The auth_url parameter is deprecated and will be removed in future releases* *Warning: Scope(Class[Heat::Keystone::Domain]): The keystone_admin parameter is deprecated and will be removed in future releases* *Warning: Scope(Class[Heat::Keystone::Domain]): The keystone_password parameter is deprecated and will be removed in future releases* *Warning: Scope(Class[Heat::Keystone::Domain]): The keystone_tenant parameter is deprecated and will be removed in future releases* *Warning: Scope(Class[Nova::Compute::Ironic]): The admin_user parameter is deprecated, use admin_username instead.* *Warning: Scope(Class[Nova::Compute::Ironic]): The admin_passwd parameter is deprecated, use admin_password instead.* *Warning: Scope(Neutron::Plugins::Ml2::Type_driver[local]): local type_driver is useful only for single-box, because it provides no connectivity between hosts* *Warning: Scope(Swift::Storage::Server[6002]): The default incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.* *Warning: Scope(Swift::Storage::Server[6002]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.* *Warning: Scope(Swift::Storage::Server[6001]): The default incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.* *Warning: Scope(Swift::Storage::Server[6001]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.* *Warning: Scope(Swift::Storage::Server[6000]): The default incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.* *Warning: Scope(Swift::Storage::Server[6000]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.* *Error: Could not find resource 'Exec[heat_domain_create]' for relationship from 'Class[Keystone::Roles::Admin]' on node instack.mydomain* *Error: Could not find 
resource 'Exec[heat_domain_create]' for relationship from 'Class[Keystone::Roles::Admin]' on node instack.mydomain* *+ rc=1* *+ set -e* *+ echo 'puppet apply exited with exit code 1'* *puppet apply exited with exit code 1* *+ '[' 1 '!=' 2 -a 1 '!=' 0 ']'* *+ exit 1* *[2015-09-15 10:05:26,234] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]* *[2015-09-15 10:05:26,234] (os-refresh-config) [ERROR] Aborting...* *Traceback (most recent call last):* * File "", line 1, in * * File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 551, in install* * _run_orc(instack_env)* * File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 483, in _run_orc* * _run_live_command(args, instack_env, 'os-refresh-config')* * File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 314, in _run_live_command* * raise RuntimeError('%s failed. See log for details.' % name)* *RuntimeError: os-refresh-config failed. See log for details.* *ERROR: openstack Command 'instack-install-undercloud' returned non-zero exit status 1* Regards, Pedro Sousa On Tue, Sep 15, 2015 at 3:03 PM, Marius Cornea wrote: > Hi Pedro, > > Can you provide the steps to reproduce this? > > Thanks, > Marius > > ----- Original Message ----- > > From: "Pedro Sousa" > > To: "rdo-list" > > Sent: Tuesday, September 15, 2015 1:04:24 PM > > Subject: [Rdo-list] undercloud reinstall error > > > > Hi all, > > > > whenever I try to reinstall undercloud I get the error below, is this a > bug? > > Thanks. > > > > [2015-09-15 12:02:20,829] (os-refresh-config) [INFO] Completed phase > > pre-configure > > [2015-09-15 12:02:20,829] (os-refresh-config) [INFO] Starting phase > configure > > dib-run-parts Tue Sep 15 12:02:20 WEST 2015 Running > > /usr/libexec/os-refresh-config/configure.d/00-apply-selinux-policy > > + set -o pipefail > > + '[' -x /usr/sbin/semanage ']' > > + semodule -i /opt/stack/selinux-policy/ipxe.pp > > Full path required for exclude: net:[4026532328]. > > Full path required for exclude: net:[4026532328]. > > dib-run-parts Tue Sep 15 12:02:42 WEST 2015 00-apply-selinux-policy > completed > > dib-run-parts Tue Sep 15 12:02:42 WEST 2015 Running > > > /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies > > + set -o pipefail > > ++ mktemp -d > > + TMPDIR=/tmp/tmp.aOSfXS3n5a > > + '[' -x /usr/sbin/semanage ']' > > + cd /tmp/tmp.aOSfXS3n5a > > ++ ls '/opt/stack/selinux-policy/*.te' > > ls: cannot access /opt/stack/selinux-policy/*.te: No such file or > directory > > + semodule -i '/tmp/tmp.aOSfXS3n5a/*.pp' > > semodule: Failed on /tmp/tmp.aOSfXS3n5a/*.pp! > > [2015-09-15 12:02:42,360] (os-refresh-config) [ERROR] during configure > phase. > > [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' > > returned non-zero exit status 1] > > > > [2015-09-15 12:02:42,361] (os-refresh-config) [ERROR] Aborting... > > Traceback (most recent call last): > > File "", line 1, in > > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > > line 551, in install > > _run_orc(instack_env) > > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > > line 483, in _run_orc > > _run_live_command(args, instack_env, 'os-refresh-config') > > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > > line 314, in _run_live_command > > raise RuntimeError('%s failed. See log for details.' 
% name) > > RuntimeError: os-refresh-config failed. See log for details. > > ERROR: openstack Command 'instack-install-undercloud' returned non-zero > exit > > status 1 > > > > Regards, > > Pedro Sousa > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Tue Sep 15 14:40:52 2015 From: ayoung at redhat.com (Adam Young) Date: Tue, 15 Sep 2015 10:40:52 -0400 Subject: [Rdo-list] Fwd: Student project In-Reply-To: <55F702B9.5040003@redhat.com> References: <55F70132.5060001@redhat.com> <55F702B9.5040003@redhat.com> Message-ID: <55F82DF4.4000307@redhat.com> On 09/14/2015 01:24 PM, Rich Bowen wrote: > The following message is from the University outreach program at Red > Hat. I wonder if anybody has any ideas for something that we could > recommend to a university student to work on for a semester. I expect > that we would need to do some hand-holding to help them along > (depending on the idea). I have numerous Keystone based projects I can point them at. A very interesting one is Basic-Auth Federation against Keystone. This would show how an application could use the existing user database in SQL to authenticate in an application. mod_auth_dbm should be able to talk to the Keystone database directly. This would be a good step toward upstream cleanup of the Identity vs. Authorization sides of Keystone, too. Personally, I would love to see someone try to get director to deploy multiple overclouds to the same undercloud. Perhaps some steps toward this would be a possible effort, too. > > Please let me know if you have any ideas I could pass along. > > --Rich > > > -------- Forwarded Message -------- > Subject: Student project > > Hi! > > If your community has an idea for a university student project that can > be completed in a semester (or slightly less), please let me know. One > of our partner schools is looking for ideas. > > Thanks, > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From cbrown2 at ocf.co.uk Tue Sep 15 14:41:46 2015 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Tue, 15 Sep 2015 15:41:46 +0100 Subject: [Rdo-list] undercloud reinstall error In-Reply-To: References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> Message-ID: <1442328106.18512.37.camel@ocf-laptop> Hi Pedro, On Tue, 2015-09-15 at 15:09 +0100, Pedro Sousa wrote: > # openstack undercloud install > This should be run as a regular user I believe, not root. > > Error: Could not find resource 'Exec[heat_domain_create]' for > relationship from 'Class[Keystone::Roles::Admin]' on node > instack.mydomain > Error: Could not find resource 'Exec[heat_domain_create]' for > relationship from 'Class[Keystone::Roles::Admin]' on node Yes, I'm seeing this as well, currently unable to build RDO-Manager at all. I get a different error when I skip: export DIB_INSTALLTYPE_puppet_modules=source -- Regards, Christopher --- This email has been checked for viruses by Avast antivirus software. 
https://www.avast.com/antivirus From pgsousa at gmail.com Tue Sep 15 14:44:44 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 15 Sep 2015 15:44:44 +0100 Subject: [Rdo-list] undercloud reinstall error In-Reply-To: <1442328106.18512.37.camel@ocf-laptop> References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> <1442328106.18512.37.camel@ocf-laptop> Message-ID: Hi Christopher, yes, I run as regular stack user. Regards, Pedro Sousa On Tue, Sep 15, 2015 at 3:41 PM, Christopher Brown wrote: > Hi Pedro, > > On Tue, 2015-09-15 at 15:09 +0100, Pedro Sousa wrote: > > > > # openstack undercloud install > > > > This should be run as a regular user I believe, not root. > > > > > Error: Could not find resource 'Exec[heat_domain_create]' for > > relationship from 'Class[Keystone::Roles::Admin]' on node > > instack.mydomain > > Error: Could not find resource 'Exec[heat_domain_create]' for > > relationship from 'Class[Keystone::Roles::Admin]' on node > > Yes, I'm seeing this as well, currently unable to build RDO-Manager at > all. > > I get a different error when I skip: > > export DIB_INSTALLTYPE_puppet_modules=source > > > > -- > Regards, > > Christopher > > > > > --- > This email has been checked for viruses by Avast antivirus software. > https://www.avast.com/antivirus > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Tue Sep 15 14:34:25 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 15 Sep 2015 16:34:25 +0200 Subject: [Rdo-list] RDO Liberty and Fedora Message-ID: Hi all, Fedora/RDO relationship topic came up few months ago on the rdo-list and in the meantime details have been refined in a Trello card https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora which I'll try to summarize in this post. High level overview (aka tl;dr) is that RDO Liberty repository will be providing EL7 packages, hosted on CentOS mirrors and built as a part of CentOS CloudSIG. For developers using Fedora latest trunk builds of openstack-* packages will be available from Delorean Trunk. OpenStack service packages (openstack-*) will be retired in Fedora master while Oslo and OpenStack clients stay in Fedora proper to allow using Fedora as a client of OpenStack based cloud providers. Reasoning is that Fedora user population are developers who always need latest and greatest version and Delorean Trunk packages are the best match. We were trying to follow 1:1 mapping between Fedora and OpenStack release but it is getting out of sync now (current f22 is Juno, unreleased f23 is Kilo) and it's getting impossible to keep up with required dependency versions in the current stable Fedora without breaking older OpenStack release. There is also wasted overhead of pushing OpenStack stable updates in Fedora when stable builds are not interesting for the target population. For production setup EL7 platform is preferred and that's where we are going to provide stable updates in released RDO versions. 
Cheers, Alan From bderzhavets at hotmail.com Tue Sep 15 15:12:49 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 15 Sep 2015 11:12:49 -0400 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: References: Message-ID: > Reasoning is that Fedora user population are developers who always > need latest and greatest version and Delorean Trunk packages are the > best match. We were trying to follow 1:1 mapping between Fedora and > OpenStack release but it is getting out of sync now (current f22 is > Juno, unreleased f23 is Kilo) and it's getting impossible to keep up > with required dependency versions in the current stable Fedora without > breaking older OpenStack release. Could you please,specify how to set up Delorean Trunk Repos for Liberty ( a kind of Quick Start Page for Liberty testing ) :- 1. CentOS 7.1 2. Fedora 22 Thanks. Boris > There is also wasted overhead of pushing OpenStack stable updates in > Fedora when stable builds are not interesting for the target > population. > For production setup EL7 platform is preferred and that's where we are > going to provide stable updates in released RDO versions. > > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Tue Sep 15 13:35:34 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 15 Sep 2015 15:35:34 +0200 Subject: [Rdo-list] RDO Liberty and Fedora Message-ID: Hi all, Fedora/RDO relationship topic came up few months ago on the rdo-list and in the meantime details have been refined in a Trello card https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora which I'll try to summarize in this post. High level overview (aka tl;dr) is that RDO Liberty repository will be providing EL7 packages, hosted on CentOS mirrors and built as a part of CentOS CloudSIG. For developers using Fedora latest trunk builds of openstack-* packages will be available from Delorean Trunk. OpenStack service packages (openstack-*) will be retired in Fedora master while Oslo and OpenStack clients stay in Fedora proper to allow using Fedora as a client of OpenStack based cloud providers. Reasoning is that Fedora user population are developers who always need latest and greatest version and Delorean Trunk packages are the best match. We were trying to follow 1:1 mapping between Fedora and OpenStack release but it is getting out of sync now (current f22 is Juno, unreleased f23 is Kilo) and it's getting impossible to keep up with required dependency versions in the current stable Fedora without breaking older OpenStack release. There is also wasted overhead of pushing OpenStack stable updates in Fedora when stable builds are not interesting for the target population. For production setup EL7 platform is preferred and that's where we are going to provide stable updates in released RDO versions. 
Cheers, Alan From derekh at redhat.com Tue Sep 15 15:31:13 2015 From: derekh at redhat.com (Derek Higgins) Date: Tue, 15 Sep 2015 16:31:13 +0100 Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> Message-ID: <55F839C1.10601@redhat.com> On 14/09/15 18:23, Javier Pena wrote: > > ----- Original Message ----- >> Dear all, >> >> Due to a planned maintenance of the infrastructure supporting the Delorean >> instance (trunk.rdoproject.org), it is expected to be offline between >> September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT). >> >> We will be sending updates to the list if there is any additional information >> or change in the plans, and keep you updated on the status. > > Dear all, > > We have set up a temporary server to host the Delorean repos, to avoid any outage. > > If you are consuming the Delorean Trunk repos using the DNS name (trunk.rdoproject.org), there is no action required from your side. Plese note the temporary server is not processing new packages yet. Good job, can I just point out that if this is just a temporary server it should not process new packages at all (you said yet), This would only complicate things as two master servers processing commits will diverge and cause problems. Adding additional trunk servers should just be a mirroring exercise of yum repositories. > > If you find any issue, please let us know. > > Regards, > Javier > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From dabarren at gmail.com Tue Sep 15 15:34:34 2015 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Tue, 15 Sep 2015 17:34:34 +0200 Subject: [Rdo-list] undercloud reinstall error In-Reply-To: References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> <1442328106.18512.37.camel@ocf-laptop> Message-ID: Hi all, currently there is a critical bug opened related to "Could not find resource 'Exec[heat_domain_create]'" https://bugs.launchpad.net/tripleo/+bug/1491002 The last status is Fix Committed Regards 2015-09-15 16:44 GMT+02:00 Pedro Sousa : > Hi Christopher, > > yes, I run as regular stack user. > > Regards, > Pedro Sousa > > On Tue, Sep 15, 2015 at 3:41 PM, Christopher Brown > wrote: > >> Hi Pedro, >> >> On Tue, 2015-09-15 at 15:09 +0100, Pedro Sousa wrote: >> >> >> > # openstack undercloud install >> > >> >> This should be run as a regular user I believe, not root. >> >> > >> > Error: Could not find resource 'Exec[heat_domain_create]' for >> > relationship from 'Class[Keystone::Roles::Admin]' on node >> > instack.mydomain >> > Error: Could not find resource 'Exec[heat_domain_create]' for >> > relationship from 'Class[Keystone::Roles::Admin]' on node >> >> Yes, I'm seeing this as well, currently unable to build RDO-Manager at >> all. >> >> I get a different error when I skip: >> >> export DIB_INSTALLTYPE_puppet_modules=source >> >> >> >> -- >> Regards, >> >> Christopher >> >> >> >> >> --- >> This email has been checked for viruses by Avast antivirus software. 
>> https://www.avast.com/antivirus >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Tue Sep 15 15:42:58 2015 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 15 Sep 2015 11:42:58 -0400 (EDT) Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: <55F839C1.10601@redhat.com> References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> <55F839C1.10601@redhat.com> Message-ID: <1198304860.50976902.1442331778412.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > > On 14/09/15 18:23, Javier Pena wrote: > > > > ----- Original Message ----- > >> Dear all, > >> > >> Due to a planned maintenance of the infrastructure supporting the Delorean > >> instance (trunk.rdoproject.org), it is expected to be offline between > >> September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT). > >> > >> We will be sending updates to the list if there is any additional > >> information > >> or change in the plans, and keep you updated on the status. > > > > Dear all, > > > > We have set up a temporary server to host the Delorean repos, to avoid any > > outage. > > > > If you are consuming the Delorean Trunk repos using the DNS name > > (trunk.rdoproject.org), there is no action required from your side. Plese > > note the temporary server is not processing new packages yet. > > Good job, can I just point out that if this is just a temporary server > it should not process new packages at all (you said yet), > > This would only complicate things as two master servers processing > commits will diverge and cause problems. Adding additional trunk servers > should just be a mirroring exercise of yum repositories. > Sure, there was never a plan to have both servers processing packages in parallel. The option to have the temp server process packaged was open in case there was an issue during the planned maintenance, or we needed to do it while the main server is down. For now, it's still not processing packages, to avoid having to sync the updated packages back after the maintenance. Regards, Javier > > > > If you find any issue, please let us know. > > > > Regards, > > Javier > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ibravo at ltgfederal.com Tue Sep 15 15:46:07 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Tue, 15 Sep 2015 11:46:07 -0400 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: References: Message-ID: <2E461454-3D6E-451D-84C6-ED993C8B32EC@ltgfederal.com> > Boris, > > I am currently deploying Liberty under Centos 7 using the RDO Manager. The steps I am taking are: > > 1. 
Follow the Kilo guide in rdoproject.org > 2. Add the following repos: > > http://trunk.rdoproject.org/centos7/current/delorean.repo > http://trunk.rdoproject.org/centos7/delorean-deps.repo > > 3. Install RDO Manager / Director > sudo yum install python-tripleoclient (changed names in Liberty) > > 4. Continue with the Kilo guide. > > IB > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekh at redhat.com Tue Sep 15 15:47:05 2015 From: derekh at redhat.com (Derek Higgins) Date: Tue, 15 Sep 2015 16:47:05 +0100 Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: <1198304860.50976902.1442331778412.JavaMail.zimbra@redhat.com> References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> <55F839C1.10601@redhat.com> <1198304860.50976902.1442331778412.JavaMail.zimbra@redhat.com> Message-ID: <55F83D79.20003@redhat.com> On 15/09/15 16:42, Javier Pena wrote: > > > ----- Original Message ----- >> >> >> On 14/09/15 18:23, Javier Pena wrote: >>> >>> ----- Original Message ----- >>>> Dear all, >>>> >>>> Due to a planned maintenance of the infrastructure supporting the Delorean >>>> instance (trunk.rdoproject.org), it is expected to be offline between >>>> September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT). >>>> >>>> We will be sending updates to the list if there is any additional >>>> information >>>> or change in the plans, and keep you updated on the status. >>> >>> Dear all, >>> >>> We have set up a temporary server to host the Delorean repos, to avoid any >>> outage. >>> >>> If you are consuming the Delorean Trunk repos using the DNS name >>> (trunk.rdoproject.org), there is no action required from your side. Plese >>> note the temporary server is not processing new packages yet. >> >> Good job, can I just point out that if this is just a temporary server >> it should not process new packages at all (you said yet), >> >> This would only complicate things as two master servers processing >> commits will diverge and cause problems. Adding additional trunk servers >> should just be a mirroring exercise of yum repositories. >> > > Sure, there was never a plan to have both servers processing packages in parallel. The option to have the temp server process packaged was open in case there was an issue during the planned maintenance, or we needed to do it while the main server is down. > > For now, it's still not processing packages, to avoid having to sync the updated packages back after the maintenance. Sounds good, thanks, I just wanted to make sure. > > Regards, > Javier > >>> >>> If you find any issue, please let us know. 
>>> >>> Regards, >>> Javier >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> From mcornea at redhat.com Tue Sep 15 16:07:01 2015 From: mcornea at redhat.com (Marius Cornea) Date: Tue, 15 Sep 2015 12:07:01 -0400 (EDT) Subject: [Rdo-list] undercloud reinstall error In-Reply-To: References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> <1442328106.18512.37.camel@ocf-laptop> Message-ID: <105578759.26973609.1442333221273.JavaMail.zimbra@redhat.com> Thanks Eduardo. Just a heads up, I applied the patch[1] mentioned in the bug and was able to pass undercloud install. [1] https://review.gerrithub.io/#/c/244988/1/elements/puppet-stack-config/puppet-stack-config.pp ----- Original Message ----- > From: "Eduardo Gonzalez" > To: "Pedro Sousa" > Cc: rdo-list at redhat.com > Sent: Tuesday, September 15, 2015 5:34:34 PM > Subject: Re: [Rdo-list] undercloud reinstall error > > Hi all, currently there is a critical bug opened related to "Could not find > resource 'Exec[heat_domain_create]'" > > https://bugs.launchpad.net/tripleo/+bug/1491002 > > The last status is Fix Committed > Regards > > 2015-09-15 16:44 GMT+02:00 Pedro Sousa < pgsousa at gmail.com > : > > > > Hi Christopher, > > yes, I run as regular stack user. > > Regards, > Pedro Sousa > > On Tue, Sep 15, 2015 at 3:41 PM, Christopher Brown < cbrown2 at ocf.co.uk > > wrote: > > > Hi Pedro, > > On Tue, 2015-09-15 at 15:09 +0100, Pedro Sousa wrote: > > > > # openstack undercloud install > > > > This should be run as a regular user I believe, not root. > > > > > Error: Could not find resource 'Exec[heat_domain_create]' for > > relationship from 'Class[Keystone::Roles::Admin]' on node > > instack.mydomain > > Error: Could not find resource 'Exec[heat_domain_create]' for > > relationship from 'Class[Keystone::Roles::Admin]' on node > > Yes, I'm seeing this as well, currently unable to build RDO-Manager at > all. > > I get a different error when I skip: > > export DIB_INSTALLTYPE_puppet_modules=source > > > > -- > Regards, > > Christopher > > > > > --- > This email has been checked for viruses by Avast antivirus software. 
> https://www.avast.com/antivirus > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From lars at redhat.com Tue Sep 15 16:37:56 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 15 Sep 2015 12:37:56 -0400 Subject: [Rdo-list] Question about Restart=always in systemd In-Reply-To: <20150911153214.GA21461@mattdm.org> References: <316691786CCAEE44AE90482264E3AB8213A9812C@xmb-rcd-x08.cisco.com> <936474675.323251.1440168150920.JavaMail.zimbra@speichert.pl> <20150904151716.GA5011@mattdm.org> <20150911153214.GA21461@mattdm.org> Message-ID: <20150915163756.GD14112@redhat.com> On Fri, Sep 11, 2015 at 11:32:14AM -0400, Matthew Miller wrote: > On Fri, Sep 04, 2015 at 11:17:16AM -0400, Matthew Miller wrote: > > > I'd think that "Restart=always" is a good setting for all services. > > > What it really brings up is maybe the issue of streamlining the unit > > > config files. > > > > For several releases, we've had packaging guidelines in Fedora > > encouraging Restart=on-failure or Restart=on-abnormal: > > > > https://fedoraproject.org/wiki/Packaging:Systemd#Automatic_restarting > > > > We never, however, had an effort to bring existing packages into a > > consistent state. I'd love for that effort to happen ? anyone > > interesting in helping out? > > And then there were crickets. :) I didn't see any other traffic on this, so... How about a mechanism to *generate* unit files using information specified by the package? Maybe packages would provide "stub" unit files that contain only things that differ from standard behavior (e.g., description, dependencies, environmentfiles, etc), and then the actual unit file is generated by filling in the missing information. E.g., for openstack-nova, a package provides the following stub: [Service] Type=notify NotifyAcess=all TimeoutStartSec=0 User=nova And then you run: install-systemd-unit \ --execstart /usr/bin/nova-api \ --description "OpenStack Nova API Server" \ /path/to/stub/units/openstack-nova-template.service \ /lib/systemd/system/openstack-nova-api.service And you get: [Unit] Description=OpenStack Nova API Server After=syslog.target network.target [Service] Type=notify NotifyAccess=all User=nova ExecStart=/usr/bin/nova-api Restart=on-failure [Install] WantedBy=multi-user.target And for, say, openstack-nova-scheduler: install-systemd-unit \ --execstart /usr/bin/nova-scheduler \ --description "OpenStack Nova Scheduler Server" \ /path/to/stub/units/openstack-nova-template.service \ /lib/systemd/system/openstack-nova-scheduler.service Or if that's too crazy, just add a check to the fedora-review tool that ensures unit files have standard settings, such as the Restart= setting. The review could flag things like: - Missing Restart behavior - Missing Description Either solution would be reasonably easy to put together. 
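For illustration only, here is a very rough sketch of what that generator could look like. Nothing like this exists today; the tool name install-systemd-unit is just the hypothetical name from the example above, and the positional-argument handling and the assumption that the stub carries only a [Service] section are simplifications made purely to keep the sketch short:

    #!/bin/sh
    # install-systemd-unit (sketch): wrap a packaged [Service] stub with the
    # standard boilerplate and drop the result into place.
    # usage: install-systemd-unit EXECSTART DESCRIPTION STUB OUTPUT
    set -e
    execstart=$1; description=$2; stub=$3; out=$4
    {
        printf '[Unit]\nDescription=%s\nAfter=syslog.target network.target\n\n' "$description"
        cat "$stub"                 # the packaged [Service] section with its overrides
        printf 'ExecStart=%s\nRestart=on-failure\n\n' "$execstart"
        printf '[Install]\nWantedBy=multi-user.target\n'
    } > "$out"
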
I like the first one, because (a) yay automation and because (b) it would allow for local policy overrides ("I want all my services to run with Restart=always instead of Restart=on-failure"). An auditing tool could also be used to check all of the *existing* packages, if that were preferable in addition to or as an alternative too either of the above. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lars at redhat.com Tue Sep 15 16:54:04 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 15 Sep 2015 12:54:04 -0400 Subject: [Rdo-list] beta.rdoproject.org In-Reply-To: <55F6DA68.4040806@redhat.com> References: <55F6DA68.4040806@redhat.com> Message-ID: <20150915165403.GE14112@redhat.com> On Mon, Sep 14, 2015 at 10:32:08AM -0400, Rich Bowen wrote: > You can see the new website at http://beta.rdoproject.org/ - it should look > almost exactly the same as the old one. (The yellow information box at the > top of each page will go away once we go to production, and is just there to > help during the migration process.) Nice work! -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From pmyers at redhat.com Tue Sep 15 16:58:21 2015 From: pmyers at redhat.com (Perry Myers) Date: Tue, 15 Sep 2015 12:58:21 -0400 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: References: Message-ID: <55F84E2D.5040809@redhat.com> On 09/15/2015 09:35 AM, Alan Pevec wrote: > Hi all, > > Fedora/RDO relationship topic came up few months ago on the rdo-list and > in the meantime details have been refined in a Trello card > https://trello.com/c/wzdl1IlZ/52-openstack-in-fedora which I'll try to > summarize in this post. > > High level overview (aka tl;dr) is that RDO Liberty repository will be > providing EL7 packages, hosted on CentOS mirrors and built as a part > of CentOS CloudSIG. > For developers using Fedora latest trunk builds of openstack-* packages > will be available from Delorean Trunk. > OpenStack service packages (openstack-*) will be retired in Fedora > master while Oslo and OpenStack clients stay in Fedora proper to allow > using Fedora as a client of OpenStack based cloud providers. > > Reasoning is that Fedora user population are developers who always > need latest and greatest version and Delorean Trunk packages are the > best match. We were trying to follow 1:1 mapping between Fedora and > OpenStack release but it is getting out of sync now (current f22 is > Juno, unreleased f23 is Kilo) and it's getting impossible to keep up > with required dependency versions in the current stable Fedora without > breaking older OpenStack release. > There is also wasted overhead of pushing OpenStack stable updates in > Fedora when stable builds are not interesting for the target > population. > For production setup EL7 platform is preferred and that's where we are > going to provide stable updates in released RDO versions. Sounds like a good plan. So, for new openstack service packages (like Murano, or other services we haven't packaged yet) or for other 3rd party packages that we're working to get into RDO... 
Do we need to get those new services/deps/3rd party plugins into Fedora still? Or do we have a way to get those into Delorean for Fedora and CentOS CloudSIG w/o requiring Fedora formal inclusion? Perry From pgsousa at gmail.com Tue Sep 15 17:34:15 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Tue, 15 Sep 2015 18:34:15 +0100 Subject: [Rdo-list] undercloud reinstall error In-Reply-To: <105578759.26973609.1442333221273.JavaMail.zimbra@redhat.com> References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> <1442328106.18512.37.camel@ocf-laptop> <105578759.26973609.1442333221273.JavaMail.zimbra@redhat.com> Message-ID: Hi Marius, I've managed to install undercloud after applying the patch, thanks. I also had to disable in /etc/ironic/ironic.conf "[inspector] enabled=false" to get ironic discovery working. Regards, Pedro Sousa On Tue, Sep 15, 2015 at 5:07 PM, Marius Cornea wrote: > Thanks Eduardo. Just a heads up, I applied the patch[1] mentioned in the > bug and was able to pass undercloud install. > > [1] > https://review.gerrithub.io/#/c/244988/1/elements/puppet-stack-config/puppet-stack-config.pp > > ----- Original Message ----- > > From: "Eduardo Gonzalez" > > To: "Pedro Sousa" > > Cc: rdo-list at redhat.com > > Sent: Tuesday, September 15, 2015 5:34:34 PM > > Subject: Re: [Rdo-list] undercloud reinstall error > > > > Hi all, currently there is a critical bug opened related to "Could not > find > > resource 'Exec[heat_domain_create]'" > > > > https://bugs.launchpad.net/tripleo/+bug/1491002 > > > > The last status is Fix Committed > > Regards > > > > 2015-09-15 16:44 GMT+02:00 Pedro Sousa < pgsousa at gmail.com > : > > > > > > > > Hi Christopher, > > > > yes, I run as regular stack user. > > > > Regards, > > Pedro Sousa > > > > On Tue, Sep 15, 2015 at 3:41 PM, Christopher Brown < cbrown2 at ocf.co.uk > > > wrote: > > > > > > Hi Pedro, > > > > On Tue, 2015-09-15 at 15:09 +0100, Pedro Sousa wrote: > > > > > > > # openstack undercloud install > > > > > > > This should be run as a regular user I believe, not root. > > > > > > > > Error: Could not find resource 'Exec[heat_domain_create]' for > > > relationship from 'Class[Keystone::Roles::Admin]' on node > > > instack.mydomain > > > Error: Could not find resource 'Exec[heat_domain_create]' for > > > relationship from 'Class[Keystone::Roles::Admin]' on node > > > > Yes, I'm seeing this as well, currently unable to build RDO-Manager at > > all. > > > > I get a different error when I skip: > > > > export DIB_INSTALLTYPE_puppet_modules=source > > > > > > > > -- > > Regards, > > > > Christopher > > > > > > > > > > --- > > This email has been checked for viruses by Avast antivirus software. > > https://www.avast.com/antivirus > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jpeeler at redhat.com Tue Sep 15 17:36:34 2015 From: jpeeler at redhat.com (Jeff Peeler) Date: Tue, 15 Sep 2015 13:36:34 -0400 Subject: [Rdo-list] undercloud reinstall error In-Reply-To: <105578759.26973609.1442333221273.JavaMail.zimbra@redhat.com> References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> <1442328106.18512.37.camel@ocf-laptop> <105578759.26973609.1442333221273.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Sep 15, 2015 at 12:07 PM, Marius Cornea wrote: > Thanks Eduardo. Just a heads up, I applied the patch[1] mentioned in the > bug and was able to pass undercloud install. > > [1] > https://review.gerrithub.io/#/c/244988/1/elements/puppet-stack-config/puppet-stack-config.pp Thanks for posting this - I ran into this as well. The link in the documentation says "The above Delorean repository is updated after a successful CI run", so either the CI doesn't test the undercloud or the script updating the docs isn't running. (Referring to the delorean link of http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo -o /etc/yum.repos.d/delorean.repo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Tue Sep 15 17:56:51 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 15 Sep 2015 19:56:51 +0200 Subject: [Rdo-list] beta.rdoproject.org In-Reply-To: <20150915165403.GE14112@redhat.com> References: <55F6DA68.4040806@redhat.com> <20150915165403.GE14112@redhat.com> Message-ID: <20150915175651.GA24099@tesla.redhat.com> On Tue, Sep 15, 2015 at 12:54:04PM -0400, Lars Kellogg-Stedman wrote: > On Mon, Sep 14, 2015 at 10:32:08AM -0400, Rich Bowen wrote: > > You can see the new website at http://beta.rdoproject.org/ - it > > should look almost exactly the same as the old one. (The yellow > > information box at the top of each page will go away once we go to > > production, and is just there to help during the migration process.) > > Nice work! Absolutely. Git-based, static websites for the win. -- /kashyap From dms at redhat.com Tue Sep 15 18:37:11 2015 From: dms at redhat.com (David Moreau Simard) Date: Tue, 15 Sep 2015 14:37:11 -0400 Subject: [Rdo-list] Improving RDO continuous integration/testing Message-ID: Hi, Continuous integration jobs for RDO trunk (liberty) are not in good shape right now [1] and they also provide poor coverage. The only test that is run is tempest.scenario.test_server_basic_ops [2] which adds a ssh-keypair. As part of our efforts to speed up the release process of RDO and improve the quality and stability of what we ship, I am working to improve the CI. The good news is that I've managed to get promising results with khaleesi+rdo+packstack+aio with selinux permissive locally (selinux enforced being blocked right now [3]) so we are on the right track. As I've been trying to improve test coverage, a good first step would be to enforce tempest smoke tests. However, I've noticed that khaleesi uses a fork of tempest [4] and this generated a failure [5] on a test that has since been fixed upstream [6]. I'm very concerned about testing trunk RDO against a fork of tempest. We should be testing trunk RDO against trunk tempest. Running against a fork means we might lack some important changes to test coverage or can unnecessarily encounter failures which have already been resolved upstream. 
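For the sake of illustration, the by-hand equivalent of running that one scenario against upstream trunk is roughly the following (a sketch only; it assumes tempest's dependencies are already installed and that an etc/tempest.conf has been generated for the cloud under test):

    git clone https://github.com/openstack/tempest.git
    cd tempest
    # drop a tempest.conf for the target cloud into etc/ before running
    testr init
    testr run tempest.scenario.test_server_basic_ops
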
My understanding is that Red Hat maintains a fork of tempest to run test suites against products which have longer release and support cycles, and that is fine. Should we switch RDO CI testing to the upstream branches? Thanks, [1]: https://prod-rdojenkins.rhcloud.com/view/RDO-Liberty-Delorean-Trunk/ [2]: https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py [3]: https://bugzilla.redhat.com/show_bug.cgi?id=1249685 [4]: https://github.com/redhat-openstack/khaleesi-settings/blob/master/settings/tester/tempest/setup/git.yml#L4 [5]: http://paste.openstack.org/show/463360/ [6]: https://github.com/openstack/tempest/commit/986b9e6fda6c76806a329bd53f5f73f557da903e#diff-06c9ec2d3ed36f96bfce6f75e82a4f45 David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From rbowen at redhat.com Tue Sep 15 18:40:46 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 15 Sep 2015 14:40:46 -0400 Subject: [Rdo-list] RDO BoF at OpenStack Summit In-Reply-To: <55F1E005.1080706@redhat.com> References: <55F1E005.1080706@redhat.com> Message-ID: <55F8662E.5000504@redhat.com> On 09/10/2015 03:54 PM, Rich Bowen wrote: > We have an opportunity to sign up for a BoF (Birds of a Feather) session > at OpenStack Summit in Tokyo. The following dates/times are available: > > http://doodle.com/poll/bvd8w85kuqngeb7x > > The rooms are large enough for 30. > > We obviously can't please everyone, but if you can express your > preference, I will sign up for a slot early next week. Note that these > slots will likely go very quickly, and the spaces are VERY limited, so > please express your opinion as soon as possible. FYI, all available rooms filled up before I was able to get to this yesterday, but I am investigating other options. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From hguemar at fedoraproject.org Tue Sep 15 18:05:54 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 15 Sep 2015 20:05:54 +0200 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: <55F84E2D.5040809@redhat.com> References: <55F84E2D.5040809@redhat.com> Message-ID: 2015-09-15 18:58 GMT+02:00 Perry Myers : > Sounds like a good plan. > > So, for new openstack service packages (like Murano, or other services > we haven't packaged yet) or for other 3rd party packages that we're > working to get into RDO... > > Do we need to get those new services/deps/3rd party plugins into Fedora > still? Or do we have a way to get those into Delorean for Fedora and > CentOS CloudSIG w/o requiring Fedora formal inclusion? > > Perry > General dependencies will still be imported into Fedora so we can ensure that we remain consistent with the next RHEL/CentOS. As we already do, they will be imported into Delorean before approval in order to keep the continuous delivery pipeline working. As for services and their plugins, we'll keep the same process and guidelines, but with two major differences: * reviews will be under the RDO product instead of Fedora. * packaging guideline exceptions will be granted by the RDO team during our weekly meeting. Regards, H.
From jslagle at redhat.com Tue Sep 15 19:44:38 2015 From: jslagle at redhat.com (James Slagle) Date: Tue, 15 Sep 2015 15:44:38 -0400 Subject: [Rdo-list] undercloud reinstall error In-Reply-To: <105578759.26973609.1442333221273.JavaMail.zimbra@redhat.com> References: <536306950.26865047.1442325822309.JavaMail.zimbra@redhat.com> <1442328106.18512.37.camel@ocf-laptop> <105578759.26973609.1442333221273.JavaMail.zimbra@redhat.com> Message-ID: <20150915194438.GD13458@localhost.localdomain> On Tue, Sep 15, 2015 at 12:07:01PM -0400, Marius Cornea wrote: > Thanks Eduardo. Just a heads up, I applied the patch[1] mentioned in the bug and was able to pass undercloud install. > > [1] https://review.gerrithub.io/#/c/244988/1/elements/puppet-stack-config/puppet-stack-config.pp This issue is why I had a patch up to document saying you must use puppet modules from source: https://review.openstack.org/#/c/221896/ But, depending on when you started, you may not have seen that since it didn't merge until last night. Incidentally there is now an updated build available of openstack-puppet-modules in delorean. I have an additional patch to document how to use that: https://review.openstack.org/#/c/223293/ > > ----- Original Message ----- > > From: "Eduardo Gonzalez" > > To: "Pedro Sousa" > > Cc: rdo-list at redhat.com > > Sent: Tuesday, September 15, 2015 5:34:34 PM > > Subject: Re: [Rdo-list] undercloud reinstall error > > > > Hi all, currently there is a critical bug opened related to "Could not find > > resource 'Exec[heat_domain_create]'" > > > > https://bugs.launchpad.net/tripleo/+bug/1491002 > > > > The last status is Fix Committed > > Regards > > > > 2015-09-15 16:44 GMT+02:00 Pedro Sousa < pgsousa at gmail.com > : > > > > > > > > Hi Christopher, > > > > yes, I run as regular stack user. > > > > Regards, > > Pedro Sousa > > > > On Tue, Sep 15, 2015 at 3:41 PM, Christopher Brown < cbrown2 at ocf.co.uk > > > wrote: > > > > > > Hi Pedro, > > > > On Tue, 2015-09-15 at 15:09 +0100, Pedro Sousa wrote: > > > > > > > # openstack undercloud install > > > > > > > This should be run as a regular user I believe, not root. > > > > > > > > Error: Could not find resource 'Exec[heat_domain_create]' for > > > relationship from 'Class[Keystone::Roles::Admin]' on node > > > instack.mydomain > > > Error: Could not find resource 'Exec[heat_domain_create]' for > > > relationship from 'Class[Keystone::Roles::Admin]' on node > > > > Yes, I'm seeing this as well, currently unable to build RDO-Manager at > > all. > > > > I get a different error when I skip: > > > > export DIB_INSTALLTYPE_puppet_modules=source > > > > > > > > -- > > Regards, > > > > Christopher > > > > > > > > > > --- > > This email has been checked for viruses by Avast antivirus software. 
> > https://www.avast.com/antivirus > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From hguemar at fedoraproject.org Tue Sep 15 20:07:28 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 15 Sep 2015 22:07:28 +0200 Subject: [Rdo-list] Improving RDO continuous integration/testing In-Reply-To: References: Message-ID: 2015-09-15 20:37 GMT+02:00 David Moreau Simard : > Hi, > > Continuous integration jobs for RDO trunk (liberty) are not in good > shape right now [1] and they also provide poor coverage. > The only test that is run is tempest.scenario.test_server_basic_ops > [2] which adds a ssh-keypair. > > As part of our efforts to speed up the release process of RDO and > improve the quality and stability of what we ship, I am working to > improve the CI. > The good news is that I've managed to get promising results with > khaleesi+rdo+packstack+aio with selinux permissive locally (selinux > enforced being blocked right now [3]) so we are on the right track. > (CC'ing Emilien with whom we worked on integrating RDO into Puppet Modules upstream CI) Thanks for dealing with our CI, as we have a more robust continuous packaging machinery with delorean, this will be the next hot topic. We need more complete coverage for RDO as it will help detecting integration issues on Fedora/EL and fix them earlier. This is an important step to make Fedora/EL first-class citizens upstream, and encourage upstream maintainers to develop on these platforms. > As I've been trying to improve test coverage, a good first step would > be to enforce tempest smoke tests. > However, I've noticed that khaleesi uses a fork of tempest [4] and > this generated a failure [5] on a test that has since been fixed > upstream [6]. > > I'm very concerned about testing trunk RDO against a fork of tempest. > We should be testing trunk RDO against trunk tempest. > Running against a fork means we might lack some important changes to > test coverage or can unnecessarily encounter failures which have > already been resolved upstream. > > My understanding is that Red Hat maintains a fork of tempest to run > test suites against products which have a longer release and support > cycles and that is fine. > Should we switch RDO CI testing to the upstream branches ? > > Thanks, > Yes, we should definitively set this as a goal, at the very least, we should have running it in parallel and work on fixing issues.\ Please update trello accordingly so we could track this effort. Regards, H. 
> [1]: https://prod-rdojenkins.rhcloud.com/view/RDO-Liberty-Delorean-Trunk/ > [2]: https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py > [3]: https://bugzilla.redhat.com/show_bug.cgi?id=1249685 > [4]: https://github.com/redhat-openstack/khaleesi-settings/blob/master/settings/tester/tempest/setup/git.yml#L4 > [5]: http://paste.openstack.org/show/463360/ > [6]: https://github.com/openstack/tempest/commit/986b9e6fda6c76806a329bd53f5f73f557da903e#diff-06c9ec2d3ed36f96bfce6f75e82a4f45 > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From pmyers at redhat.com Tue Sep 15 20:13:04 2015 From: pmyers at redhat.com (Perry Myers) Date: Tue, 15 Sep 2015 16:13:04 -0400 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: References: <55F84E2D.5040809@redhat.com> Message-ID: <55F87BD0.9090506@redhat.com> On 09/15/2015 02:05 PM, Ha?kel wrote: > 2015-09-15 18:58 GMT+02:00 Perry Myers : >> Sounds like a good plan. >> >> So, for new openstack service packages (like Murano, or other services >> we haven't packaged yet) or for other 3rd party packages that we're >> working to get into RDO... >> >> Do we need to get those new services/deps/3rd party plugins into Fedora >> still? Or do we have a way to get those into Delorean for Fedora and >> CentOS CloudSIG w/o requiring Fedora formal inclusion? >> >> Perry >> > > General dependencies will still be imported into Fedora so we could ensure > that we remain consistent with the next RHEL/CentOS. yep, absolutely > As we already do, they will be imported in Delorean before approval in > order to keep > the continuous delivery pipeline working. +1 > As for services and their plugins, we'll keep the same process and > guidelines but two > major differences: > * reviews will be under RDO product instead of Fedora. > * packaging guidelines exceptions will be granted by the RDO team > during our weekly > meeting. Sounds excellent. One question, did we solve the question of where to host the spec files and distgit for the packages that we're not going to maintain in Fedora? From apevec at gmail.com Tue Sep 15 20:50:37 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 15 Sep 2015 22:50:37 +0200 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: <55F87BD0.9090506@redhat.com> References: <55F84E2D.5040809@redhat.com> <55F87BD0.9090506@redhat.com> Message-ID: > One question, did we solve the question of where to > host the spec files and distgit for the packages that we're not going to > maintain in Fedora? distgit will be rdo-liberty branch on gerrithub/openstack-packages, technical details for openstack-* maintainers are coming in my followup post. 
Alan From hguemar at fedoraproject.org Tue Sep 15 18:33:16 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 15 Sep 2015 20:33:16 +0200 Subject: [Rdo-list] Question about Restart=always in systemd In-Reply-To: <20150915163756.GD14112@redhat.com> References: <316691786CCAEE44AE90482264E3AB8213A9812C@xmb-rcd-x08.cisco.com> <936474675.323251.1440168150920.JavaMail.zimbra@speichert.pl> <20150904151716.GA5011@mattdm.org> <20150911153214.GA21461@mattdm.org> <20150915163756.GD14112@redhat.com> Message-ID: 2015-09-15 18:37 GMT+02:00 Lars Kellogg-Stedman : > On Fri, Sep 11, 2015 at 11:32:14AM -0400, Matthew Miller wrote: >> On Fri, Sep 04, 2015 at 11:17:16AM -0400, Matthew Miller wrote: >> > > I'd think that "Restart=always" is a good setting for all services. >> > > What it really brings up is maybe the issue of streamlining the unit >> > > config files. >> > >> > For several releases, we've had packaging guidelines in Fedora >> > encouraging Restart=on-failure or Restart=on-abnormal: >> > >> > https://fedoraproject.org/wiki/Packaging:Systemd#Automatic_restarting >> > >> > We never, however, had an effort to bring existing packages into a >> > consistent state. I'd love for that effort to happen ? anyone >> > interesting in helping out? >> >> And then there were crickets. :) > > I didn't see any other traffic on this, so... > > How about a mechanism to *generate* unit files using information > specified by the package? Maybe packages would provide "stub" unit > files that contain only things that differ from standard behavior > (e.g., description, dependencies, environmentfiles, etc), and then the > actual unit file is generated by filling in the missing information. > E.g., for openstack-nova, a package provides the following stub: > > [Service] > Type=notify > NotifyAcess=all > TimeoutStartSec=0 > User=nova > > And then you run: > > install-systemd-unit \ > --execstart /usr/bin/nova-api \ > --description "OpenStack Nova API Server" \ > /path/to/stub/units/openstack-nova-template.service \ > /lib/systemd/system/openstack-nova-api.service > > And you get: > > [Unit] > Description=OpenStack Nova API Server > After=syslog.target network.target > > [Service] > Type=notify > NotifyAccess=all > User=nova > ExecStart=/usr/bin/nova-api > Restart=on-failure > > [Install] > WantedBy=multi-user.target > > And for, say, openstack-nova-scheduler: > > install-systemd-unit \ > --execstart /usr/bin/nova-scheduler \ > --description "OpenStack Nova Scheduler Server" \ > /path/to/stub/units/openstack-nova-template.service \ > /lib/systemd/system/openstack-nova-scheduler.service > Not a bad idea, could be a feature for Fedora 24 (systemd-unit-utils) /me adds this on his TODO Could be a RPM macro but it would be very ugly. > Or if that's too crazy, just add a check to the fedora-review tool > that ensures unit files have standard settings, such as the Restart= > setting. The review could flag things like: > > - Missing Restart behavior > - Missing Description > Easy fix. > Either solution would be reasonably easy to put together. I like the > first one, because (a) yay automation and because (b) it would allow > for local policy overrides ("I want all my services to run with > Restart=always instead of Restart=on-failure"). > > An auditing tool could also be used to check all of the *existing* > packages, if that were preferable in addition to or as an alternative > too either of the above. 
> > -- > Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} > Cloud Engineering / OpenStack | http://blog.oddbit.com/ > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From apevec at gmail.com Tue Sep 15 23:37:43 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 16 Sep 2015 01:37:43 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers Message-ID: Subject: RDO Liberty and Fedora - details for openstack-* maintainers As a followup, here are few notes for the package maintainers, until rdo-packaging documentation is updated with this workflow. distgit will be rdo-liberty branch on gerrithub/openstack-packages, create it from Fedora master and squash merge rpm-master: git remote add -f gerrit ssh://review.gerrithub.io:29418/openstack-packages/$PROJECT # skip if already present git-review -s git remote add -f fedora git://pkgs.fedoraproject.org/openstack-$PROJECT.git # skip if already present git branch rdo-liberty fedora/master git checkout rdo-liberty git merge --squash gerrit/rpm-master git reset HEAD .gitignore sources .gitreview git checkout .gitignore sources Resolve conflicts in .spec file Set correct Liberty Version: for your project Release: field should be pre-release 0.N until GA e.g. openstack-keystone.spec %global release_name liberty %global pypi_name keystone %global milestone .0b3 %{!?upstream_version: %global upstream_version %{version}%{?milestone}} Name: openstack-keystone Epoch: 1 Version: 8.0.0 Release: 0.3%{?milestone}%{?dist} Get the source tarball for the current milestone e.g. milestone 3: Source0: http://launchpad.net/%{pypi_name}/%{release_name}/%{release_name}-3/+download/%{pypi_name}-%{upstream_version}.tar.gz If Source0: is like above, rdopkg new-version --bump-only $UPSTREAM_VERSION should work. Preserve Kilo patches if any: ideally, RDO packages are pure upstream and don't need any patches in RPM -patches branches are as before in github/redhat-openstack/$PROJECT but since there isn't Fedora:OpenStack 1:1 anymore, branch name for RDO Liberty RPM patches will be liberty-patches. You might want to archive old changelog entries in ChangeLog.old like done for Kilo GA http://pkgs.fedoraproject.org/cgit/openstack-nova.git/commit/ChangeLog.old?id=8945622d349ef552dbc182f60485f6807d7c8708 When done: git commit -m "Update to Liberty" git push -u gerrit rdo-liberty:rdo-liberty For packages already in Fedora, build Liberty versions in Rawhide and ping apevec or number80 on Freenode #rdo for CloudSIG EL7 build. If you get stuck, please ping us on IRC and/or reply to this thread! Cheers, Alan From rbowen at redhat.com Wed Sep 16 12:14:21 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 16 Sep 2015 08:14:21 -0400 Subject: [Rdo-list] RDO Test Day, Sep 23-24 Message-ID: <55F95D1D.10308@redhat.com> We are planning to do a test day of Liberty RDO next week, September 23rd and 24th. The test day notes are shaping up at https://www.rdoproject.org/RDO_test_day_Liberty and should be fleshed out more by the end of the week. As usual, we'll coordinate on #rdo (on Freenode) for questions and discussion. The packages that we will be testing have been through CI, so we should be able to have a fairly successful day. If you have things that you'd like to see tested, please add these to the test case matrix. 
We're aware that a number of people will be out during this time, but it's been difficult to find days that work for everyone. So we're planning to have another test day in the weeks to come. Announcement to come soon. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Wed Sep 16 12:21:58 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 16 Sep 2015 08:21:58 -0400 Subject: [Rdo-list] Fwd: [Openstack] User Survey - Deadline Sept 25th In-Reply-To: <55F8E4AE.7060607@openstack.org> References: <55F8E4AE.7060607@openstack.org> Message-ID: <55F95EE6.4060808@redhat.com> A reminder, in case you haven't seen it. It's survey time again, and the deadline is next Friday. -------- Forwarded Message -------- Subject: [Openstack] User Survey - Deadline Sept 25th Date: Wed, 16 Sep 2015 11:40:30 +0800 From: Tom Fifield To: openstack at lists.openstack.org, OpenStack Operators , community at lists.openstack.org Hi all, If you run OpenStack, build apps on it, or have customers with OpenStack deployments, please take a few minutes to respond to the latest User Survey or pass it along to your friends. Since 2013, the user survey has provided significant insight into what people are deploying and how they're using OpenStack. You can see the most recent results in these SuperUser Articles: [1][2][3]. Please follow the link and instructions below to complete the User Survey by ***September 25th, 2015 at 23:00 UTC***. If you already completed the survey, there's no need to start over. You can simply log back in to update your Deployment Profile, as well as take the opportunity to provide additional input. You need to do this to keep your past survey responses active, but we hope you'll do it because we've made the survey shorter and with more interesting questions ;) Take the Survey ( http://www.openstack.org/user-survey ) All of the information you provide is confidential to the Foundation and User Committee and will be aggregated anonymously unless you clearly indicate we can publish your organization?s profile. Remember you can hear directly from users and see the aggregate survey findings by attending the next OpenStack Summit, October 27-30 in Tokyo (http://www.openstack.org/summit). Thank you again for your support. -Tom [1] http://superuser.openstack.org/articles/user-survey-identifies-leading-industries-and-business-drivers-for-openstack-adoption [2] http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up [3] http://superuser.openstack.org/articles/openstack-application-developers-share-insights _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From chkumar246 at gmail.com Wed Sep 16 13:08:31 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 16 Sep 2015 18:38:31 +0530 Subject: [Rdo-list] bug statistics for 2015-09-16 Message-ID: # RDO Bugs on 2015-09-16 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . 
To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 259 - Fixed (MODIFIED, POST, ON_QA): 175 ## Number of open bugs by component diskimage-builder [ 4] +++ distribution [ 12] ++++++++++ dnsmasq [ 1] instack [ 4] +++ instack-undercloud [ 23] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 5] ++++ openstack-cinder [ 13] +++++++++++ openstack-foreman-inst... [ 3] ++ openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 1] openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] ++++++ openstack-neutron [ 6] +++++ openstack-nova [ 17] +++++++++++++++ openstack-packstack [ 45] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] +++++++++ openstack-selinux [ 13] +++++++++++ openstack-swift [ 2] + openstack-tripleo [ 24] +++++++++++++++++++++ openstack-tripleo-heat... [ 5] ++++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 3] ++ openvswitch [ 1] python-glanceclient [ 1] python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] ++++ python-oslo-config [ 1] rdo-manager [ 22] +++++++++++++++++++ rdo-manager-cli [ 6] +++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (259 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1228761 ] http://bugzilla.redhat.com/1228761 (NEW) Component: diskimage-builder Last change: 2015-06-10 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos ### distribution (12 bugs) [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 
2015-06-09 Summary: Unable to launch an instance [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-09-15 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1258560 ] http://bugzilla.redhat.com/1258560 (ASSIGNED) Component: distribution Last change: 2015-09-15 Summary: /usr/share/openstack-dashboard/openstack_dashboard/temp lates/_stylesheets.html: /bin/sh: horizon.utils.scss_filter.HorizonScssFilter: command not found [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### instack (4 bugs) [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (23 bugs) [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . 
[1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. (4048492 < 4G)" [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. 
See log for details.', 'os-refresh-config')" [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (5 bugs) [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library ### openstack-cinder (13 bugs) [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1229551 ] 
http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (1 bug) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. 
(HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf ### openstack-neutron (6 bugs) [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https ### openstack-nova (17 bugs) [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allow get object after date exires [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesnt write to log [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though its disabled. 
### openstack-packstack (45 bugs) [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 
2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. 
[1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. Invalid parameter rabbit_user [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-08-26 Summary: nss.load missing from packstack, httpd unable to start. [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error ### openstack-puppet-modules (11 bugs) [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication ### openstack-selinux (13 bugs) [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux [1249685 ] http://bugzilla.redhat.com/1249685 (NEW) Component: openstack-selinux Last change: 2015-09-16 Summary: libffi should not require execmem when selinux is enabled [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux [1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional ### openstack-swift (2 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 
2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (24 bugs) [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Node registration fails silently if instackenv.json is badly formatted 
[1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix ### openstack-tripleo-heat-templates (5 bugs) [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 [1232015 ] http://bugzilla.redhat.com/1232015 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: instack-undercloud: one controller deployment: running "pcs status" - Error: cluster is not currently running on this node [1235508 ] http://bugzilla.redhat.com/1235508 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-25 Summary: Package update does not take puppet managed packages into account [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates ### openstack-utils (3 bugs) [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 
2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### python-glanceclient (1 bug) [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (22 bugs) [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1216981 ] http://bugzilla.redhat.com/1216981 
(ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-05-20 Summary: Read bit set for others for Openstack services directories in /etc [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon ### rdo-manager-cli (6 bugs) [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(175 bugs) ### distribution (5 bugs) [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (1 bug) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (5 bugs) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] ### openstack-glance (3 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue ### openstack-heat (3 bugs) [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: 
missing dependency in Heat delorean build ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (13 bugs) [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. neutron-db-manage trying to import systemd [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service ### openstack-nova (5 bugs) [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. 
[1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options ### openstack-packstack (58 bugs) [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1006534 ] http://bugzilla.redhat.com/1006534 
(MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack 
Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
### openstack-puppet-modules (18 bugs) [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance ### openstack-sahara (1 bug) [1184522 ] http://bugzilla.redhat.com/1184522 (MODIFIED) Component: openstack-sahara Last change: 2015-03-27 Summary: launch_command.py missing ### openstack-selinux (12 bugs) [1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux 
policies set for neutron-dhcp-agent [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo (1 bug) [1162333 ] http://bugzilla.redhat.com/1162333 (ON_QA) Component: openstack-tripleo Last change: 2015-06-02 Summary: Instack fails to complete instack-virt-setup with syntax error near unexpected token `newline' ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py ### openstack-utils 
(2 bugs) [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager ### python-cinderclient (2 bugs) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run [1260154 ] http://bugzilla.redhat.com/1260154 (ON_QA) Component: python-cinderclient Last change: 2015-09-06 Summary: missing dependency on keystoneclient ### python-django-horizon (3 bugs) [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ ### python-glanceclient (3 bugs) [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1244291 ] http://bugzilla.redhat.com/1244291 (MODIFIED) Component: python-glanceclient Last change: 2015-08-01 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion ### python-neutronclient (3 bugs) [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() ### 
python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (5 bugs) [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason ### rdo-manager-cli (8 bugs) [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. 
[1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Wed Sep 16 13:37:08 2015 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 16 Sep 2015 09:37:08 -0400 (EDT) Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> Message-ID: <1235487111.51892256.1442410628155.JavaMail.zimbra@redhat.com> > > Dear all, > > > > Due to a planned maintenance of the infrastructure supporting the Delorean > > instance (trunk.rdoproject.org), it is expected to be offline between > > September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT). > > > > We will be sending updates to the list if there is any additional > > information > > or change in the plans, and keep you updated on the status. > > Dear all, > > We have set up a temporary server to host the Delorean repos, to avoid any > outage. > > If you are consuming the Delorean Trunk repos using the DNS name > (trunk.rdoproject.org), there is no action required from your side. Plese > note the temporary server is not processing new packages yet. > Dear all, Maintenance is now over, and the Delorean server is again active and processing packages. Everything seems to be ok, but please let us know if you find any issue. Regards, Javier From dms at redhat.com Wed Sep 16 13:46:41 2015 From: dms at redhat.com (David Moreau Simard) Date: Wed, 16 Sep 2015 09:46:41 -0400 Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: <1235487111.51892256.1442410628155.JavaMail.zimbra@redhat.com> References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> <1235487111.51892256.1442410628155.JavaMail.zimbra@redhat.com> Message-ID: Looks good to me so far. Thanks Javier ! David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Wed, Sep 16, 2015 at 9:37 AM, Javier Pena wrote: > >> > Dear all, >> > >> > Due to a planned maintenance of the infrastructure supporting the Delorean >> > instance (trunk.rdoproject.org), it is expected to be offline between >> > September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT). >> > >> > We will be sending updates to the list if there is any additional >> > information >> > or change in the plans, and keep you updated on the status. >> >> Dear all, >> >> We have set up a temporary server to host the Delorean repos, to avoid any >> outage. 
>> >> If you are consuming the Delorean Trunk repos using the DNS name >> (trunk.rdoproject.org), there is no action required from your side. Plese >> note the temporary server is not processing new packages yet. >> > > Dear all, > > Maintenance is now over, and the Delorean server is again active and processing packages. > > Everything seems to be ok, but please let us know if you find any issue. > > Regards, > Javier > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From javier.pena at redhat.com Wed Sep 16 13:53:41 2015 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 16 Sep 2015 09:53:41 -0400 (EDT) Subject: [Rdo-list] [delorean] Delorean switch to f22 In-Reply-To: <914020501.51899662.1442411106194.JavaMail.zimbra@redhat.com> Message-ID: <1466235721.51903568.1442411621859.JavaMail.zimbra@redhat.com> Hi all, As previously discussed [1], we are switching Fedora Delorean builds from f21 to f22. The f22 instance has been running stable for several days, and it is already accessible from https://trunk.rdoproject.org/f22 . The switch will happen on Friday, September 18. After that day, we will no longer process packages using f21, and will delete the older, f21-based data. http://trunk.rdoproject.org/f21 will remain active, but it will be a symlink to the f22 repo. Please let us know if you have any questions or concerns. Regards, Javier [1] https://www.redhat.com/archives/rdo-list/2015-September/msg00005.html From ihrachys at redhat.com Wed Sep 16 15:53:19 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 17:53:19 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: References: Message-ID: > On 16 Sep 2015, at 01:37, Alan Pevec wrote: > > Subject: RDO Liberty and Fedora - details for openstack-* maintainers > > > As a followup, here are few notes for the package maintainers, until > rdo-packaging documentation is updated with this workflow. > > distgit will be rdo-liberty branch on gerrithub/openstack-packages, > create it from Fedora master and squash merge rpm-master: > > git remote add -f gerrit > ssh://review.gerrithub.io:29418/openstack-packages/$PROJECT # skip if > already present > git-review -s > git remote add -f fedora > git://pkgs.fedoraproject.org/openstack-$PROJECT.git # skip if already > present > git branch rdo-liberty fedora/master > git checkout rdo-liberty > git merge --squash gerrit/rpm-master > git reset HEAD .gitignore sources .gitreview > git checkout .gitignore sources > > Resolve conflicts in .spec file > Set correct Liberty Version: for your project > Release: field should be pre-release 0.N until GA > e.g. openstack-keystone.spec > > %global release_name liberty > %global pypi_name keystone > %global milestone .0b3 > > %{!?upstream_version: %global upstream_version %{version}%{?milestone}} > > Name: openstack-keystone > Epoch: 1 > Version: 8.0.0 > Release: 0.3%{?milestone}%{?dist} > > Get the source tarball for the current milestone e.g. milestone 3: > Source0: > http://launchpad.net/%{pypi_name}/%{release_name}/%{release_name}-3/+download/%{pypi_name}-%{upstream_version}.tar.gz > If Source0: is like above, rdopkg new-version --bump-only > $UPSTREAM_VERSION should work. 
> > Preserve Kilo patches if any: ideally, RDO packages are pure upstream > and don't need any patches in RPM > -patches branches are as before in github/redhat-openstack/$PROJECT > but since there isn't Fedora:OpenStack 1:1 anymore, branch name for > RDO Liberty RPM patches will be liberty-patches. > > You might want to archive old changelog entries in ChangeLog.old like > done for Kilo GA > http://pkgs.fedoraproject.org/cgit/openstack-nova.git/commit/ChangeLog.old?id=8945622d349ef552dbc182f60485f6807d7c8708 > When done: > git commit -m "Update to Liberty" > git push -u gerrit rdo-liberty:rdo-liberty > > For packages already in Fedora, build Liberty versions in Rawhide and > ping apevec or number80 on Freenode #rdo for CloudSIG EL7 build. > > If you get stuck, please ping us on IRC and/or reply to this thread! I am stuck on building Fedora Rawhide package. When I do fedpkg build, I get: -bash-4.2$ fedpkg build Could not execute build: Unknown build target: rdo-liberty-candidate I presume it expects I do it from Fedora distgit. Also, not clear what I should build in case of openstack-neutron-*aas repos that are not in Fedora, yet and probably won?t be there at all now that we move distgit out of Fedora infra. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ihrachys at redhat.com Wed Sep 16 15:59:48 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 17:59:48 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: References: Message-ID: <5C70E76B-9220-4E88-BD0A-0ED9201D0314@redhat.com> > On 16 Sep 2015, at 01:37, Alan Pevec wrote: > > Subject: RDO Liberty and Fedora - details for openstack-* maintainers > > > As a followup, here are few notes for the package maintainers, until > rdo-packaging documentation is updated with this workflow. > > distgit will be rdo-liberty branch on gerrithub/openstack-packages, > create it from Fedora master and squash merge rpm-master: > > git remote add -f gerrit > ssh://review.gerrithub.io:29418/openstack-packages/$PROJECT # skip if > already present > git-review -s > git remote add -f fedora > git://pkgs.fedoraproject.org/openstack-$PROJECT.git # skip if already > present > git branch rdo-liberty fedora/master > git checkout rdo-liberty > git merge --squash gerrit/rpm-master > git reset HEAD .gitignore sources .gitreview > git checkout .gitignore sources > > Resolve conflicts in .spec file > Set correct Liberty Version: for your project > Release: field should be pre-release 0.N until GA > e.g. openstack-keystone.spec > > %global release_name liberty > %global pypi_name keystone > %global milestone .0b3 > > %{!?upstream_version: %global upstream_version %{version}%{?milestone}} > > Name: openstack-keystone > Epoch: 1 > Version: 8.0.0 > Release: 0.3%{?milestone}%{?dist} > > Get the source tarball for the current milestone e.g. milestone 3: > Source0: > http://launchpad.net/%{pypi_name}/%{release_name}/%{release_name}-3/+download/%{pypi_name}-%{upstream_version}.tar.gz > If Source0: is like above, rdopkg new-version --bump-only > $UPSTREAM_VERSION should work. 
> > Preserve Kilo patches if any: ideally, RDO packages are pure upstream > and don't need any patches in RPM > -patches branches are as before in github/redhat-openstack/$PROJECT > but since there isn't Fedora:OpenStack 1:1 anymore, branch name for > RDO Liberty RPM patches will be liberty-patches. > > You might want to archive old changelog entries in ChangeLog.old like > done for Kilo GA > http://pkgs.fedoraproject.org/cgit/openstack-nova.git/commit/ChangeLog.old?id=8945622d349ef552dbc182f60485f6807d7c8708 > When done: > git commit -m "Update to Liberty" > git push -u gerrit rdo-liberty:rdo-liberty > > For packages already in Fedora, build Liberty versions in Rawhide and > ping apevec or number80 on Freenode #rdo for CloudSIG EL7 build. > > If you get stuck, please ping us on IRC and/or reply to this thread! > > Cheers, > Alan First, thanks for the detailed plan, I think I got thru it, mostly. One thing I want to note is that with the high number of packages I maintain, and usual lack of time to go thru rebasing it for each milestone, I would be very glad to see maintainers offloaded of the duty to apply those manual steps, since they all can be automated, assuming that there are no differences in Fedora distgit comparing to delorean. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ihrachys at redhat.com Wed Sep 16 16:05:37 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 18:05:37 +0200 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: References: Message-ID: > On 15 Sep 2015, at 17:12, Boris Derzhavets wrote: > > > > > Reasoning is that Fedora user population are developers who always > > need latest and greatest version and Delorean Trunk packages are the > > best match. We were trying to follow 1:1 mapping between Fedora and > > OpenStack release but it is getting out of sync now (current f22 is > > Juno, unreleased f23 is Kilo) and it's getting impossible to keep up > > with required dependency versions in the current stable Fedora without > > breaking older OpenStack release. > > Could you please,specify how to set up Delorean Trunk Repos for Liberty > ( a kind of Quick Start Page for Liberty testing ) :- > > 1. CentOS 7.1 > 2. Fedora 22 I don?t have it completely automated, but for repo setup, you can check: https://github.com/booxter/vagrant-projects/blob/master/.ansible/delorean-packstack.yml Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chkumar246 at gmail.com Wed Sep 16 16:08:36 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 16 Sep 2015 21:38:36 +0530 Subject: [Rdo-list] [meeting] RDO packaging meeting (2015-09-16) Message-ID: ======================================== #rdo: RDO packaging meeting (2015-09-16) ======================================== Meeting started by chandankumar at 15:01:27 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-09-16/rdo.2015-09-16-15.01.log.html . 
Meeting summary --------------- * LINK: https://etherpad.openstack.org/p/RDO-Packaging (rbowen, 15:03:37) * RDO website (chandankumar, 15:03:48) * beta version of RDO website is up http://beta.rdoproject.org/ (chandankumar, 15:04:05) * LINK: https://www.redhat.com/archives/rdo-list/2015-September/msg00077.html (chandankumar, 15:04:13) * LINK: https://github.com/redhat-openstack/website/ (chandankumar, 15:05:08) * RDO DOC hack day on 12/13 Oct, 2015. (chandankumar, 15:05:46) * start cloning and poking around at RDO website (number80, 15:06:07) * RDO Liberty and Fedora (chandankumar, 15:07:57) * LINK: https://www.redhat.com/archives/rdo-list/2015-September/msg00090.html (chandankumar, 15:08:24) * ACTION: all maintainers are requested to prepare liberty release for their packages (number80, 15:08:40) * LINK: https://www.redhat.com/archives/rdo-list/2015-September/msg00113.html (chandankumar, 15:08:51) * LINK: https://trello.com/c/GPqDlVLs/63-liberty-3-rpms (chandankumar, 15:11:05) * ACTION: apevec send another followup to rdo-list explaining workflow for new RDO packages (apevec, 15:13:53) * ACTION: number80 create package-review component (number80, 15:14:33) * Needs version Bumps for following packages (chandankumar, 15:16:43) * LINK: https://etherpad.openstack.org/p/RDO-Packaging (apevec, 15:33:48) * New Package scm request (chandankumar, 15:35:04) * LINK: python-oslo-reports - https://bugzilla.redhat.com/show_bug.cgi?id=1241088 (chandankumar, 15:35:14) * LINK: python-castellan - https://bugzilla.redhat.com/show_bug.cgi?id=1259919 (chandankumar, 15:35:27) * LINK: openstack-barbican https://bugzilla.redhat.com/show_bug.cgi?id=1190269 (chandankumar, 15:36:43) * LINK: https://admin.fedoraproject.org/pkgdb/package/python-barbicanclient/ (chandankumar, 15:38:11) * Status of maintenance works on Delorean instance (chandankumar, 15:39:12) * f21 Delorean worker deprecation (chandankumar, 15:40:52) * LINK: https://www.redhat.com/archives/rdo-list/2015-September/msg00119.html (chandankumar, 15:41:24) * LINK: https://trunk.rdoproject.org/f22/status_report.html (apevec, 15:42:27) * RDO Liberty Test day - September 23/24 (chandankumar, 15:45:31) * LINK: RDO liberty test day https://www.redhat.com/archives/rdo-list/2015-September/msg00114.html (chandankumar, 15:46:04) * LINK: https://www.rdoproject.org/RDO_test_day_Liberty was not migrated (apevec, 15:47:48) * Delorean CI: https://prod-rdojenkins.rhcloud.com/view/RDO-Liberty-Delorean-Trunk/ (chandankumar, 15:51:11) * LINK: https://www.redhat.com/archives/rdo-list/2015-September/msg00105.html (dmsimard, 15:51:38) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1249685 (dmsimard, 15:52:54) * LINK: https://review.gerrithub.io/#/c/246606/ (dmsimard, 15:54:49) * Fedora client import (chandankumar, 15:55:46) * open floor (chandankumar, 16:00:34) * chair rotation for next meeting (chandankumar, 16:01:38) * ACTION: jpena to chair next meeting (chandankumar, 16:02:03) Meeting ended at 16:02:47 UTC. 
Action Items ------------ * all maintainers are requested to prepare liberty release for their packages * apevec send another followup to rdo-list explaining workflow for new RDO packages * number80 create package-review component * jpena to chair next meeting Action Items, by person ----------------------- * apevec * apevec send another followup to rdo-list explaining workflow for new RDO packages * jpena * jpena to chair next meeting * number80 * number80 create package-review component * **UNASSIGNED** * all maintainers are requested to prepare liberty release for their packages People Present (lines said) --------------------------- * chandankumar (87) * apevec (72) * number80 (32) * rbowen (25) * dmsimard (15) * zodbot (14) * elmiko (10) * jpena (9) * jruzicka (9) * eggmaster (6) * ihrachys (6) * lon (3) * xaeth (3) * kfox1111 (2) * trown (1) * dtantsur (1) * kashyap (1) * social (1) * csim (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Wed Sep 16 16:28:02 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 16 Sep 2015 18:28:02 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: References: Message-ID: >> For packages already in Fedora, build Liberty versions in Rawhide ... > I am stuck on building Fedora Rawhide package. When I do fedpkg build, I get: > > -bash-4.2$ fedpkg build > Could not execute build: Unknown build target: rdo-liberty-candidate > > I presume it expects I do it from Fedora distgit. Yeah, I forgot to mention, for packages already in Fedora you need to merge to Fedora master and push before building in Rawhide: git checkout master git merge rdo-liberty #should be clean fast-forward if following steps > Also, not clear what I should build in case of openstack-neutron-*aas repos that are not in Fedora, yet and probably won?t be there at all now that we move distgit out of Fedora infra. Create and push rdo-liberty branches for them on gerrithub, we'll do CBS builds from there. Cheers, Alan From ihrachys at redhat.com Wed Sep 16 16:30:25 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 18:30:25 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: References: Message-ID: <8D842C8B-DEF0-4994-B35B-1AF3E4AACDCF@redhat.com> > On 16 Sep 2015, at 18:28, Alan Pevec wrote: > >>> For packages already in Fedora, build Liberty versions in Rawhide > ... >> I am stuck on building Fedora Rawhide package. When I do fedpkg build, I get: >> >> -bash-4.2$ fedpkg build >> Could not execute build: Unknown build target: rdo-liberty-candidate >> >> I presume it expects I do it from Fedora distgit. > > Yeah, I forgot to mention, for packages already in Fedora you need to > merge to Fedora master and push before building in Rawhide: > git checkout master > git merge rdo-liberty #should be clean fast-forward if following steps > >> Also, not clear what I should build in case of openstack-neutron-*aas repos that are not in Fedora, yet and probably won?t be there at all now that we move distgit out of Fedora infra. > > Create and push rdo-liberty branches for them on gerrithub, we'll do > CBS builds from there. > How can I validate that it actually works before you trigger the build? Ihar -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From apevec at gmail.com Wed Sep 16 16:30:29 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 16 Sep 2015 18:30:29 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: <5C70E76B-9220-4E88-BD0A-0ED9201D0314@redhat.com> References: <5C70E76B-9220-4E88-BD0A-0ED9201D0314@redhat.com> Message-ID: > First, thanks for the detailed plan, I think I got thru it, mostly. Cool, thanks! > One thing I want to note is that with the high number of packages I maintain, and usual lack of time to go thru rebasing it for each milestone, I would be very glad to see maintainers offloaded of the duty to apply those manual steps, since they all can be automated, assuming that there are no differences in Fedora distgit comparing to delorean. Adding Jakub-the-automation-master: we could definitely script most of this, only manual step is resolving conflicts which I'm not sure how to automate. Cheers, Alan From apevec at gmail.com Wed Sep 16 16:32:48 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 16 Sep 2015 18:32:48 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: <8D842C8B-DEF0-4994-B35B-1AF3E4AACDCF@redhat.com> References: <8D842C8B-DEF0-4994-B35B-1AF3E4AACDCF@redhat.com> Message-ID: >> Create and push rdo-liberty branches for them on gerrithub, we'll do >> CBS builds from there. > > How can I validate that it actually works before you trigger the build? As a quicktest fedpkg --dist el7 local should work. I'll look at preparing mockbuild against CBS repos for more realistic test build. Cheers, Alan From bderzhavets at hotmail.com Wed Sep 16 16:33:02 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 16 Sep 2015 12:33:02 -0400 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: References: , Message-ID: > Subject: Re: [Rdo-list] RDO Liberty and Fedora > From: ihrachys at redhat.com > Date: Wed, 16 Sep 2015 18:05:37 +0200 > CC: apevec at gmail.com; rdo-list at redhat.com > To: bderzhavets at hotmail.com > > > On 15 Sep 2015, at 17:12, Boris Derzhavets wrote: > > > > > > > > > Reasoning is that Fedora user population are developers who always > > > need latest and greatest version and Delorean Trunk packages are the > > > best match. We were trying to follow 1:1 mapping between Fedora and > > > OpenStack release but it is getting out of sync now (current f22 is > > > Juno, unreleased f23 is Kilo) and it's getting impossible to keep up > > > with required dependency versions in the current stable Fedora without > > > breaking older OpenStack release. > > > > Could you please,specify how to set up Delorean Trunk Repos for Liberty > > ( a kind of Quick Start Page for Liberty testing ) :- > > 1. CentOS 7.1 > > 2. Fedora 22 > > I don?t have it completely automated, but for repo setup, you can check: > > https://github.com/booxter/vagrant-projects/blob/master/.ansible/delorean-packstack.yml Sorry, I feel a bit confused as far as I understood Rich Bowen recent message [Rdo-list] RDO Test Day, Sep 23-24 (for CentOS 7.X) :- 1.Set up repos per https://www.rdoproject.org/RDO_test_day_Liberty # cd /etc/yum.repos.d/ # wget http://trunk.rdoproject.org/centos7/delorean-deps.repo # wget http://trunk.rdoproject.org/centos7/current/delorean.repo Next for packstack install # yum install -y openstack-packstack Am I missing here enabling RDO KIlo repo or no ? 
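Is the right check here simply to confirm where packstack would come from once only those two repo files are in place? Something like this is what I had in mind (repo names are my guess from the .repo files, please correct me if wrong):

    # yum repolist enabled
    # yum info openstack-packstack | grep -E '^Repo|^Version'
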
Either this set up ( no RDO Kilo enabling ) is just for 09/23-24/2015 ? I apologize in advance for stupid questions. Boris. > > Ihar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Wed Sep 16 16:33:51 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 18:33:51 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: References: <5C70E76B-9220-4E88-BD0A-0ED9201D0314@redhat.com> Message-ID: > On 16 Sep 2015, at 18:30, Alan Pevec wrote: > >> First, thanks for the detailed plan, I think I got thru it, mostly. > > Cool, thanks! > >> One thing I want to note is that with the high number of packages I maintain, and usual lack of time to go thru rebasing it for each milestone, I would be very glad to see maintainers offloaded of the duty to apply those manual steps, since they all can be automated, assuming that there are no differences in Fedora distgit comparing to delorean. > > Adding Jakub-the-automation-master: we could definitely script most of > this, only manual step is resolving conflicts which I'm not sure how > to automate. I don?t plan to maintain any distgit only changes, and I am fine if we reset if any. Just override everything, apply steps, commit, build. On every delorean rpm-liberty commit. Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ihrachys at redhat.com Wed Sep 16 16:40:15 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 18:40:15 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: References: Message-ID: > On 16 Sep 2015, at 18:28, Alan Pevec wrote: > >>> For packages already in Fedora, build Liberty versions in Rawhide > ... >> I am stuck on building Fedora Rawhide package. When I do fedpkg build, I get: >> >> -bash-4.2$ fedpkg build >> Could not execute build: Unknown build target: rdo-liberty-candidate >> >> I presume it expects I do it from Fedora distgit. > > Yeah, I forgot to mention, for packages already in Fedora you need to > merge to Fedora master and push before building in Rawhide: > git checkout master > git merge rdo-liberty #should be clean fast-forward if following steps > ...Meaning, that fedora remote should be smth like: fedora ssh://ihrachyshka at pkgs.fedoraproject.org/openstack-designate.git (fetch) fedora ssh://ihrachyshka at pkgs.fedoraproject.org/openstack-designate.git (push) to have ssh push access. Ihar -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ihrachys at redhat.com Wed Sep 16 16:42:07 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 18:42:07 +0200 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: References: Message-ID: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> > On 16 Sep 2015, at 18:33, Boris Derzhavets wrote: > > > > > Subject: Re: [Rdo-list] RDO Liberty and Fedora > > From: ihrachys at redhat.com > > Date: Wed, 16 Sep 2015 18:05:37 +0200 > > CC: apevec at gmail.com; rdo-list at redhat.com > > To: bderzhavets at hotmail.com > > > > > On 15 Sep 2015, at 17:12, Boris Derzhavets wrote: > > > > > > > > > > > > > Reasoning is that Fedora user population are developers who always > > > > need latest and greatest version and Delorean Trunk packages are the > > > > best match. We were trying to follow 1:1 mapping between Fedora and > > > > OpenStack release but it is getting out of sync now (current f22 is > > > > Juno, unreleased f23 is Kilo) and it's getting impossible to keep up > > > > with required dependency versions in the current stable Fedora without > > > > breaking older OpenStack release. > > > > > > Could you please,specify how to set up Delorean Trunk Repos for Liberty > > > ( a kind of Quick Start Page for Liberty testing ) :- > > > 1. CentOS 7.1 > > > 2. Fedora 22 > > > > I don?t have it completely automated, but for repo setup, you can check: > > > > https://github.com/booxter/vagrant-projects/blob/master/.ansible/delorean-packstack.yml > > Sorry, I feel a bit confused as far as I understood Rich Bowen recent message > [Rdo-list] RDO Test Day, Sep 23-24 (for CentOS 7.X) :- > > 1.Set up repos per https://www.rdoproject.org/RDO_test_day_Liberty > > # cd /etc/yum.repos.d/ > # wget http://trunk.rdoproject.org/centos7/delorean-deps.repo > # wget http://trunk.rdoproject.org/centos7/current/delorean.repo > > Next for packstack install > > # yum install -y openstack-packstack > > Am I missing here enabling RDO KIlo repo or no ? > Either this set up ( no RDO Kilo enabling ) is just for 09/23-24/2015 ? > > I apologize in advance for stupid questions. > Boris. > > > > > Ihar That?s actually a good question. I suspect that delorean-deps.repo may replace Kilo providing needed deps missing in CentOS, but since the link returns 404, I can?t be sure. Rich, why do I get 404 for the links above? Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From apevec at gmail.com Wed Sep 16 17:05:58 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 16 Sep 2015 19:05:58 +0200 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> Message-ID: > I suspect that delorean-deps.repo may replace Kilo providing needed deps missing in CentOS, That's correct. > but since the link returns 404, I can?t be sure. > > Rich, why do I get 404 for the links above? Looks like we're experience OS1 (cloud Delorean instance is running on) outage fallout, I'm opening tickets as we type... 
Cheers, Alan From ihrachys at redhat.com Wed Sep 16 17:06:10 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 16 Sep 2015 19:06:10 +0200 Subject: [Rdo-list] RDO Liberty and Fedora - details for openstack-* maintainers In-Reply-To: References: <8D842C8B-DEF0-4994-B35B-1AF3E4AACDCF@redhat.com> Message-ID: <1BFB0EE9-80FC-4523-8498-37807CA1CF1A@redhat.com> > On 16 Sep 2015, at 18:32, Alan Pevec wrote: > >>> Create and push rdo-liberty branches for them on gerrithub, we'll do >>> CBS builds from there. >> >> How can I validate that it actually works before you trigger the build? > > As a quicktest fedpkg --dist el7 local should work. I'll look at > preparing mockbuild against CBS repos for more realistic test build. > > Cheers, > Alan If someone is as crazy as me who builds it on CentOS, then you also need: - modify your /etc/mock/epel-7-x86_64.cfg to include delorean deps as in [1]; - fedpkg --dist el7 --module-name openstack-neutron-vpnaas mockbuild --module-name is needed if you build from a dir that does not reflect package name (as is usually the case for openstack-* packages from delorean repos). [1]: https://github.com/javierpena/delorean-instance/blob/706b151dc3e9332643b9bbc01c11573358beb5a7/delorean-user-data.txt#L335-L347 Ihar -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From apevec at gmail.com Wed Sep 16 17:40:07 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 16 Sep 2015 19:40:07 +0200 Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> <1235487111.51892256.1442410628155.JavaMail.zimbra@redhat.com> Message-ID: >> Everything seems to be ok, but please let us know if you find any issue. All was fine until ~1h ago when folks started reporting 403 from trunk.rdoproject.org I see console log shows XFS errors and I cannot ssh into it, so I've opened the ticket for OS1 Public support. Rich, in the meantime I propose to move DNS trunk.rdoproject.org again to 54.196.178.107 backup. Cheers, Alan From javier.pena at redhat.com Wed Sep 16 19:59:07 2015 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 16 Sep 2015 15:59:07 -0400 (EDT) Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> <1235487111.51892256.1442410628155.JavaMail.zimbra@redhat.com> Message-ID: <1945559656.52294827.1442433547822.JavaMail.zimbra@redhat.com> ----- Original Message ----- > >> Everything seems to be ok, but please let us know if you find any issue. > > All was fine until ~1h ago when folks started reporting 403 from > trunk.rdoproject.org > > I see console log shows XFS errors and I cannot ssh into it, so I've > opened the ticket for OS1 Public support. > Rich, in the meantime I propose to move DNS trunk.rdoproject.org again > to 54.196.178.107 backup. > So I'm in the VM, running xfs_repair (-n, to see if there is actual damage) on the file system. So far there are no errors, but it is being really slow. Rich, did you get your permissions to change the DNS entry? That is our fastest way to fix it now :-/. 
Cheers, Javier > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Wed Sep 16 20:00:17 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 16 Sep 2015 16:00:17 -0400 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: <55F95D1D.10308@redhat.com> References: <55F95D1D.10308@redhat.com> Message-ID: <55F9CA51.6090307@redhat.com> On 09/16/2015 08:14 AM, Rich Bowen wrote: > We are planning to do a test day of Liberty RDO next week, September > 23rd and 24th. The test day notes are shaping up at > https://www.rdoproject.org/RDO_test_day_Liberty and should be fleshed > out more by the end of the week. As usual, we'll coordinate on #rdo (on > Freenode) for questions and discussion. > > The packages that we will be testing have been through CI, so we should > be able to have a fairly successful day. > > If you have things that you'd like to see tested, please add these to > the test case matrix. > > We're aware that a number of people will be out during this time, but > it's been difficult to find days that work for everyone. So we're > planning to have another test day in the weeks to come. Announcement to > come soon. > Based on the discussion today on the RDO Packaging meeting, we've moved the test day page to the new website, so you'll find the above document at http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ We also have a partial text matrix, at http://beta.rdoproject.org/testday/testedsetups-liberty-01/ Finally, a workarounds page is at http://beta.rdoproject.org/testday/workarounds-liberty-01/ to document things that are necessary to work around known problems. To update any of the above, you can send pull requests to https://github.com/redhat-openstack/website Thanks! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ayoung at redhat.com Wed Sep 16 22:34:55 2015 From: ayoung at redhat.com (Adam Young) Date: Wed, 16 Sep 2015 18:34:55 -0400 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: <55F95D1D.10308@redhat.com> References: <55F95D1D.10308@redhat.com> Message-ID: <55F9EE8F.2060103@redhat.com> On 09/16/2015 08:14 AM, Rich Bowen wrote: > We are planning to do a test day of Liberty RDO next week, September > 23rd and 24th. The test day notes are shaping up at > https://www.rdoproject.org/RDO_test_day_Liberty and should be fleshed > out more by the end of the week. As usual, we'll coordinate on #rdo > (on Freenode) for questions and discussion. > > The packages that we will be testing have been through CI, so we > should be able to have a fairly successful day. > > If you have things that you'd like to see tested, please add these to > the test case matrix. > > We're aware that a number of people will be out during this time, but > it's been difficult to find days that work for everyone. So we're > planning to have another test day in the weeks to come. Announcement > to come soon. > If the focus is going to be on RDO Manager (and it should) I think we have a shortage of Hardware to test on. Last iheard it required 20GB to run the multiple VMs for undercloud/overcloud. Is this the case? 
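For anyone who wants to try the virt setup on a single box, a rough host sanity check before starting (the sizing is only my own ballpark for one undercloud VM plus a minimal controller + compute overcloud, not an official requirement):

    free -g
    grep -cE 'vmx|svm' /proc/cpuinfo
    lsmod | grep kvm

Roughly 6GB for the undercloud VM, 4GB per overcloud node and some host overhead does land in the 16-20GB range, which rules out most laptops, so shared test hardware would help a lot here.
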
From mohammed.arafa at gmail.com Thu Sep 17 01:30:15 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 16 Sep 2015 21:30:15 -0400 Subject: [Rdo-list] [rdo-manager] stable repo v2 Message-ID: hi i asked this question a while back. how do i get to install rdo-manager using only the stable package repo? my understanding is that delorean is a CI dev repo. if i follow the documentation at http://docs.openstack.org/developer/tripleo-docs/ what do i need to change to _only_ use the stable packages repo? thanks in advance -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Thu Sep 17 01:36:13 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 16 Sep 2015 21:36:13 -0400 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: <55F9EE8F.2060103@redhat.com> References: <55F95D1D.10308@redhat.com> <55F9EE8F.2060103@redhat.com> Message-ID: I was wondering about that. Will some kind and generous company be donating hardware to test on? On Wed, Sep 16, 2015 at 6:34 PM, Adam Young wrote: > On 09/16/2015 08:14 AM, Rich Bowen wrote: > >> We are planning to do a test day of Liberty RDO next week, September 23rd >> and 24th. The test day notes are shaping up at >> https://www.rdoproject.org/RDO_test_day_Liberty and should be fleshed >> out more by the end of the week. As usual, we'll coordinate on #rdo (on >> Freenode) for questions and discussion. >> >> The packages that we will be testing have been through CI, so we should >> be able to have a fairly successful day. >> >> If you have things that you'd like to see tested, please add these to the >> test case matrix. >> >> We're aware that a number of people will be out during this time, but >> it's been difficult to find days that work for everyone. So we're planning >> to have another test day in the weeks to come. Announcement to come soon. >> >> If the focus is going to be on RDO Manager (and it should) I think we > have a shortage of Hardware to test on. Last iheard it required 20GB to > run the multiple VMs for undercloud/overcloud. > > Is this the case? > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lisha2010hust at gmail.com Thu Sep 17 07:41:56 2015 From: lisha2010hust at gmail.com (Sha Li) Date: Thu, 17 Sep 2015 15:41:56 +0800 Subject: [Rdo-list] nova cells with neutron network Message-ID: Hi, I am try to test the nova cells function with Juno release. My test deployment consits of one api-cell node, one child-cell node and one compute node. api-cell node: nova-api, nova-cells, nova-cert, nova-condoleauth, nova-novncproxy child-cell node: nova-cells, nova-conductor, nova-scheduler compute node: nova-compute I found most deployment example is using nova-network with nova-cells. I want to use neutron. So I had keystone , glance, and neutron-server, neutron-dhcp, neutron-l3 shared between all cells and deployed all on the api-cell node. I encounterd similar problem as described in this bug report https://bugs.launchpad.net/nova/+bug/1348103 When boot a new instance, nova-compute fails to get the network-vif-plugged notification and get time out waiting for the call back. 
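For reference, these are the knobs I am looking at on each side (crudini is just a convenient reader here; the values in the comments are what I believe the defaults are, and the nova_url is the api-cell node from my setup -- please correct me if I have any of this wrong):

    crudini --get /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal        # True
    crudini --get /etc/nova/nova.conf DEFAULT vif_plugging_timeout         # 300
    crudini --get /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes   # True
    crudini --get /etc/neutron/neutron.conf DEFAULT nova_url               # http://192.168.81.221:8774/v2
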
But on the neutron server side, it looks like the notification had been successfully sent and get the 200 response code from nova-api server I had to set vif_plugging_is_fatal = False Then the instnace can be spawned normally I am wondering how people use neutron with nova-cells, is this going to cause any trouble in large scale production deployment. Cheers, Sha --- neutron server log file 2015-08-22 00:20:35.464 16812 DEBUG neutron.notifiers.nova [-] Sending events: [{'status': 'completed', 'tag': u'2839ca4d-b632-4d64-a174-ecfe34a7a746', 'name': 'network-vif-plugged', 'server_uuid': u'092c8bc4-3643-44c0-b79e-ad5caac18b3d'}] send_events /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:232 2015-08-22 00:20:35.468 16812 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.81.221 2015-08-22 00:20:35.548 16812 DEBUG urllib3.connectionpool [-] "POST /v2/338aad513c604880a6a0dcc58b88b905/*os**-server-external-events *HTTP/1.1" 200 183 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionp ool.py:357 2015-08-22 00:20:35.550 16812 INFO neutron.notifiers.nova [-] Nova event *response*: {u'status': u'completed', u'tag': u'2839ca4d-b632-4d64-a174-ecfe34a7a746', u'name': u'network-vif-plugged', u'server_uuid': u'092c8bc4-3643-44c0-b79e-ad5caac18b3d', u'code': 200} -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Thu Sep 17 09:49:13 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 17 Sep 2015 11:49:13 +0200 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: <55F9CA51.6090307@redhat.com> References: <55F95D1D.10308@redhat.com> <55F9CA51.6090307@redhat.com> Message-ID: <20150917094913.GA7389@tesla.redhat.com> On Wed, Sep 16, 2015 at 04:00:17PM -0400, Rich Bowen wrote: [. . .] > Finally, a workarounds page is at > http://beta.rdoproject.org/testday/workarounds-liberty-01/ to document > things that are necessary to work around known problems. > > To update any of the above, you can send pull requests to > https://github.com/redhat-openstack/website I wonder if Workarounds page is one place where we can use a Wiki/Etherpad so that pople can document as issues arise. With pull requests, you'd have an unknown delay introduced as someone has to process those & merge them. -- /kashyap From apevec at gmail.com Thu Sep 17 11:41:10 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 17 Sep 2015 13:41:10 +0200 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: <20150917094913.GA7389@tesla.redhat.com> References: <55F95D1D.10308@redhat.com> <55F9CA51.6090307@redhat.com> <20150917094913.GA7389@tesla.redhat.com> Message-ID: > I wonder if Workarounds page is one place where we can use a > Wiki/Etherpad so that pople can document as issues arise. With pull > requests, you'd have an unknown delay introduced as someone has to > process those & merge them. IMHO wiki should be gone after migration to avoid confusion but for collecting workarounds during the day we should create a semi-official etherpad.o.o and then pull them into website. BTW PR should be fast, once we setup some kind of "CI" on PRs (using travis is easiest). With the current setup website is automatically published, not sure about details. Rich, Garrett - is it a cron job or something else? Also, what would be the verification script that website PR doesn't completely break the side, adds spam links etc? 
Cheers, Alan From ihrachys at redhat.com Thu Sep 17 11:42:48 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 17 Sep 2015 13:42:48 +0200 Subject: [Rdo-list] nova cells with neutron network In-Reply-To: References: Message-ID: > On 17 Sep 2015, at 09:41, Sha Li wrote: > > Hi, > > I am try to test the nova cells function with Juno release. > My test deployment consits of one api-cell node, one child-cell node and one compute node. > > api-cell node: nova-api, nova-cells, nova-cert, nova-condoleauth, nova-novncproxy > child-cell node: nova-cells, nova-conductor, nova-scheduler > compute node: nova-compute > > I found most deployment example is using nova-network with nova-cells. I want to use neutron. So I had keystone , glance, and neutron-server, neutron-dhcp, neutron-l3 shared between all cells and deployed all on the api-cell node. > > I encounterd similar problem as described in this bug report > https://bugs.launchpad.net/nova/+bug/1348103 > > When boot a new instance, nova-compute fails to get the network-vif-plugged notification and get time out waiting for the call back. > But on the neutron server side, it looks like the notification had been successfully sent and get the 200 response code from nova-api server > > I had to set > vif_plugging_is_fatal = False > Then the instnace can be spawned normally > > I am wondering how people use neutron with nova-cells, is this going to cause any trouble in large scale production deployment. > > > Cheers, > Sha > > > > --- neutron server log file > 2015-08-22 00:20:35.464 16812 DEBUG neutron.notifiers.nova [-] Sending events: [{'status': 'completed', 'tag': u'2839ca4d-b632-4d64-a174-ecfe34a7a746', 'name': 'network-vif-plugged', 'server_uuid': u'092c8bc4-3643-44c0-b79e-ad5caac18b3d'}] send_events /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:232 > 2015-08-22 00:20:35.468 16812 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.81.221 > 2015-08-22 00:20:35.548 16812 DEBUG urllib3.connectionpool [-] "POST /v2/338aad513c604880a6a0dcc58b88b905/os-server-external-events HTTP/1.1" 200 183 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357 > 2015-08-22 00:20:35.550 16812 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'2839ca4d-b632-4d64-a174-ecfe34a7a746', u'name': u'network-vif-plugged', u'server_uuid': u'092c8bc4-3643-44c0-b79e-ad5caac18b3d', u'code': 200} > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com I suggest to ask at operators@ mailing list in openstack. This list is for RDO, and the question seems more general. http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators Ihar -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From kchamart at redhat.com Thu Sep 17 11:49:59 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 17 Sep 2015 13:49:59 +0200 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: References: <55F95D1D.10308@redhat.com> <55F9CA51.6090307@redhat.com> <20150917094913.GA7389@tesla.redhat.com> Message-ID: <20150917114959.GA26714@tesla.redhat.com> On Thu, Sep 17, 2015 at 01:41:10PM +0200, Alan Pevec wrote: > > I wonder if Workarounds page is one place where we can use a > > Wiki/Etherpad so that pople can document as issues arise. With pull > > requests, you'd have an unknown delay introduced as someone has to > > process those & merge them. > > IMHO wiki should be gone after migration to avoid confusion > but for collecting workarounds during the day we should create a > semi-official etherpad.o.o and then pull them into website. Yep, semi-official etherpad sounds good. (And, yes - I agree that wiki should be gone after full migration.) > BTW PR should be fast, once we setup some kind of "CI" on PRs (using > travis is easiest). Nice. On a related note, I see that Travis allows IRC notification[1]. Probably we can enable that too. [I learnt about it from upstream QEMU, they use it on OFTC.] [1] http://docs.travis-ci.com/user/notifications/#IRC-notification > With the current setup website is automatically published, not sure > about details. Rich, Garrett - is it a cron job or something else? > Also, what would be the verification script that website PR doesn't > completely break the side, adds spam links etc? > > Cheers, > Alan -- /kashyap From dabarren at gmail.com Thu Sep 17 11:44:54 2015 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Thu, 17 Sep 2015 13:44:54 +0200 Subject: [Rdo-list] Delorean-rdo-management.repo missing Message-ID: Hi, Today, i have tried to create overcloud images with the following command: openstack overcloud image build --all The process keeps stuck while adding and updating system packages with delorean-rdo-management-repo. Yesterday, some problems related with delorean repository maintenance happened, so my question is: The missing repo file is supposed to be there or there is a bug that are searching the file in the wrong location? 
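One thing I could not rule out yet: the repo file was fetched during yesterday's outage window, so what curl saved may simply be an error page instead of a real .repo file. Checking the same URL by hand should show it (just a guess on my side, the URL is the one the element uses):

    curl -sI http://trunk.rdoproject.org/centos7/current/delorean-rdo-management.repo | head -1
    curl -s http://trunk.rdoproject.org/centos7/current/delorean-rdo-management.repo | head -5
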
Here is the output related with the issue: + echo dib-run-parts Wed Sep 16 21:08:51 UTC 2015 00-centos-cloud-repo completed dib-run-parts Wed Sep 16 21:08:51 UTC 2015 00-centos-cloud-repo completed + for target in '$targets' + output 'Running /tmp/in_target.d/pre-install.d/00-delorean-rdo-management' ++ date + echo dib-run-parts Wed Sep 16 21:08:51 UTC 2015 Running /tmp/in_target.d/pre-install.d/00-delorean-rdo-management dib-run-parts Wed Sep 16 21:08:51 UTC 2015 Running /tmp/in_target.d/pre-install.d/00-delorean-rdo-management + target_tag=00-delorean-rdo-management + date +%s.%N + /tmp/in_target.d/pre-install.d/00-delorean-rdo-management + export DELOREAN_TRUNK_MGT_REPO= http://trunk.rdoproject.org/centos7/current/ + DELOREAN_TRUNK_MGT_REPO=http://trunk.rdoproject.org/centos7/current/ + curl -o /etc/yum.repos.d/delorean-rdo-management.repo http://trunk.rdoproject.org/centos7/current//delorean-rdo-management.repo % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 243 100 243 0 0 427 0 --:--:-- --:--:-- --:--:-- 428 + target_tag=00-delorean-rdo-management + date +%s.%N + output '00-delorean-rdo-management completed' ++ date + echo dib-run-parts Wed Sep 16 21:08:52 UTC 2015 00-delorean-rdo-management completed dib-run-parts Wed Sep 16 21:08:52 UTC 2015 00-delorean-rdo-management completed + for target in '$targets' + output 'Running /tmp/in_target.d/pre-install.d/00-enable-cr-repo' ++ date + echo dib-run-parts Wed Sep 16 21:08:52 UTC 2015 Running /tmp/in_target.d/pre-install.d/00-enable-cr-repo dib-run-parts Wed Sep 16 21:08:52 UTC 2015 Running /tmp/in_target.d/pre-install.d/00-enable-cr-repo + target_tag=00-enable-cr-repo + date +%s.%N + /tmp/in_target.d/pre-install.d/00-enable-cr-repo + set -o pipefail + yum -y update Loaded plugins: fastestmirror File contains no section headers. file: file:///etc/yum.repos.d/delorean-rdo-management.repo, line: 1 '\n' Regards. Eduardo Gonzalez -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Thu Sep 17 12:00:24 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 17 Sep 2015 14:00:24 +0200 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: <20150917114959.GA26714@tesla.redhat.com> References: <55F95D1D.10308@redhat.com> <55F9CA51.6090307@redhat.com> <20150917094913.GA7389@tesla.redhat.com> <20150917114959.GA26714@tesla.redhat.com> Message-ID: > Nice. On a related note, I see that Travis allows IRC notification[1]. > Probably we can enable that too. [I learnt about it from upstream QEMU, > they use it on OFTC.] > > [1] http://docs.travis-ci.com/user/notifications/#IRC-notification Thanks for the link, I've copied travis.yml from openstack-puppet-modules to rdoinfo[1] but IRC notifications were not showing up on #rdo because of, now I see, channel configuration. Rich, can we add more ops for #rdo to have all-timezones coverage? Cheers, Alan [1] https://github.com/redhat-openstack/rdoinfo/blob/master/.travis.yml From bderzhavets at hotmail.com Thu Sep 17 11:55:18 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 17 Sep 2015 07:55:18 -0400 Subject: [Rdo-list] RE(2): RDO Test Day, Sep 23-24 In-Reply-To: <55F95D1D.10308@redhat.com> References: <55F95D1D.10308@redhat.com> Message-ID: > To: rdo-list at redhat.com > From: rbowen at redhat.com > Date: Wed, 16 Sep 2015 08:14:21 -0400 > Subject: [Rdo-list] RDO Test Day, Sep 23-24 > > We are planning to do a test day of Liberty RDO next week, September > 23rd and 24th. 
The test day notes are shaping up at > https://www.rdoproject.org/RDO_test_day_Liberty and should be fleshed Everything works fine without enabling Kilo repo (no errors during packstack run) [root at CentOS71SRV ~(keystone_admin)]# nova-manage --version No handlers could be found for logger "oslo_config.cfg" 12.0.0 CirrOS VM may be booted and is ping able and and available via floating IP. In/outbound connectivity works fine. However:- 1) [root at CentOS71SRV ~(keystone_admin)]# nova keypair-add oskeydev > oskeydev.pem ERROR (ConnectionRefused): Unable to establish connection to http://192.168.1.72:8774/v2/5508bdecf7134035814411a1598b66a6/os-keypairs [root at CentOS71SRV ~(keystone_admin)]# keystone endpoint-list | grep 8774 6d2165b5d4ea4014a62ac5cae509ee58 | RegionOne | http://192.168.1.72:8774/v2/%(tenant_id)s | http://192.168.1.72:8774/v2/%(tenant_id)s | http://192.168.1.72:8774/v2/%(tenant_id)s | 93024d9adb284986aeb52c7735841b42 | | e3b2eb1b20594b4ea716db58f7669d50 | RegionOne | http://127.0.0.1:8774/v3 | http://127.0.0.1:8774/v3 | http://127.0.0.1:8774/v3 | 02c1bb5f320040db832a255f80e89c35 | 1. Actually, it's impossible to create or import ssh keypair 2. It's impossible logout from admin 3. Tabs Log,Console stay empty regardless CirrOS VM is already running Thanks. Boris > out more by the end of the week. As usual, we'll coordinate on #rdo (on > Freenode) for questions and discussion. > > The packages that we will be testing have been through CI, so we should > be able to have a fairly successful day. > > If you have things that you'd like to see tested, please add these to > the test case matrix. > > We're aware that a number of people will be out during this time, but > it's been difficult to find days that work for everyone. So we're > planning to have another test day in the weeks to come. Announcement to > come soon. > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Thu Sep 17 12:19:08 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 17 Sep 2015 08:19:08 -0400 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: References: <55F95D1D.10308@redhat.com> <55F9CA51.6090307@redhat.com> <20150917094913.GA7389@tesla.redhat.com> <20150917114959.GA26714@tesla.redhat.com> Message-ID: <55FAAFBC.8050104@redhat.com> On 09/17/2015 08:00 AM, Alan Pevec wrote: >> Nice. On a related note, I see that Travis allows IRC notification[1]. >> Probably we can enable that too. [I learnt about it from upstream QEMU, >> they use it on OFTC.] >> >> [1] http://docs.travis-ci.com/user/notifications/#IRC-notification > > Thanks for the link, I've copied travis.yml from > openstack-puppet-modules to rdoinfo[1] but IRC notifications were not > showing up on #rdo because of, now I see, channel configuration. > > Rich, can we add more ops for #rdo to have all-timezones coverage? I'm not channel owner, but I'll try to figure out who is, and make that happen. 
> > Cheers, > Alan > > [1] https://github.com/redhat-openstack/rdoinfo/blob/master/.travis.yml > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Thu Sep 17 12:20:01 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 17 Sep 2015 08:20:01 -0400 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: <20150917094913.GA7389@tesla.redhat.com> References: <55F95D1D.10308@redhat.com> <55F9CA51.6090307@redhat.com> <20150917094913.GA7389@tesla.redhat.com> Message-ID: <55FAAFF1.7000100@redhat.com> On 09/17/2015 05:49 AM, Kashyap Chamarthy wrote: > On Wed, Sep 16, 2015 at 04:00:17PM -0400, Rich Bowen wrote: > > [. . .] > >> Finally, a workarounds page is at >> http://beta.rdoproject.org/testday/workarounds-liberty-01/ to document >> things that are necessary to work around known problems. >> >> To update any of the above, you can send pull requests to >> https://github.com/redhat-openstack/website > > I wonder if Workarounds page is one place where we can use a > Wiki/Etherpad so that pople can document as issues arise. With pull > requests, you'd have an unknown delay introduced as someone has to > process those & merge them. > Yes, good idea. We should either replace with etherpad during the event, or have both - one for scratch and one for "official" or something? -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Thu Sep 17 13:03:02 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 17 Sep 2015 09:03:02 -0400 Subject: [Rdo-list] RDO Test Day, Sep 23-24 In-Reply-To: References: <55F95D1D.10308@redhat.com> <55F9CA51.6090307@redhat.com> <20150917094913.GA7389@tesla.redhat.com> <20150917114959.GA26714@tesla.redhat.com> Message-ID: <55FABA06.8030805@redhat.com> On 09/17/2015 08:00 AM, Alan Pevec wrote: > Rich, can we add more ops for #rdo to have all-timezones coverage? I've added Alan and Haikel to the list. Also on the list already are dneary, pmyers, jbrooks, markmc, radez, mikeburns, kashyap. Who else should be on the list? --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From javier.pena at redhat.com Thu Sep 17 15:04:03 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 17 Sep 2015 11:04:03 -0400 (EDT) Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance - September 14-15 In-Reply-To: References: <1336971049.45447727.1441900254860.JavaMail.zimbra@redhat.com> <1466705828.49951251.1442251391503.JavaMail.zimbra@redhat.com> <1235487111.51892256.1442410628155.JavaMail.zimbra@redhat.com> Message-ID: <1793359761.52785686.1442502243943.JavaMail.zimbra@redhat.com> ----- Original Message ----- > >> Everything seems to be ok, but please let us know if you find any issue. > > All was fine until ~1h ago when folks started reporting 403 from > trunk.rdoproject.org > > I see console log shows XFS errors and I cannot ssh into it, so I've > opened the ticket for OS1 Public support. Hi all, After checking the file system status and making sure everything is ok, the Delorean instance is now back to normal, and processing packages. 
Regards, Javier From rbowen at redhat.com Thu Sep 17 19:15:28 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 17 Sep 2015 15:15:28 -0400 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> Message-ID: <55FB1150.60900@redhat.com> > > That?s actually a good question. I suspect that delorean-deps.repo may replace Kilo providing needed deps missing in CentOS, but since the link returns 404, I can?t be sure. > > Rich, why do I get 404 for the links above? Hopefully that was due to the typo in the DNS zone, which is now fixed. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From bderzhavets at hotmail.com Thu Sep 17 19:54:15 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 17 Sep 2015 15:54:15 -0400 Subject: [Rdo-list] RDO Liberty and Fedora In-Reply-To: <55FB1150.60900@redhat.com> References: , , , , <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com>, <55FB1150.60900@redhat.com> Message-ID: > Subject: Re: [Rdo-list] RDO Liberty and Fedora > To: ihrachys at redhat.com; bderzhavets at hotmail.com > CC: rdo-list at redhat.com > From: rbowen at redhat.com > Date: Thu, 17 Sep 2015 15:15:28 -0400 > > > > > > That?s actually a good question. I suspect that delorean-deps.repo may replace Kilo providing needed deps missing in CentOS, but since the link returns 404, I can?t be sure. > > > > Rich, why do I get 404 for the links above? > > Hopefully that was due to the typo in the DNS zone, which is now fixed. Confirmed, problem is fixed. I got some other issues attempting RDO Liberty AIO set up on CentOS 7.1 https://www.redhat.com/archives/rdo-list/2015-September/msg00147.html Boris. > > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Thu Sep 17 20:30:03 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 17 Sep 2015 16:30:03 -0400 Subject: [Rdo-list] Question about Restart=always in systemd In-Reply-To: References: <316691786CCAEE44AE90482264E3AB8213A9812C@xmb-rcd-x08.cisco.com> <936474675.323251.1440168150920.JavaMail.zimbra@speichert.pl> <20150904151716.GA5011@mattdm.org> <20150911153214.GA21461@mattdm.org> <20150915163756.GD14112@redhat.com> Message-ID: <20150917203003.GB316@redhat.com> On Tue, Sep 15, 2015 at 08:33:16PM +0200, Ha?kel wrote: > > Or if that's too crazy, just add a check to the fedora-review tool > > that ensures unit files have standard settings, such as the Restart= > > setting. 
The review could flag things like: > > > > - Missing Restart behavior > > - Missing Description As a proof-of-concept for this sort of idea: https://github.com/larsks/audit-unit-files Which, when run for example against the delorean nova repository: git clone https://github.com/openstack-packages/nova.git find nova -name '*.service' | xargs audit-unit-files -r required.service -a Yields: ERROR:__main__:file nova/openstack-nova-serialproxy.service is missing required option Service/restart ERROR:__main__:file nova/openstack-nova-xvpvncproxy.service is missing required option Service/restart ERROR:__main__:file nova/openstack-nova-novncproxy.service is missing required option Service/restart ERROR:__main__:file nova/openstack-nova-spicehtml5proxy.service is missing required option Service/restart (Run with '-v' to report 'file is okay' for files with no errors) -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From emilien at redhat.com Fri Sep 18 01:18:44 2015 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 17 Sep 2015 21:18:44 -0400 Subject: [Rdo-list] keystoneauth1 seems broken In-Reply-To: <55FB1150.60900@redhat.com> References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> <55FB1150.60900@redhat.com> Message-ID: <55FB6674.8020004@redhat.com> TL;DR: keystoneauth1 package seems broken, so not installed, Puppet OpenStack CI broken, because keystone can't work. http://logs.openstack.org/07/224107/1/check/gate-puppet-ceilometer-puppet-beaker-rspec-dsvm-centos7/6f1aaa1/console.html#_2015-09-17_14_59_47_842 In case you missed the information, probably a recent commit in https://github.com/openstack-packages/python-keystoneauth1 Thanks, -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From ggillies at redhat.com Fri Sep 18 03:14:57 2015 From: ggillies at redhat.com (Graeme Gillies) Date: Fri, 18 Sep 2015 13:14:57 +1000 Subject: [Rdo-list] Liberty RDO Manager Failing to install Message-ID: <55FB81B1.9060007@redhat.com> Hi, I'm attempting to get Liberty RDO Manager up and running. I've manually compiled python-keystoneauth1 using the spec from the delorean repos and installed that, and have installed python-openstackclient. Now when I attempt to run openstack undercloud install I get the following error $ openstack undercloud install The plugin osc_password could not be found Is another packaging missing, or is this more likely a missing dependency that should be getting pulled in by python-openstackclient? Regards, Graeme -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From dms at redhat.com Fri Sep 18 04:37:09 2015 From: dms at redhat.com (David Moreau Simard) Date: Fri, 18 Sep 2015 00:37:09 -0400 Subject: [Rdo-list] keystoneauth1 seems broken In-Reply-To: <55FB6674.8020004@redhat.com> References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> <55FB1150.60900@redhat.com> <55FB6674.8020004@redhat.com> Message-ID: Emilien (and Graeme), Although I do not have a solution for you yet, I confirm that something's broken and I have in fact bumped into this today as well. 
I raised the issue on IRC and Alan Pevec pointed me to this Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1241812 In fact, what happened, is that this commit merged today: https://review.openstack.org/#/c/191003/3 I'm really not sure what that commit is about, what they're trying to accomplish or what is the use case for it. We will try to get a better understanding and fix tomorrow for sure. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On Thu, Sep 17, 2015 at 9:18 PM, Emilien Macchi wrote: > TL;DR: keystoneauth1 package seems broken, so not installed, Puppet > OpenStack CI broken, because keystone can't work. > > > http://logs.openstack.org/07/224107/1/check/gate-puppet-ceilometer-puppet-beaker-rspec-dsvm-centos7/6f1aaa1/console.html#_2015-09-17_14_59_47_842 > > In case you missed the information, probably a recent commit in > https://github.com/openstack-packages/python-keystoneauth1 > > > Thanks, > -- > Emilien Macchi > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Fri Sep 18 07:29:16 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 18 Sep 2015 09:29:16 +0200 Subject: [Rdo-list] Liberty RDO Manager Failing to install In-Reply-To: <55FB81B1.9060007@redhat.com> References: <55FB81B1.9060007@redhat.com> Message-ID: > I've manually compiled python-keystoneauth1 using the spec from the > delorean repos and installed that, and have installed > python-openstackclient. Now when I attempt to run > > openstack undercloud install > > I get the following error > > $ openstack undercloud install > The plugin osc_password could not be found > > Is another packaging missing, or is this more likely a missing > dependency that should be getting pulled in by python-openstackclient? I got the same error last night and that's why I didn't merge keystoneauth1 into rdoinfo yet: https://github.com/redhat-openstack/rdoinfo/pull/92 It was actually os-client-config which was producing backtrace w/o keystoneauth1, now debugging further... Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Fri Sep 18 08:08:59 2015 From: javier.pena at redhat.com (Javier Pena) Date: Fri, 18 Sep 2015 04:08:59 -0400 (EDT) Subject: [Rdo-list] RE(2): RDO Test Day, Sep 23-24 In-Reply-To: References: <55F95D1D.10308@redhat.com> Message-ID: <779206335.53467904.1442563739403.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > To: rdo-list at redhat.com > > From: rbowen at redhat.com > > Date: Wed, 16 Sep 2015 08:14:21 -0400 > > Subject: [Rdo-list] RDO Test Day, Sep 23-24 > > > > We are planning to do a test day of Liberty RDO next week, September > > 23rd and 24th. The test day notes are shaping up at > > https://www.rdoproject.org/RDO_test_day_Liberty and should be fleshed > Everything works fine without enabling Kilo repo (no errors during packstack > run) > [root at CentOS71SRV ~(keystone_admin)]# nova-manage --version > No handlers could be found for logger "oslo_config.cfg" > 12.0.0 > CirrOS VM may be booted and is ping able and and available via floating IP. > In/outbound connectivity works fine. 
> However:- > 1) [root at CentOS71SRV ~(keystone_admin)]# nova keypair-add oskeydev > > oskeydev.pem > ERROR (ConnectionRefused): Unable to establish connection to > http://192.168.1.72:8774/v2/5508bdecf7134035814411a1598b66a6/os-keypairs > [root at CentOS71SRV ~(keystone_admin)]# keystone endpoint-list | grep 8774 > 6d2165b5d4ea4014a62ac5cae509ee58 | RegionOne | > http://192.168.1.72:8774/v2/%(tenant_id)s | > http://192.168.1.72:8774/v2/%(tenant_id)s | > http://192.168.1.72:8774/v2/%(tenant_id)s | 93024d9adb284986aeb52c7735841b42 > | > | e3b2eb1b20594b4ea716db58f7669d50 | RegionOne | http://127.0.0.1:8774/v3 | > | http://127.0.0.1:8774/v3 | http://127.0.0.1:8774/v3 | > | 02c1bb5f320040db832a255f80e89c35 | > 1. Actually, it's impossible to create or import ssh keypair Hi Boris, You are probably hitting the same issue being tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1249685 . While this is fixed, please try setting SELinux to Permissive and check if that help. Regards, Javier > 2. It's impossible logout from admin > 3. Tabs Log,Console stay empty regardless CirrOS VM is already running > Thanks. > Boris > > out more by the end of the week. As usual, we'll coordinate on #rdo (on > > Freenode) for questions and discussion. > > > > The packages that we will be testing have been through CI, so we should > > be able to have a fairly successful day. > > > > If you have things that you'd like to see tested, please add these to > > the test case matrix. > > > > We're aware that a number of people will be out during this time, but > > it's been difficult to find days that work for everyone. So we're > > planning to have another test day in the weeks to come. Announcement to > > come soon. > > > > -- > > Rich Bowen - rbowen at redhat.com > > OpenStack Community Liaison > > http://rdoproject.org/ > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Fri Sep 18 08:40:54 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 18 Sep 2015 04:40:54 -0400 Subject: [Rdo-list] RE(2): RDO Test Day, Sep 23-24 In-Reply-To: <779206335.53467904.1442563739403.JavaMail.zimbra@redhat.com> References: <55F95D1D.10308@redhat.com> , <779206335.53467904.1442563739403.JavaMail.zimbra@redhat.com> Message-ID: Date: Fri, 18 Sep 2015 04:08:59 -0400 From: javier.pena at redhat.com To: bderzhavets at hotmail.com CC: rbowen at redhat.com; rdo-list at redhat.com Subject: Re: [Rdo-list] RE(2): RDO Test Day, Sep 23-24 > To: rdo-list at redhat.com > From: rbowen at redhat.com > Date: Wed, 16 Sep 2015 08:14:21 -0400 > Subject: [Rdo-list] RDO Test Day, Sep 23-24 > > We are planning to do a test day of Liberty RDO next week, September > 23rd and 24th. The test day notes are shaping up at > https://www.rdoproject.org/RDO_test_day_Liberty and should be fleshed Everything works fine without enabling Kilo repo (no errors during packstack run) [root at CentOS71SRV ~(keystone_admin)]# nova-manage --version No handlers could be found for logger "oslo_config.cfg" 12.0.0 CirrOS VM may be booted and is ping able and and available via floating IP. 
In/outbound connectivity works fine. However:- 1) [root at CentOS71SRV ~(keystone_admin)]# nova keypair-add oskeydev > oskeydev.pem ERROR (ConnectionRefused): Unable to establish connection to http://192.168.1.72:8774/v2/5508bdecf7134035814411a1598b66a6/os-keypairs [root at CentOS71SRV ~(keystone_admin)]# keystone endpoint-list | grep 8774 6d2165b5d4ea4014a62ac5cae509ee58 | RegionOne | http://192.168.1.72:8774/v2/%(tenant_id)s | http://192.168.1.72:8774/v2/%(tenant_id)s | http://192.168.1.72:8774/v2/%(tenant_id)s | 93024d9adb284986aeb52c7735841b42 | | e3b2eb1b20594b4ea716db58f7669d50 | RegionOne | http://127.0.0.1:8774/v3 | http://127.0.0.1:8774/v3 | http://127.0.0.1:8774/v3 | 02c1bb5f320040db832a255f80e89c35 | 1. Actually, it's impossible to create or import ssh keypair Hi Boris, You are probably hitting the same issue being tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1249685 . While this is fixed, please try setting SELinux to Permissive and check if that help. ---------------------------------------------------------------------------------------------------------------------------- Thank you. Setting SELINUX to permissive resolve this issue. [root at CentOS71SRV ~(keystone_demo)]# nova keypair-add oskeydev > oskeydev.pem now works via CLI. Boris ----------------------------------------------------------------------------------------------------------------------------- Regards, Javier 2. It's impossible logout from admin 3. Tabs Log,Console stay empty regardless CirrOS VM is already running Thanks. Boris > out more by the end of the week. As usual, we'll coordinate on #rdo (on > Freenode) for questions and discussion. > > The packages that we will be testing have been through CI, so we should > be able to have a fairly successful day. > > If you have things that you'd like to see tested, please add these to > the test case matrix. > > We're aware that a number of people will be out during this time, but > it's been difficult to find days that work for everyone. So we're > planning to have another test day in the weeks to come. Announcement to > come soon. > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://rdoproject.org/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Fri Sep 18 08:54:30 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 18 Sep 2015 04:54:30 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? Message-ID: For the past 5 days i have been attempting to install RDOmanager. i have been getting errors. and have been attempting to side step them. a) followed the documentation, got errors b) ignored delorean repo, unable to complete installation c) used the alternative trunk "before CI run has passed" delorean repo and got errors. I am currently doing an internal POC and just want RDOmanager runnning. how do i get the stable packages that are guaranteed to install?? for those who will ask what is the error? i am providing it below, pls remember that i have limited time left (deadline=saturday night) to show a working rdomanager installation. 
if fixing the error is fastest, great, if directing me to documentation on using a stable repo. great. ps. i have a working rdo install _at_home_ about a week old, and based on this success, i was doing this internal POC. _______ Dependency Process ending Depsolve time: 0.720 Error: Package: python-futurist-0.5.1-dev8.el7.centos.noarch (delorean) Requires: python-contextlib2 >= 0.4.0 Error: Package: openstack-tuskar-2013.2-dev8.el7.centos.noarch (delorean) Requires: python-flask-babel You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest INFO: 2015-09-18 08:26:14,700 -- ############### End stdout/stderr logging ############### ERROR: 2015-09-18 08:26:14,701 -- Hook FAILED. ERROR: 2015-09-18 08:26:14,701 -- Failed running command ['dib-run-parts', u'/tmp/tmpqnbtlP/install.d'] File "/usr/lib/python2.7/site-packages/instack/main.py", line 163, in main em.run() File "/usr/lib/python2.7/site-packages/instack/runner.py", line 79, in run self.run_hook(hook) File "/usr/lib/python2.7/site-packages/instack/runner.py", line 174, in run_hook raise Exception("Failed running command %s" % command) ERROR: 2015-09-18 08:26:14,701 -- None Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 544, in install :param instack_root: The path containing the instack-undercloud elements File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 476, in _run_instack return instack_env File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 314, in _run_live_command stderr=subprocess.STDOUT) RuntimeError: instack failed. See log for details. ERROR: openstack Command 'instack-install-undercloud' returned non-zero exit status 1 [ https://mail.egidegypt.com/owa/attachment.ashx?id=RgAAAACtgCAgsTXwQaoSKdlfh5krBwDolgrUG0tcQZzBLjGcH%2bLlAdJ7t9L0AADolgrUG0tcQZzBLjGcH%2bLlAdJ7t9UxAAAJ&attcnt=1&attid0=EAAN%2bSWesCiyQZRR5qEVlE9J]Best Regards, -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Fri Sep 18 09:27:25 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 18 Sep 2015 11:27:25 +0200 Subject: [Rdo-list] keystoneauth1 seems broken In-Reply-To: References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> <55FB1150.60900@redhat.com> <55FB6674.8020004@redhat.com> Message-ID: > I raised the issue on IRC and Alan Pevec pointed me to this Bugzilla: > https://bugzilla.redhat.com/show_bug.cgi?id=1241812 > > In fact, what happened, is that this commit merged today: > https://review.openstack.org/#/c/191003/3 > I'm really not sure what that commit is about, what they're trying to > accomplish or what is the use case for it. That change in keystoneauth was merged before, I just commented on it yesterday. https://review.openstack.org/221125 in os-client-config is where keystoneauth1 dependency was added. 
Here's full backtrace for "The plugin osc_password could not be found" error: # openstack --debug The plugin osc_password could not be found Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 108, in run ret_val = super(OpenStackShell, self).run(argv) File "/usr/lib/python2.7/site-packages/cliff/app.py", line 213, in run self.initialize_app(remainder) File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 237, in initialize_app argparse=self.options, File "/usr/lib/python2.7/site-packages/os_client_config/config.py", line 498, in get_one_cloud loader = self._get_auth_loader(config) File "/usr/lib/python2.7/site-packages/os_client_config/config.py", line 418, in _get_auth_loader return loading.get_plugin_loader(config['auth_type']) File "/usr/lib/python2.7/site-packages/keystoneauth1/loading/base.py", line 74, in get_plugin_loader raise exceptions.NoMatchingPlugin(name) NoMatchingPlugin: The plugin osc_password could not be found END return value: 1 Relevant package versions: python-openstackclient-1.6.1-dev46.el7.centos.noarch os-client-config-1.7.1-0.99.20150917.1205git.el7.centos.noarch and local build from master: https://apevec.fedorapeople.org/openstack/python-keystoneauth1-1.0.1-0.1.dev13.el7.noarch.rpm osc_password is defined in /usr/lib/python2.7/site-packages/python_openstackclient-1.6.1.dev46-py2.7.egg-info/entry_points.txt: ... [keystoneclient.auth.plugin] osc_password = openstackclient.api.auth_plugin:OSCGenericPassword token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint ... so I guess something went wrong between osc and occ, downgrading to python-keystoneauth1-1.0.0 doesn't help. Anyone who understands osc auth plugins please help! Cheers, Alan From cbrown2 at ocf.co.uk Fri Sep 18 09:29:05 2015 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Fri, 18 Sep 2015 10:29:05 +0100 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: <1442568545.5876.2.camel@ocf-laptop> Hi Mohammed, Yes, I couldn't agree more. I am trying to do exactly the same thing and have been unable to run up a PoC environment. I think there has been some infra downtime and the CI testing isn't working as it should. I'd recommend seeing if you can run a trial of OSP 7's Director in the interim? Regards Chris On Fri, 2015-09-18 at 09:54 +0100, Mohammed Arafa wrote: > For the past 5 days i have been attempting to install RDOmanager. i > have been getting errors. and have been attempting to side step them. > > a) followed the documentation, got errors > > b) ignored delorean repo, unable to complete installation > > c) used the alternative trunk "before CI run has passed" delorean repo > and got errors. > > I am currently doing an internal POC and just want RDOmanager > runnning. how do i get the stable packages that are guaranteed to > install?? > > for those who will ask what is the error? i am providing it below, pls > remember that i have limited time left (deadline=saturday night) to > show a working rdomanager installation. if fixing the error is > fastest, great, if directing me to documentation on using a stable > repo. great. > > ps. i have a working rdo install _at_home_ about a week old, and based > on this success, i was doing this internal POC. 
> > _______ > > Dependency Process ending > Depsolve time: 0.720 > Error: Package: python-futurist-0.5.1-dev8.el7.centos.noarch > (delorean) > Requires: python-contextlib2 >= 0.4.0 > Error: Package: openstack-tuskar-2013.2-dev8.el7.centos.noarch > (delorean) > Requires: python-flask-babel > You could try using --skip-broken to work around the problem > You could try running: rpm -Va --nofiles --nodigest > INFO: 2015-09-18 08:26:14,700 -- ############### End stdout/stderr > logging ############### > ERROR: 2015-09-18 08:26:14,701 -- Hook FAILED. > ERROR: 2015-09-18 08:26:14,701 -- Failed running command > ['dib-run-parts', u'/tmp/tmpqnbtlP/install.d'] > File "/usr/lib/python2.7/site-packages/instack/main.py", line 163, > in main > em.run() > File "/usr/lib/python2.7/site-packages/instack/runner.py", line 79, > in run > self.run_hook(hook) > File "/usr/lib/python2.7/site-packages/instack/runner.py", line 174, > in run_hook > raise Exception("Failed running command %s" % command) > ERROR: 2015-09-18 08:26:14,701 -- None > Traceback (most recent call last): > File "", line 1, in > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 544, in install > :param instack_root: The path containing the instack-undercloud > elements > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 476, in _run_instack > return instack_env > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > line 314, in _run_live_command > stderr=subprocess.STDOUT) > RuntimeError: instack failed. See log for details. > ERROR: openstack Command 'instack-install-undercloud' returned > non-zero exit status 1 > [https://mail.egidegypt.com/owa/attachment.ashx?id=RgAAAACtgCAgsTXwQaoSKdlfh5krBwDolgrUG0tcQZzBLjGcH%2bLlAdJ7t9L0AADolgrUG0tcQZzBLjGcH%2bLlAdJ7t9UxAAAJ&attcnt=1&attid0=EAAN%2bSWesCiyQZRR5qEVlE9J]Best Regards, > > -- > > > > > > > > > > 805010942448935 > > > GR750055912MA > > > Link to me on LinkedIn > > -- Regards, Christopher Brown Openstack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner. OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From jamielennox at redhat.com Fri Sep 18 11:38:59 2015 From: jamielennox at redhat.com (Jamie Lennox) Date: Fri, 18 Sep 2015 21:38:59 +1000 Subject: [Rdo-list] keystoneauth1 seems broken In-Reply-To: References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> <55FB1150.60900@redhat.com> <55FB6674.8020004@redhat.com> Message-ID: So what happened here is that the os-client-config package recently added a dependency on keystoneauth. This wasn't supposed to have happened until the start of the Mitaka cycle but got ahead of itself. 
Due to bad testing upstream there was a problem with how openstackclient works with os-client-config that wasn't noticed until the new os-client-config was released onto pypi. https://bugs.launchpad.net/os-client-config/+bug/1496689 is the upstream bug. I was trying to argue today that we should just roll back the os-client-config dependency on keystoneauth until after liberty is released, however I'm not sure which way that will go. For now it would be best to skip the most recent version of os-client-config. On 18 September 2015 at 19:27, Alan Pevec wrote: >> I raised the issue on IRC and Alan Pevec pointed me to this Bugzilla: >> https://bugzilla.redhat.com/show_bug.cgi?id=1241812 >> >> In fact, what happened, is that this commit merged today: >> https://review.openstack.org/#/c/191003/3 >> I'm really not sure what that commit is about, what they're trying to >> accomplish or what is the use case for it. > > That change in keystoneauth was merged before, I just commented on it yesterday. > https://review.openstack.org/221125 in os-client-config is where > keystoneauth1 dependency was added. > > Here's full backtrace for "The plugin osc_password could not be found" error: > # openstack --debug > The plugin osc_password could not be found > > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", > line 108, in run > ret_val = super(OpenStackShell, self).run(argv) > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 213, in run > self.initialize_app(remainder) > File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", > line 237, in initialize_app > argparse=self.options, > File "/usr/lib/python2.7/site-packages/os_client_config/config.py", > line 498, in get_one_cloud > loader = self._get_auth_loader(config) > File "/usr/lib/python2.7/site-packages/os_client_config/config.py", > line 418, in _get_auth_loader > return loading.get_plugin_loader(config['auth_type']) > File "/usr/lib/python2.7/site-packages/keystoneauth1/loading/base.py", > line 74, in get_plugin_loader > raise exceptions.NoMatchingPlugin(name) > NoMatchingPlugin: The plugin osc_password could not be found > > END return value: 1 > > Relevant package versions: > python-openstackclient-1.6.1-dev46.el7.centos.noarch > os-client-config-1.7.1-0.99.20150917.1205git.el7.centos.noarch > and local build from master: > https://apevec.fedorapeople.org/openstack/python-keystoneauth1-1.0.1-0.1.dev13.el7.noarch.rpm > > osc_password is defined in > /usr/lib/python2.7/site-packages/python_openstackclient-1.6.1.dev46-py2.7.egg-info/entry_points.txt: > ... > [keystoneclient.auth.plugin] > osc_password = openstackclient.api.auth_plugin:OSCGenericPassword > token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint > ... > so I guess something went wrong between osc and occ, downgrading to > python-keystoneauth1-1.0.0 doesn't help. > Anyone who understands osc auth plugins please help! > > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ibravo at ltgfederal.com Fri Sep 18 11:39:28 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Fri, 18 Sep 2015 07:39:28 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? 
Message-ID: Mohamed, In addition to delorean and delorean-dep repo, you need to install EPEL sudo yum install epel-release This will get you through the install of the package Also, if you are installing with Liberty, the name of the package that you need to install changed to: sudo yum install python-tripleoclient I am also trying to install RDO Manager in Centos and was successful about a week ago and now can't get it to build again. Argh! __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Fri Sep 18 11:49:02 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 18 Sep 2015 13:49:02 +0200 Subject: [Rdo-list] keystoneauth1 seems broken In-Reply-To: References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> <55FB1150.60900@redhat.com> <55FB6674.8020004@redhat.com> Message-ID: 2015-09-18 13:38 GMT+02:00 Jamie Lennox : > For now it would be best to skip the most recent version of os-client-config. Thanks Jamie for the late night update! As a temp measure I'm forking os-client-config and will restore it to < 1.7.0 Cheers, Alan From dtantsur at redhat.com Fri Sep 18 11:57:42 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 18 Sep 2015 13:57:42 +0200 Subject: [Rdo-list] Liberty RDO Manager Failing to install In-Reply-To: References: <55FB81B1.9060007@redhat.com> Message-ID: <55FBFC36.2050503@redhat.com> On 09/18/2015 09:29 AM, Alan Pevec wrote: > > I've manually compiled python-keystoneauth1 using the spec from the > > delorean repos and installed that, and have installed > > python-openstackclient. Now when I attempt to run > > > > openstack undercloud install > > > > I get the following error > > > > $ openstack undercloud install > > The plugin osc_password could not be found > > > > Is another packaging missing, or is this more likely a missing > > dependency that should be getting pulled in by python-openstackclient? > > I got the same error last night and that's why I didn't merge > keystoneauth1 into rdoinfo yet: > > https://github.com/redhat-openstack/rdoinfo/pull/92 > > It was actually os-client-config which was producing backtrace w/o > keystoneauth1, now debugging further... I've just hit this upstream in a virtualenv, so it seems to be an upstream bug: https://bugs.launchpad.net/python-openstackclient/+bug/1496689 > > Alan > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com >
From hguemar at fedoraproject.org Fri Sep 18 12:07:28 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Fri, 18 Sep 2015 14:07:28 +0200 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: 2015-09-18 13:39 GMT+02:00 Ignacio Bravo : > Mohamed, > > In addition to delorean and delorean-dep repo, you need to install EPEL > > sudo yum install epel-release > > This will get you through the install of the package You shouldn't need EPEL with these delorean repos. Could you list me the missing packages so we could fix that issue in our repositories? H. > > Also, if you are installing with Liberty, the name of the package that you > need to install changed to: > > sudo yum install python-tripleoclient > > > I am also trying to install RDO Manager in Centos and was successful about a > week ago and now can't get it to build again. Argh!
> > > __ > Ignacio Bravo > LTG Federal, Inc > www.ltgfederal.com > Office: (703) 951-7760 > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dtantsur at redhat.com Fri Sep 18 12:17:38 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 18 Sep 2015 14:17:38 +0200 Subject: [Rdo-list] keystoneauth1 seems broken In-Reply-To: References: <8D737F51-F4F5-446E-90E2-7B7F0313414D@redhat.com> <55FB1150.60900@redhat.com> <55FB6674.8020004@redhat.com> Message-ID: <55FC00E2.2000808@redhat.com> On 09/18/2015 01:49 PM, Alan Pevec wrote: > 2015-09-18 13:38 GMT+02:00 Jamie Lennox : >> For now it would be best to skip the most recent version of os-client-config. > > Thanks Jamie for the late night update! > As a temp measure I'm forking os-client-config and will restore it to < 1.7.0 1.6.0 worked for me locally. > > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mohammed.arafa at gmail.com Fri Sep 18 12:18:53 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 18 Sep 2015 08:18:53 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: of course my installation has changed but If i recall correctly, epel wasnt needed if you followed the doc to the letter (path a from my first email). i did however get dib-run-parts errors, which i found an email stating it was fixed a week ago so i tried path c) "before CI run has passed" of the doc. That one needed epel. again, i cannot provide the name of the python application as i have rolled back to a previous qemu snapshot out of curiousity, is there no way to install stable package? centos, and fedora has *testing.repo so in my mind the delorean equates to the *testing.repo. so is rdo-manager missing a stable repo? On Fri, Sep 18, 2015 at 8:07 AM, Ha?kel wrote: > 2015-09-18 13:39 GMT+02:00 Ignacio Bravo : > > Mohamed, > > > > In addition to delorean and delorean-dep repo, you need to install peel > > > > sudo yum install epel-release > > > > This will get you through the install of the package > > You shouldn't need EPEL with these delorean repos. > Could you list me the missing packages so we could fix that issue in > our repositories? > > H. > > > > > Also, if you are installing with Liberty, the name of the package that > you > > need to install changed to: > > > > sudo yum install python-tripleoclient > > > > > > I am also trying to install RDO Manager in Centos and was successful > about a > > week ago and now can?t get it to build again. Argh! > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > Office: (703) 951-7760 > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Fri Sep 18 12:20:16 2015 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Fri, 18 Sep 2015 14:20:16 +0200 Subject: [Rdo-list] [rdo-manager] is there a stable version? 
In-Reply-To: References: Message-ID: Hi, at a first glance, the missing packages are: python-contextlib2 python-flask-babel Also, at the overcloud image creation i've found a missing repo file: delorean-rdo-management.repo Regards, Eduardo Gonzalez 2015-09-18 14:07 GMT+02:00 Ha?kel : > 2015-09-18 13:39 GMT+02:00 Ignacio Bravo : > > Mohamed, > > > > In addition to delorean and delorean-dep repo, you need to install peel > > > > sudo yum install epel-release > > > > This will get you through the install of the package > > You shouldn't need EPEL with these delorean repos. > Could you list me the missing packages so we could fix that issue in > our repositories? > > H. > > > > > Also, if you are installing with Liberty, the name of the package that > you > > need to install changed to: > > > > sudo yum install python-tripleoclient > > > > > > I am also trying to install RDO Manager in Centos and was successful > about a > > week ago and now can?t get it to build again. Argh! > > > > > > __ > > Ignacio Bravo > > LTG Federal, Inc > > www.ltgfederal.com > > Office: (703) 951-7760 > > > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Fri Sep 18 12:34:57 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Fri, 18 Sep 2015 14:34:57 +0200 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: 2015-09-18 14:20 GMT+02:00 Eduardo Gonzalez : > Hi, at a first glance, the missing packages are: > > python-contextlib2 > python-flask-babel > These should be provided by repositories in delorean-deps.repo (and are enabled by default) As shown here, they're both available. https://repos.fedorapeople.org/repos/openstack/cbs/cloud7-openstack-common-testing/x86_64/os/Packages/ It could be a yum cache refreshment issue. > Also, at the overcloud image creation i've found a missing repo file: > delorean-rdo-management.repo > > Regards, > Eduardo Gonzalez > ack From apevec at gmail.com Fri Sep 18 13:24:53 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 18 Sep 2015 15:24:53 +0200 Subject: [Rdo-list] keystoneauth1 seems broken Message-ID: >> As a temp measure I'm forking os-client-config and will restore it to < >> 1.7.0 > 1.6.0 worked for me locally. So does https://apevec.fedorapeople.org/openstack/python-os-client-config-1.6.4-0.99.20150918.1310git.el7.centos.noarch.rpm I've pinned it to 1.6.4 last know good before 1.7.0: https://github.com/redhat-openstack/rdoinfo/pull/93 NB there is Provides: os-client-config for compatibility with previous Delorean Trunk package you need to yum remove os-client-config before installing new python-os-client-config Cheers, Alan From dabarren at gmail.com Fri Sep 18 10:24:47 2015 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Fri, 18 Sep 2015 12:24:47 +0200 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: <1442568545.5876.2.camel@ocf-laptop> References: <1442568545.5876.2.camel@ocf-laptop> Message-ID: Hi, This week i've been making a PoC over rdo-manager. 
I realized that some of the needed packages are in the EPEL repository. So i've followed the next steps to successfully install the undercloud:
1- Create stack user
2- Assign hostname
3- Enable/Install repos
 - Kilo (the same as the guide suggests)
 - RDO Trunk (the same as the guide suggests)
 - EPEL (sudo yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm)
4- Install yum plugin priorities
5- Install rdomanager oscplugin
6- Copy and modify the configuration file as needed
7- Export environment variables
8- Install undercloud
Following these steps I've been able to install an undercloud. The only difference between this and the official guide is the need for the EPEL repository. (Already suggested a change in the docs) Now, i'm facing another issue when creating base images for the overcloud. It fails due to a missing repository file. Regards, Eduardo Gonzalez 2015-09-18 11:29 GMT+02:00 Christopher Brown : > Hi Mohammed, > > Yes, I couldn't agree more. > > I am trying to do exactly the same thing and have been unable to run up > a PoC environment. > > I think there has been some infra downtime and the CI testing isn't > working as it should. > > I'd recommend seeing if you can run a trial of OSP 7's Director in the > interim? > > Regards > Chris > > On Fri, 2015-09-18 at 09:54 +0100, Mohammed Arafa wrote: > > For the past 5 days i have been attempting to install RDOmanager. i > > have been getting errors. and have been attempting to side step them. > > > > a) followed the documentation, got errors > > > > b) ignored delorean repo, unable to complete installation > > > > c) used the alternative trunk "before CI run has passed" delorean repo > > and got errors. > > > > I am currently doing an internal POC and just want RDOmanager > > runnning. how do i get the stable packages that are guaranteed to > > install?? > > > > for those who will ask what is the error? i am providing it below, pls > > remember that i have limited time left (deadline=saturday night) to > > show a working rdomanager installation. if fixing the error is > > fastest, great, if directing me to documentation on using a stable > > repo. great. > > > > ps. i have a working rdo install _at_home_ about a week old, and based > > on this success, i was doing this internal POC.
> > ERROR: 2015-09-18 08:26:14,701 -- Failed running command > > ['dib-run-parts', u'/tmp/tmpqnbtlP/install.d'] > > File "/usr/lib/python2.7/site-packages/instack/main.py", line 163, > > in main > > em.run() > > File "/usr/lib/python2.7/site-packages/instack/runner.py", line 79, > > in run > > self.run_hook(hook) > > File "/usr/lib/python2.7/site-packages/instack/runner.py", line 174, > > in run_hook > > raise Exception("Failed running command %s" % command) > > ERROR: 2015-09-18 08:26:14,701 -- None > > Traceback (most recent call last): > > File "", line 1, in > > File > > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > > line 544, in install > > :param instack_root: The path containing the instack-undercloud > > elements > > File > > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > > line 476, in _run_instack > > return instack_env > > File > > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", > > line 314, in _run_live_command > > stderr=subprocess.STDOUT) > > RuntimeError: instack failed. See log for details. > > ERROR: openstack Command 'instack-install-undercloud' returned > > non-zero exit status 1 > > [ > https://mail.egidegypt.com/owa/attachment.ashx?id=RgAAAACtgCAgsTXwQaoSKdlfh5krBwDolgrUG0tcQZzBLjGcH%2bLlAdJ7t9L0AADolgrUG0tcQZzBLjGcH%2bLlAdJ7t9UxAAAJ&attcnt=1&attid0=EAAN%2bSWesCiyQZRR5qEVlE9J]Best > Regards, > > > > -- > > > > > > > > > > > > > > > > > > > > 805010942448935 > > > > > > GR750055912MA > > > > > > Link to me on LinkedIn > > > > > > -- > Regards, > > Christopher Brown > Openstack Engineer > OCF plc > > Tel: +44 (0)114 257 2200 > Web: www.ocf.co.uk > Blog: blog.ocf.co.uk > Twitter: @ocfplc > > Please note, any emails relating to an OCF Support request must always > be sent to support at ocf.co.uk for a ticket number to be generated or > existing support ticket to be updated. Should this not be done then OCF > cannot be held responsible for requests not dealt with in a timely > manner. > > OCF plc is a company registered in England and Wales. Registered number > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > This message is private and confidential. If you have received this > message in error, please notify us immediately and remove it from your > system. > > > > > --- > This email has been checked for viruses by Avast antivirus software. > https://www.avast.com/antivirus > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgsousa at gmail.com Fri Sep 18 09:25:22 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Fri, 18 Sep 2015 10:25:22 +0100 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: Hi, I think you need epel-repo there. It should be documented that you need to install epel-release package. Regards, Pedro Sousa On Fri, Sep 18, 2015 at 9:54 AM, Mohammed Arafa wrote: > For the past 5 days i have been attempting to install RDOmanager. i have > been getting errors. and have been attempting to side step them. > > a) followed the documentation, got errors > > b) ignored delorean repo, unable to complete installation > > c) used the alternative trunk "before CI run has passed" delorean repo and > got errors. 
> > I am currently doing an internal POC and just want RDOmanager runnning. > how do i get the stable packages that are guaranteed to install?? > > for those who will ask what is the error? i am providing it below, pls > remember that i have limited time left (deadline=saturday night) to show a > working rdomanager installation. if fixing the error is fastest, great, if > directing me to documentation on using a stable repo. great. > > ps. i have a working rdo install _at_home_ about a week old, and based on > this success, i was doing this internal POC. > > _______ > > Dependency Process ending > Depsolve time: 0.720 > Error: Package: python-futurist-0.5.1-dev8.el7.centos.noarch (delorean) > Requires: python-contextlib2 >= 0.4.0 > Error: Package: openstack-tuskar-2013.2-dev8.el7.centos.noarch (delorean) > Requires: python-flask-babel > You could try using --skip-broken to work around the problem > You could try running: rpm -Va --nofiles --nodigest > INFO: 2015-09-18 08:26:14,700 -- ############### End stdout/stderr logging > ############### > ERROR: 2015-09-18 08:26:14,701 -- Hook FAILED. > ERROR: 2015-09-18 08:26:14,701 -- Failed running command ['dib-run-parts', > u'/tmp/tmpqnbtlP/install.d'] > File "/usr/lib/python2.7/site-packages/instack/main.py", line 163, in > main > em.run() > File "/usr/lib/python2.7/site-packages/instack/runner.py", line 79, in > run > self.run_hook(hook) > File "/usr/lib/python2.7/site-packages/instack/runner.py", line 174, in > run_hook > raise Exception("Failed running command %s" % command) > ERROR: 2015-09-18 08:26:14,701 -- None > Traceback (most recent call last): > File "", line 1, in > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 544, in install > :param instack_root: The path containing the instack-undercloud > elements > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 476, in _run_instack > return instack_env > File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 314, in _run_live_command > stderr=subprocess.STDOUT) > RuntimeError: instack failed. See log for details. > ERROR: openstack Command 'instack-install-undercloud' returned non-zero > exit status 1 > [ > https://mail.egidegypt.com/owa/attachment.ashx?id=RgAAAACtgCAgsTXwQaoSKdlfh5krBwDolgrUG0tcQZzBLjGcH%2bLlAdJ7t9L0AADolgrUG0tcQZzBLjGcH%2bLlAdJ7t9UxAAAJ&attcnt=1&attid0=EAAN%2bSWesCiyQZRR5qEVlE9J]Best > Regards, > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbrown2 at ocf.co.uk Fri Sep 18 16:42:01 2015 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Fri, 18 Sep 2015 17:42:01 +0100 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: <1442594521.12717.20.camel@ocf-laptop> Something is going very wrong somewhere. I have tried several different ways _just_ to install the undercloud. 
Using: http://trunk.rdoproject.org/centos7/current/delorean.repo I get the following error: + semodule -i /opt/stack/selinux-policy/ipxe.pp dib-run-parts Fri 18 Sep 17:01:04 BST 2015 00-apply-selinux-policy completed dib-run-parts Fri 18 Sep 17:01:04 BST 2015 Running /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies + set -o pipefail ++ mktemp -d + TMPDIR=/tmp/tmp.ZWLD1K9BTD + '[' -x /usr/sbin/semanage ']' + cd /tmp/tmp.ZWLD1K9BTD ++ ls '/opt/stack/selinux-policy/*.te' ls: cannot access /opt/stack/selinux-policy/*.te: No such file or directory + semodule -i '/tmp/tmp.ZWLD1K9BTD/*.pp' semodule: Failed on /tmp/tmp.ZWLD1K9BTD/*.pp! [2015-09-18 17:01:04,559] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] With http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo I get the same error. This is regardless of: export DIB_INSTALLTYPE_puppet_modules=source On Fri, 2015-09-18 at 13:34 +0100, Ha?kel wrote: > 2015-09-18 14:20 GMT+02:00 Eduardo Gonzalez : > > Hi, at a first glance, the missing packages are: > > > > python-contextlib2 > > python-flask-babel > > > > These should be provided by repositories in delorean-deps.repo (and > are enabled by default) > As shown here, they're both available. > https://repos.fedorapeople.org/repos/openstack/cbs/cloud7-openstack-common-testing/x86_64/os/Packages/ > > It could be a yum cache refreshment issue. > > > Also, at the overcloud image creation i've found a missing repo file: > > delorean-rdo-management.repo > > > > Regards, > > Eduardo Gonzalez > > > > ack > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Christopher Brown Openstack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner. OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From ibravo at ltgfederal.com Fri Sep 18 16:47:50 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Fri, 18 Sep 2015 12:47:50 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: <1442594521.12717.20.camel@ocf-laptop> References: <1442594521.12717.20.camel@ocf-laptop> Message-ID: Openstack heat seems to be broken: http://trunk.rdoproject.org/centos7/report.html __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Sep 18, 2015, at 12:42 PM, Christopher Brown wrote: > > Something is going very wrong somewhere. > > I have tried several different ways _just_ to install the undercloud. 
> > Using: > > http://trunk.rdoproject.org/centos7/current/delorean.repo > > I get the following error: > > + semodule -i /opt/stack/selinux-policy/ipxe.pp > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 00-apply-selinux-policy > completed > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 > Running /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies > + set -o pipefail > ++ mktemp -d > + TMPDIR=/tmp/tmp.ZWLD1K9BTD > + '[' -x /usr/sbin/semanage ']' > + cd /tmp/tmp.ZWLD1K9BTD > ++ ls '/opt/stack/selinux-policy/*.te' > ls: cannot access /opt/stack/selinux-policy/*.te: No such file or > directory > + semodule -i '/tmp/tmp.ZWLD1K9BTD/*.pp' > semodule: Failed on /tmp/tmp.ZWLD1K9BTD/*.pp! > [2015-09-18 17:01:04,559] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > > With > > http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo > > I get the same error. > > This is regardless of: > > export DIB_INSTALLTYPE_puppet_modules=source > > > > > On Fri, 2015-09-18 at 13:34 +0100, Ha?kel wrote: >> 2015-09-18 14:20 GMT+02:00 Eduardo Gonzalez : >>> Hi, at a first glance, the missing packages are: >>> >>> python-contextlib2 >>> python-flask-babel >>> >> >> These should be provided by repositories in delorean-deps.repo (and >> are enabled by default) >> As shown here, they're both available. >> https://repos.fedorapeople.org/repos/openstack/cbs/cloud7-openstack-common-testing/x86_64/os/Packages/ >> >> It could be a yum cache refreshment issue. >> >>> Also, at the overcloud image creation i've found a missing repo file: >>> delorean-rdo-management.repo >>> >>> Regards, >>> Eduardo Gonzalez >>> >> >> ack >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > Regards, > > Christopher Brown > Openstack Engineer > OCF plc > > Tel: +44 (0)114 257 2200 > Web: www.ocf.co.uk > Blog: blog.ocf.co.uk > Twitter: @ocfplc > > Please note, any emails relating to an OCF Support request must always > be sent to support at ocf.co.uk for a ticket number to be generated or > existing support ticket to be updated. Should this not be done then OCF > cannot be held responsible for requests not dealt with in a timely > manner. > > OCF plc is a company registered in England and Wales. Registered number > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > This message is private and confidential. If you have received this > message in error, please notify us immediately and remove it from your > system. > > > > > --- > This email has been checked for viruses by Avast antivirus software. > https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpeeler at redhat.com Fri Sep 18 17:42:55 2015 From: jpeeler at redhat.com (Jeff Peeler) Date: Fri, 18 Sep 2015 13:42:55 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: <1442594521.12717.20.camel@ocf-laptop> References: <1442594521.12717.20.camel@ocf-laptop> Message-ID: On Fri, Sep 18, 2015 at 12:42 PM, Christopher Brown wrote: > Something is going very wrong somewhere. > > I have tried several different ways _just_ to install the undercloud. 
> > Using: > > http://trunk.rdoproject.org/centos7/current/delorean.repo > > I get the following error: > > + semodule -i /opt/stack/selinux-policy/ipxe.pp > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 00-apply-selinux-policy > completed > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 > Running /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies > + set -o pipefail > ++ mktemp -d > + TMPDIR=/tmp/tmp.ZWLD1K9BTD > + '[' -x /usr/sbin/semanage ']' > + cd /tmp/tmp.ZWLD1K9BTD > ++ ls '/opt/stack/selinux-policy/*.te' > ls: cannot access /opt/stack/selinux-policy/*.te: No such file or > directory > + semodule -i '/tmp/tmp.ZWLD1K9BTD/*.pp' > semodule: Failed on /tmp/tmp.ZWLD1K9BTD/*.pp! > [2015-09-18 17:01:04,559] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > > With > > http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo > > I get the same error. > > This is regardless of: > > export DIB_INSTALLTYPE_puppet_modules=source I ran into this error as well. Unfortunately, using the delorean repo straight from current is as this thread indicates, not stable. There's an outstanding review here that uses a good delorean repo to install from: https://review.openstack.org/#/c/223293/2 I highly recommend using that and going from there. Do make note that in this particular delorean repo, inspector needs to be set to disabled and the conductor service either restarted or sent a HUP signal. From jslagle at redhat.com Fri Sep 18 19:15:06 2015 From: jslagle at redhat.com (James Slagle) Date: Fri, 18 Sep 2015 15:15:06 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: <1442594521.12717.20.camel@ocf-laptop> Message-ID: <20150918191506.GJ13458@localhost.localdomain> To address $SUBJECT directly: Not currently. We had rdo-manager running with kilo for a while, but some of the packages got stale as priorities shifted. We switched some of the builds to pull from master, and that worked for a while, but now it's failing again. We've instead been focusing our effort into making upstream TripleO stable at this time (which follows the same rdo-manager workflow). That's currently using a delorean trunk repo that typically lags a week or 2 behind the "current" repo, so currently Liberty based. That's a work in progress, as folks are discovering. We're resource constrained on several fronts, but it's actively being worked on daily in #tripleo on freenode. The plan is to have the issues worked out and be able to keep not only stability with Delorean trunk, but on RDO Liberty as well. Please see my inline reply as well below, it should help some to get further along. On Fri, Sep 18, 2015 at 01:42:55PM -0400, Jeff Peeler wrote: > On Fri, Sep 18, 2015 at 12:42 PM, Christopher Brown wrote: > > Something is going very wrong somewhere. > > > > I have tried several different ways _just_ to install the undercloud. 
> > > > Using: > > > > http://trunk.rdoproject.org/centos7/current/delorean.repo > > > > I get the following error: > > > > + semodule -i /opt/stack/selinux-policy/ipxe.pp > > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 00-apply-selinux-policy > > completed > > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 > > Running /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies > > + set -o pipefail > > ++ mktemp -d > > + TMPDIR=/tmp/tmp.ZWLD1K9BTD > > + '[' -x /usr/sbin/semanage ']' > > + cd /tmp/tmp.ZWLD1K9BTD > > ++ ls '/opt/stack/selinux-policy/*.te' > > ls: cannot access /opt/stack/selinux-policy/*.te: No such file or > > directory > > + semodule -i '/tmp/tmp.ZWLD1K9BTD/*.pp' > > semodule: Failed on /tmp/tmp.ZWLD1K9BTD/*.pp! > > [2015-09-18 17:01:04,559] (os-refresh-config) [ERROR] during configure > > phase. [Command '['dib-run-parts', > > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > > status 1] > > > > With > > > > http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo > > > > I get the same error. > > > > This is regardless of: > > > > export DIB_INSTALLTYPE_puppet_modules=source > > I ran into this error as well. Unfortunately, using the delorean repo > straight from current is as this thread indicates, not stable. There's > an outstanding review here that uses a good delorean repo to install > from: > > https://review.openstack.org/#/c/223293/2 > > I highly recommend using that and going from there. Do make note that > in this particular delorean repo, inspector needs to be set to > disabled and the conductor service either restarted or sent a HUP > signal. Thanks Jeff for pointing this out. I've pushed up a couple more patchsets as well, so make sure to be looking at the latest one: https://review.openstack.org/#/c/223293/ If you git clone tripleo-docs, run "tox -e docs" from your checkout, and the docs will be built under doc/build. You can also view an html build directly from the jenkins job results on the patchset. In addition to that, there is another patch in flight to address the packaging changes around ironic-inspector: https://review.openstack.org/#/c/223267 That's a little bit harder to apply and try out, so I built it into an rpm if folks are blocked on this: https://fedorapeople.org/~slagle/instack-undercloud-2.1.2.post206-99999.noarch.rpm Just before running "openstack undercloud install" you'll need to: sudo yum install -y yum-plugin-versionlock sudo rpm -Uvh --force https://fedorapeople.org/~slagle/instack-undercloud-2.1.2.post206-99999.noarch.rpm sudo yum versionlock instack-undercloud That got me a successful undercloud install. I'm testing image builds now. If anyone does try these patches, reviews in gerrit are appreciated. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From rbowen at redhat.com Fri Sep 18 19:35:40 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 18 Sep 2015 15:35:40 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" Message-ID: <55FC678C.3020604@redhat.com> One of the goals of the RDO Packstack Quickstart is to give people a successful and easy first-time experience deploying OpenStack, even if what they're left with (an --allinone deployment) might not be, strictly speaking, *useful* for much. 
Today on IRC I asked if we might possibly work towards a similar quickstart for RDO-Manager, where we make a bunch of assumptions and automate whatever parts of it we can, and end up with a "do these three steps" kind of thing, like the Packstack quickstart. I've included the transcript of the conversation below, but since IRC transcripts can be confusing after the fact, to summarize, slagle opined that it might be feasible to have two paths - the full-featured path that we currently have, but also something as I described above for your first time. I wanted to toss this out here for a larger audience to see whether this seems like a reasonable goal to pursue? Is it feasible that we might get the RDO-Manager instructions down to a Quickstart-like page? I don't even necessarily mean NOW, I mean, as a short/medium term goal. The instructions are very intimidating. rbowen, I suspect we might be able to do that if we make a fair number of assumptions and possibly try to leverage some amount of the upstream CI work slagle is presently hacking on. slagle may violently disagree in which case, ignore what I just said. ;-) morazi: rbowen : sure, we can trim whatever. just keep in mind i might not be the best docs writer. we originally started small, but that just caused a lot of mistakes/issues. so we made a conscious decision to be explicit hence the current nature of the docs Yeah, I think there's room for both types of docs, much like there's room for the Packstack quickstart and the full-blown do-it-by-hand OpenStack installation docs. slagle, oh, I was thinking about -- can we lift the ci script and say -- your quickstart is to configure this script. The full blown install is well, we're documenting the same tool and...are stated goal is not to be packstack i'm all for making it easy but we're still bound by how complex openstack is to install in *real* world configurations rbowen, basically with a highly specific/opinionated virtual environment I think we could make it really easy. If you want to inject choice back into it, things get complex quickly. That's the crux to me if that makes sense. I think perhaps what I'm looking for isn't entirely realistic, but it's kind of the "success first time" goal that was part of the original Quickstart page. rbowen, what might be interesting is -- what are the assumptions that the packstack quickstart makes for you? Can we easily figure that out? And then once you have a successful first time, you can learn more about various steps. The quickstart does kind of assume that you'll be left with something that's ... well, not very useful. But it gives you a flavor of it. rbowen: yea, agreed. it's about finding the right balance there Much like the "how fast can you install OpenStack from scratch" contests they had at OpenStack Summit. for what it's worth, I followed the guides and they are already simple. There is an appendix for more complex stuff the feedback swings from "this is too hard", then the more we hide away behind scripts we start hearing "i don't know what this is doing, too much is hidden" ibravo, cool, thx for that feedback! rbowen: so, 2 paths probably makes sense dmsimard, how come is enforcing, when it has -permissive- in the name?? ibravo, I tend to agree. They are pretty simple but there is a lot there in terms of number of steps. nothing in particular is super hard but you have to do many things. ibravo: Ok, that's good to hear. I obviously need to block off a half day at some point to work through it. 
I mostly skimmed through and it was intimidating, but I was in a hurry. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From pmyers at redhat.com Fri Sep 18 19:49:39 2015 From: pmyers at redhat.com (Perry Myers) Date: Fri, 18 Sep 2015 15:49:39 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC678C.3020604@redhat.com> References: <55FC678C.3020604@redhat.com> Message-ID: <55FC6AD3.8080300@redhat.com> On 09/18/2015 03:35 PM, Rich Bowen wrote: > One of the goals of the RDO Packstack Quickstart is to give people a > successful and easy first-time experience deploying OpenStack, even if > what they're left with (an --allinone deployment) might not be, strictly > speaking, *useful* for much. > > Today on IRC I asked if we might possibly work towards a similar > quickstart for RDO-Manager, where we make a bunch of assumptions and > automate whatever parts of it we can, and end up with a "do these three > steps" kind of thing, like the Packstack quickstart. > > I've included the transcript of the conversation below, but since IRC > transcripts can be confusing after the fact, to summarize, slagle opined > that it might be feasible to have two paths - the full-featured path > that we currently have, but also something as I described above for your > first time. > > I wanted to toss this out here for a larger audience to see whether this > seems like a reasonable goal to pursue? +1 I think it's critical to have something that's easy to work for _very_ constrained use cases. But I also agree with the below sentiments that we need to properly document and enable folks to do more complex deployments after they've had their first success with the minimal deployment option One thing to consider in all of this is... what is the minimum deployment footprint? I think we have to assume virtual, since most folks won't have a lab with 6 nodes sitting around. A few options: a Undercloud on one VM, single overcloud controller on another VM, single compute node on another VM (using nested virt, or just plain emulation) b 2nd variation on the above would be to run the 3 node controller HA setup, which means 1 undercloud, 3 overcloud controllers + 1 compute The question is... what is the minimum amount of RAM that you can run an overcloud controller with? 4GB? Or can that be squeezed to 2 or 3GB just for playing around purposes? What is the minimum amount of RAM you need for the undercloud node? If 4GB per VM, then a) maybe can be done on a 16GB system, while b) needs 32GB If we could squeeze controller and undercloud nodes into 3GB each, then it might be possible to run b) on a 16GB machine, opening up experimentation with RDO Manager in a real HA configuration to lots more people Perry From ryansb at redhat.com Fri Sep 18 19:55:48 2015 From: ryansb at redhat.com (Ryan S. Brown) Date: Fri, 18 Sep 2015 15:55:48 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC6AD3.8080300@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> Message-ID: <55FC6C44.50500@redhat.com> On 09/18/2015 03:49 PM, Perry Myers wrote: > On 09/18/2015 03:35 PM, Rich Bowen wrote: >> One of the goals of the RDO Packstack Quickstart is to give people a >> successful and easy first-time experience deploying OpenStack, even if >> what they're left with (an --allinone deployment) might not be, strictly >> speaking, *useful* for much. 
>> >> Today on IRC I asked if we might possibly work towards a similar >> quickstart for RDO-Manager, where we make a bunch of assumptions and >> automate whatever parts of it we can, and end up with a "do these three >> steps" kind of thing, like the Packstack quickstart. >> >> I've included the transcript of the conversation below, but since IRC >> transcripts can be confusing after the fact, to summarize, slagle opined >> that it might be feasible to have two paths - the full-featured path >> that we currently have, but also something as I described above for your >> first time. >> >> I wanted to toss this out here for a larger audience to see whether this >> seems like a reasonable goal to pursue? > > +1 > > I think it's critical to have something that's easy to work for _very_ > constrained use cases. But I also agree with the below sentiments that > we need to properly document and enable folks to do more complex > deployments after they've had their first success with the minimal > deployment option > > One thing to consider in all of this is... what is the minimum > deployment footprint? I think we have to assume virtual, since most > folks won't have a lab with 6 nodes sitting around. > > A few options: > > a Undercloud on one VM, single overcloud controller on another VM, > single compute node on another VM (using nested virt, or just plain > emulation) > > b 2nd variation on the above would be to run the 3 node controller HA > setup, which means 1 undercloud, 3 overcloud controllers + 1 compute > > The question is... what is the minimum amount of RAM that you can run an > overcloud controller with? 4GB? Or can that be squeezed to 2 or 3GB > just for playing around purposes? > > What is the minimum amount of RAM you need for the undercloud node? > > If 4GB per VM, then a) maybe can be done on a 16GB system, while b) > needs 32GB If we allow for "not very useful" as a stated caveat of the all-in-one, then we could probably get away with 3GB and swap for both overcloud VMs and 4GB for the undercloud. It's possible to go lower for the undercloud if you have a lot of swap and are patient. It may lead to timeouts/broken-ness, so I wouldn't recommend it. > If we could squeeze controller and undercloud nodes into 3GB each, then > it might be possible to run b) on a 16GB machine, opening up > experimentation with RDO Manager in a real HA configuration to lots more > people > > Perry > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Ryan Brown / Senior Software Engineer, OpenStack / Red Hat, Inc. From jslagle at redhat.com Fri Sep 18 20:10:01 2015 From: jslagle at redhat.com (James Slagle) Date: Fri, 18 Sep 2015 16:10:01 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC6AD3.8080300@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> Message-ID: <20150918201001.GK13458@localhost.localdomain> On Fri, Sep 18, 2015 at 03:49:39PM -0400, Perry Myers wrote: > On 09/18/2015 03:35 PM, Rich Bowen wrote: > > One of the goals of the RDO Packstack Quickstart is to give people a > > successful and easy first-time experience deploying OpenStack, even if > > what they're left with (an --allinone deployment) might not be, strictly > > speaking, *useful* for much. 
> > > > Today on IRC I asked if we might possibly work towards a similar > > quickstart for RDO-Manager, where we make a bunch of assumptions and > > automate whatever parts of it we can, and end up with a "do these three > > steps" kind of thing, like the Packstack quickstart. > > > > I've included the transcript of the conversation below, but since IRC > > transcripts can be confusing after the fact, to summarize, slagle opined > > that it might be feasible to have two paths - the full-featured path > > that we currently have, but also something as I described above for your > > first time. > > > > I wanted to toss this out here for a larger audience to see whether this > > seems like a reasonable goal to pursue? > > +1 > > I think it's critical to have something that's easy to work for _very_ > constrained use cases. But I also agree with the below sentiments that > we need to properly document and enable folks to do more complex > deployments after they've had their first success with the minimal > deployment option The quickstart idea was kind of what I was going for with this a while back: https://www.rdoproject.org/TripleO_VM_Setup (***now outdated***). It might still look like a lot of commands to run, but it could be way less verbose with some simple tooling. I wasn't trying to hide all the details when I originally wrote that. Anyway, I do think we could so something similar again for RDO-Manager. > > One thing to consider in all of this is... what is the minimum > deployment footprint? I think we have to assume virtual, since most > folks won't have a lab with 6 nodes sitting around. > > A few options: > > a Undercloud on one VM, single overcloud controller on another VM, > single compute node on another VM (using nested virt, or just plain > emulation) I try to stay away from nested KVM. Every now and then I or someone will come along and try it, report it works for a bit, but ends up in various kernel crashes/panics if the environment stays up for too long. We did try with it enabled in an OpenStack cloud itself during a hackfest event, and had planned on giving each participant a 32 GB vm (spawned by nova), that had KVM support. They would then use that to do an rdo-manager HA deployment in virt. It hummed along quite nicely initially, but started experiencing hard lockups before long. > > b 2nd variation on the above would be to run the 3 node controller HA > setup, which means 1 undercloud, 3 overcloud controllers + 1 compute > > The question is... what is the minimum amount of RAM that you can run an > overcloud controller with? 4GB? Or can that be squeezed to 2 or 3GB > just for playing around purposes? > > What is the minimum amount of RAM you need for the undercloud node? 4GB, and that now results in OOM'ing after a couple deployments without swap enabled on the undercloud. > > If 4GB per VM, then a) maybe can be done on a 16GB system, while b) > needs 32GB You can do a) with 12GB max (I do on my laptop) since not all of the memory is in use, and you can give the compute node much less than 4GB even. KSM also helps. > > If we could squeeze controller and undercloud nodes into 3GB each, then We haven't honestly spent a lot (any) time tuning for memory optimization, so it might be possible, but I'm a little doubtful. > it might be possible to run b) on a 16GB machine, opening up > experimentation with RDO Manager in a real HA configuration to lots more > people If you have swap enabled on the host, I've done b) on a 16GB host. 
I don't recall how much swap ended up getting used. > > Perry > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- James Slagle -- From pmyers at redhat.com Fri Sep 18 21:04:04 2015 From: pmyers at redhat.com (Perry Myers) Date: Fri, 18 Sep 2015 17:04:04 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC6C44.50500@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> <55FC6C44.50500@redhat.com> Message-ID: <55FC7C44.7050009@redhat.com> >> What is the minimum amount of RAM you need for the undercloud node? >> >> If 4GB per VM, then a) maybe can be done on a 16GB system, while b) >> needs 32GB > > If we allow for "not very useful" as a stated caveat of the all-in-one, > then we could probably get away with I think we need to more clearly define what "not very useful" means. >From my limited PoV, useful would be the ability to run 1 or two Instances just to try out the system end to end. Those Instances could be very very slimmed down Fedora images or even Cirros images. However, for someone else useful might mean a whole other host of things. So we should be careful to identify specific personas here, and map a specific install footprint to that particular persona's view of useful > 3GB and swap for both overcloud VMs and 4GB for the undercloud. > > It's possible to go lower for the undercloud if you have a lot of swap > and are patient. It may lead to timeouts/broken-ness, so I wouldn't > recommend it. Ack From bderzhavets at hotmail.com Fri Sep 18 21:00:18 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 18 Sep 2015 17:00:18 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC6AD3.8080300@redhat.com> References: <55FC678C.3020604@redhat.com>,<55FC6AD3.8080300@redhat.com> Message-ID: > To: rbowen at redhat.com; rdo-list at redhat.com > From: pmyers at redhat.com > Date: Fri, 18 Sep 2015 15:49:39 -0400 > Subject: Re: [Rdo-list] RDO-Manager "quickstart" > > On 09/18/2015 03:35 PM, Rich Bowen wrote: > > One of the goals of the RDO Packstack Quickstart is to give people a > > successful and easy first-time experience deploying OpenStack, even if > > what they're left with (an --allinone deployment) might not be, strictly > > speaking, *useful* for much. > > > > Today on IRC I asked if we might possibly work towards a similar > > quickstart for RDO-Manager, where we make a bunch of assumptions and > > automate whatever parts of it we can, and end up with a "do these three > > steps" kind of thing, like the Packstack quickstart. > > > > I've included the transcript of the conversation below, but since IRC > > transcripts can be confusing after the fact, to summarize, slagle opined > > that it might be feasible to have two paths - the full-featured path > > that we currently have, but also something as I described above for your > > first time. > > > > I wanted to toss this out here for a larger audience to see whether this > > seems like a reasonable goal to pursue? > > +1 > > I think it's critical to have something that's easy to work for _very_ > constrained use cases. But I also agree with the below sentiments that > we need to properly document and enable folks to do more complex > deployments after they've had their first success with the minimal > deployment option > > One thing to consider in all of this is... 
what is the minimum > deployment footprint? I think we have to assume virtual, since most > folks won't have a lab with 6 nodes sitting around. > > A few options: > > a Undercloud on one VM, single overcloud controller on another VM, > single compute node on another VM (using nested virt, or just plain > emulation) > > b 2nd variation on the above would be to run the 3 node controller HA > setup, which means 1 undercloud, 3 overcloud controllers + 1 compute > > The question is... what is the minimum amount of RAM that you can run an > overcloud controller with? 4GB? Or can that be squeezed to 2 or 3GB > just for playing around purposes? > > What is the minimum amount of RAM you need for the undercloud node? > > If 4GB per VM, then a) maybe can be done on a 16GB system, while b) > needs 32GB 32 GB RAM is not actually a big problem, but test "b" will require 8 CORE CPU at least like Intel? Xeon? Processor E5-2690 (costs about $2000) and corresponding board, which require business environment and would fail on desktop CPU. Even expensive i7 ( Haswell Kernel) top line models won't provide ability to test "b", only "a" due to 4 CORES limitation even with HT enabled. Please, correct me if I am wrong about that. Boris > > If we could squeeze controller and undercloud nodes into 3GB each, then > it might be possible to run b) on a 16GB machine, opening up > experimentation with RDO Manager in a real HA configuration to lots more > people > > Perry > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Fri Sep 18 21:20:41 2015 From: pmyers at redhat.com (Perry Myers) Date: Fri, 18 Sep 2015 17:20:41 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> Message-ID: <55FC8029.4030203@redhat.com> > 32 GB RAM is not actually a big problem, but test "b" will require 8 > CORE CPU at least like > Intel? Xeon? Processor E5-2690 (costs about $2000) and corresponding > board, which require > business environment and would fail on desktop CPU. Even expensive i7 ( > Haswell Kernel) top line > models won't provide ability to test "b", only "a" due to 4 CORES > limitation even with HT enabled. > Please, correct me if I am wrong about that. I don't think overcommitting cores is as big of a deal as overcommitting RAM. Yes, things will be slower, but it's a linear slowdown vs. overcommit of RAM which leads to swap and orders of magnitude slowdown. So I would think b, which requires 6VMs (if you are running just one nested instance) is still doable with a 4 core HT machine. Again, it's not going to perform very well, but to just try out the deployment and see what things look like, I would think it would be sufficient I have a 32GB "Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz" on my desk, so I'd be happy to test the theory if I can get instructions simple enough to run [1]. Perry [1] Where simple enough refers to simple enough for even a manager/non- developer to follow... 
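A minimal sketch, for anyone who wants to try the nested-KVM variant of options a) or b) discussed above, of how nesting is typically checked and enabled on an Intel host. The modprobe.d file name below is arbitrary, and reloading kvm_intel only works while no VMs are running:

$ cat /sys/module/kvm_intel/parameters/nested        # "Y" (or "1") means nesting is already on
$ echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
$ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
$ virt-host-validate                                  # optional sanity check, if the libvirt client tools are installed

On an AMD host the module and parameter are kvm_amd/nested instead.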
From pmyers at redhat.com Fri Sep 18 21:22:19 2015 From: pmyers at redhat.com (Perry Myers) Date: Fri, 18 Sep 2015 17:22:19 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <20150918201001.GK13458@localhost.localdomain> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> <20150918201001.GK13458@localhost.localdomain> Message-ID: <55FC808B.6020902@redhat.com> >> a Undercloud on one VM, single overcloud controller on another VM, >> single compute node on another VM (using nested virt, or just plain >> emulation) > > I try to stay away from nested KVM. Every now and then I or someone will come > along and try it, report it works for a bit, but ends up in various kernel > crashes/panics if the environment stays up for too long. For some (again need to look at specific personas), _emulated_ Instances might be good enough. (i.e. no nested KVM and instead using qemu emulation on the virtual Compute Node) It's not fast, but it is enough to show end to end usage of the system, in a fairly minimal hardware footprint. As for the stability of nested KVM... Kashyap any thoughts on this? > We did try with it enabled in an OpenStack cloud itself during a hackfest > event, and had planned on giving each participant a 32 GB vm (spawned by nova), > that had KVM support. They would then use that to do an rdo-manager HA > deployment in virt. It hummed along quite nicely initially, but started > experiencing hard lockups before long. > >> >> b 2nd variation on the above would be to run the 3 node controller HA >> setup, which means 1 undercloud, 3 overcloud controllers + 1 compute >> >> The question is... what is the minimum amount of RAM that you can run an >> overcloud controller with? 4GB? Or can that be squeezed to 2 or 3GB >> just for playing around purposes? >> >> What is the minimum amount of RAM you need for the undercloud node? > > 4GB, and that now results in OOM'ing after a couple deployments without swap > enabled on the undercloud. Ack >> >> If 4GB per VM, then a) maybe can be done on a 16GB system, while b) >> needs 32GB > > You can do a) with 12GB max (I do on my laptop) since not all of the memory is > in use, and you can give the compute node much less than 4GB even. KSM also > helps. Ah, good point about KSM. Last time I ran it (admittedly more than 1 yr ago) all it did was suck massive amounts of CPU cycles from me, but maybe it's gotten a bit more efficient since then? :) >> >> If we could squeeze controller and undercloud nodes into 3GB each, then > > We haven't honestly spent a lot (any) time tuning for memory optimization, so > it might be possible, but I'm a little doubtful. Ack >> it might be possible to run b) on a 16GB machine, opening up >> experimentation with RDO Manager in a real HA configuration to lots more >> people > > If you have swap enabled on the host, I've done b) on a 16GB host. I don't > recall how much swap ended up getting used. 
Thx From rybrown at redhat.com Fri Sep 18 19:55:04 2015 From: rybrown at redhat.com (Ryan Brown) Date: Fri, 18 Sep 2015 15:55:04 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC6AD3.8080300@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> Message-ID: <55FC6C18.10203@redhat.com> On 09/18/2015 03:49 PM, Perry Myers wrote: > On 09/18/2015 03:35 PM, Rich Bowen wrote: >> One of the goals of the RDO Packstack Quickstart is to give people a >> successful and easy first-time experience deploying OpenStack, even if >> what they're left with (an --allinone deployment) might not be, strictly >> speaking, *useful* for much. >> >> Today on IRC I asked if we might possibly work towards a similar >> quickstart for RDO-Manager, where we make a bunch of assumptions and >> automate whatever parts of it we can, and end up with a "do these three >> steps" kind of thing, like the Packstack quickstart. >> >> I've included the transcript of the conversation below, but since IRC >> transcripts can be confusing after the fact, to summarize, slagle opined >> that it might be feasible to have two paths - the full-featured path >> that we currently have, but also something as I described above for your >> first time. >> >> I wanted to toss this out here for a larger audience to see whether this >> seems like a reasonable goal to pursue? > > +1 > > I think it's critical to have something that's easy to work for _very_ > constrained use cases. But I also agree with the below sentiments that > we need to properly document and enable folks to do more complex > deployments after they've had their first success with the minimal > deployment option > > One thing to consider in all of this is... what is the minimum > deployment footprint? I think we have to assume virtual, since most > folks won't have a lab with 6 nodes sitting around. > > A few options: > > a Undercloud on one VM, single overcloud controller on another VM, > single compute node on another VM (using nested virt, or just plain > emulation) > > b 2nd variation on the above would be to run the 3 node controller HA > setup, which means 1 undercloud, 3 overcloud controllers + 1 compute > > The question is... what is the minimum amount of RAM that you can run an > overcloud controller with? 4GB? Or can that be squeezed to 2 or 3GB > just for playing around purposes? > > What is the minimum amount of RAM you need for the undercloud node? > > If 4GB per VM, then a) maybe can be done on a 16GB system, while b) > needs 32GB I think if we allow for "not very useful" as a stated caveat of the all-in-one, then we could probably get away with 3GB and swap for both overcloud VMs and 4GB for the undercloud. It's possible to go lower for the undercloud if you have a lot of swap and are patient. It may lead to timeouts/broken-ness, so I wouldn't recommend it. > If we could squeeze controller and undercloud nodes into 3GB each, then > it might be possible to run b) on a 16GB machine, opening up > experimentation with RDO Manager in a real HA configuration to lots more > people > > Perry > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Ryan Brown / Software Engineer, Openstack / Red Hat, Inc. 
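For the "lots of swap" approach suggested above, a rough sketch of adding a swap file to a memory-constrained undercloud or host VM; the 4GB size and the /swapfile path are arbitrary choices:

$ sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab    # make it persistent across reboots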
From pgsousa at gmail.com Sat Sep 19 12:00:02 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Sat, 19 Sep 2015 13:00:02 +0100 Subject: [Rdo-list] ERROR: Store for scheme file not found after glance upgrade to Kilo Message-ID: Hi all, I've updated my RDO Juno to Kilo and everything went smoothly except glance-api, that doesn't start anymore if I have: *[default]* *default_store=swift* *[glance_store]* *stores=glance.store.swift.Store,glance.store.http.Store* I get this error: *ERROR: Store for scheme file not found* If I comment out the *stores=glance.store.swift.Store,glance.store.http.Store *glance-api starts but it doesn't use swift backend anymore, it tries to use file backend. Anyone had this issue? Thanks, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Sat Sep 19 14:03:48 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Sat, 19 Sep 2015 16:03:48 +0200 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC808B.6020902@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> <20150918201001.GK13458@localhost.localdomain> <55FC808B.6020902@redhat.com> Message-ID: <20150919140348.GA23243@tesla.redhat.com> On Fri, Sep 18, 2015 at 05:22:19PM -0400, Perry Myers wrote: > >> a Undercloud on one VM, single overcloud controller on another VM, > >> single compute node on another VM (using nested virt, or just plain > >> emulation) > > > > I try to stay away from nested KVM. Every now and then I or someone > > will come along and try it, report it works for a bit, but ends up > > in various kernel crashes/panics if the environment stays up for too > > long. If these are reliably (or occur even once or twice) reproducible, please take time to file bugs. Upstream KVM maintainers say (at one of previous KVM Forums) not many nested KVM bugs are reported. Without more testing, and diligent bug reporting, nested KVM won't magically become stable. And, IME, upstream has usually been responsive to bugs. Test matrix explosion (with so many combinations on baremetal, guest hypervisor, and the nested guest) is a challenge. Any efforts towards consistent testing and bug reporting are helpful contributions to make nested virt stable. > For some (again need to look at specific personas), _emulated_ > Instances might be good enough. (i.e. no nested KVM and instead using > qemu emulation on the virtual Compute Node) True. > It's not fast, but it is enough to show end to end usage of the > system, in a fairly minimal hardware footprint. > > As for the stability of nested KVM... Kashyap any thoughts on this? Some comments: - Speaking from my testing on Fedora (often Kernel & Virt components from Rawhide) and Intel, nested KVM is now much more stable (I haven't seen a crash or panic in the last year) than what it was 2 years ago. - Using EL 7.x (or better, current Fedora stable Kernels), nested KVM should work relatively pain-free for Linux on Linux use cases. (But if you do see any, please report.) EL7.1 (and above) also has nested Extended Page Tables (EPT) support, which makes nVMX (Intel-based nested Virt) more performant[1][2]. Ensure you have this Kernel parameter is enabled: $ cat /sys/module/kvm_intel/parameters/ept Y - AMD-based Nested Virtualization support is slightly better, upstream reports(refer slide-29 here[3]). But, there's consistent bug fixing and automated testing efforts upstream to make nVMX support better too. Upstream also does discusses frequent new features. 
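Related to the OpenStack-on-OpenStack scenario mentioned earlier in this thread: the VMs used as virtual overcloud nodes need to see the vmx flag, which usually means having the cloud's compute nodes expose the host CPU. A minimal sketch of one way to do that with the libvirt driver (cpu_mode could also be host-model, depending on the environment):

$ sudo crudini --set /etc/nova/nova.conf libvirt cpu_mode host-passthrough
$ sudo systemctl restart openstack-nova-compute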
[1] A talk about it by Gleb Natapov, KVM maintainer in 2013 -- http://www.linux-kvm.org/images/8/8c/Kvm-forum-2013-nested-ept.pdf [2] Slide-18 for nested EPT test -- http://events.linuxfoundation.org/sites/events/files/slides/nested-virt-kvm-on-kvm-CloudOpen-Eu-2013-Kashyap-Chamarthy_0.pdf [3] Talk by Jan Kizka, Bandan Das et al. at 2014 KVM Forum, with details on performance evaluation -- http://www.linux-kvm.org/images/3/33/02x03-NestedVirtualization.pdf > > We did try with it enabled in an OpenStack cloud itself during a > > hackfest event, and had planned on giving each participant a 32 GB > > vm (spawned by nova), that had KVM support. They would then use that > > to do an rdo-manager HA deployment in virt. It hummed along quite > > nicely initially, but started experiencing hard lockups before long. [. . .] > > You can do a) with 12GB max (I do on my laptop) since not all of the > > memory is in use, and you can give the compute node much less than > > 4GB even. KSM also helps. > > Ah, good point about KSM. Last time I ran it (admittedly more than 1 > yr ago) all it did was suck massive amounts of CPU cycles from me, Yeah, the above was the case when it was relatively new (about 2009-2010). > but maybe it's gotten a bit more efficient since then? :) Yep, FWIW, I enable it (Kernel Samepage Merging) in my homogenous (all Linux) test environemnt, and don't see any noticeable spikes in CPU consumption (sorry, no real benchmark details). I think if you have many identical guests, it's worth it to enable allowing it to increase the memory density. [. . .] -- /kashyap From mohammed.arafa at gmail.com Sat Sep 19 20:31:18 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sat, 19 Sep 2015 16:31:18 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: <20150918191506.GJ13458@localhost.localdomain> References: <1442594521.12717.20.camel@ocf-laptop> <20150918191506.GJ13458@localhost.localdomain> Message-ID: thank you i was able to get the undercloud installed (on a saturday!) currently downloading images may i suggest that deltarpm be installed by default centos7 or somehow pulled in with rdo-manager? On Fri, Sep 18, 2015 at 3:15 PM, James Slagle wrote: > To address $SUBJECT directly: > > Not currently. We had rdo-manager running with kilo for a while, but some > of > the packages got stale as priorities shifted. We switched some of the > builds to > pull from master, and that worked for a while, but now it's failing again. > > We've instead been focusing our effort into making upstream TripleO stable > at > this time (which follows the same rdo-manager workflow). That's currently > using > a delorean trunk repo that typically lags a week or 2 behind the "current" > repo, so currently Liberty based. > > That's a work in progress, as folks are discovering. We're resource > constrained > on several fronts, but it's actively being worked on daily in #tripleo on > freenode. > > The plan is to have the issues worked out and be able to keep not only > stability with Delorean trunk, but on RDO Liberty as well. > > Please see my inline reply as well below, it should help some to get > further > along. > > On Fri, Sep 18, 2015 at 01:42:55PM -0400, Jeff Peeler wrote: > > On Fri, Sep 18, 2015 at 12:42 PM, Christopher Brown > wrote: > > > Something is going very wrong somewhere. > > > > > > I have tried several different ways _just_ to install the undercloud. 
> > > > > > Using: > > > > > > http://trunk.rdoproject.org/centos7/current/delorean.repo > > > > > > I get the following error: > > > > > > + semodule -i /opt/stack/selinux-policy/ipxe.pp > > > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 00-apply-selinux-policy > > > completed > > > dib-run-parts Fri 18 Sep 17:01:04 BST 2015 > > > Running > /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies > > > + set -o pipefail > > > ++ mktemp -d > > > + TMPDIR=/tmp/tmp.ZWLD1K9BTD > > > + '[' -x /usr/sbin/semanage ']' > > > + cd /tmp/tmp.ZWLD1K9BTD > > > ++ ls '/opt/stack/selinux-policy/*.te' > > > ls: cannot access /opt/stack/selinux-policy/*.te: No such file or > > > directory > > > + semodule -i '/tmp/tmp.ZWLD1K9BTD/*.pp' > > > semodule: Failed on /tmp/tmp.ZWLD1K9BTD/*.pp! > > > [2015-09-18 17:01:04,559] (os-refresh-config) [ERROR] during configure > > > phase. [Command '['dib-run-parts', > > > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > > > status 1] > > > > > > With > > > > > > > http://trunk.rdoproject.org/centos7/38/1c/381cac9139096bfef49952f3fd67e19451160b61_4bc2d731/delorean.repo > > > > > > I get the same error. > > > > > > This is regardless of: > > > > > > export DIB_INSTALLTYPE_puppet_modules=source > > > > I ran into this error as well. Unfortunately, using the delorean repo > > straight from current is as this thread indicates, not stable. There's > > an outstanding review here that uses a good delorean repo to install > > from: > > > > https://review.openstack.org/#/c/223293/2 > > > > I highly recommend using that and going from there. Do make note that > > in this particular delorean repo, inspector needs to be set to > > disabled and the conductor service either restarted or sent a HUP > > signal. > > Thanks Jeff for pointing this out. I've pushed up a couple more patchsets > as > well, so make sure to be looking at the latest one: > https://review.openstack.org/#/c/223293/ > If you git clone tripleo-docs, run "tox -e docs" from your checkout, and > the > docs will be built under doc/build. You can also view an html build > directly > from the jenkins job results on the patchset. > > In addition to that, there is another patch in flight to address the > packaging > changes around ironic-inspector: > https://review.openstack.org/#/c/223267 > > That's a little bit harder to apply and try out, so I built it into an rpm > if > folks are blocked on this: > > https://fedorapeople.org/~slagle/instack-undercloud-2.1.2.post206-99999.noarch.rpm > > Just before running "openstack undercloud install" you'll need to: > sudo yum install -y yum-plugin-versionlock > sudo rpm -Uvh --force > https://fedorapeople.org/~slagle/instack-undercloud-2.1.2.post206-99999.noarch.rpm > sudo yum versionlock instack-undercloud > > That got me a successful undercloud install. I'm testing image builds now. > > If anyone does try these patches, reviews in gerrit are appreciated. 
> > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- > -- James Slagle > -- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Mon Sep 21 01:40:16 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Sun, 20 Sep 2015 21:40:16 -0400 Subject: [Rdo-list] [rdo-manager] instack-virt-setup expecting tripleo incubator Message-ID: i had hw issues and had to reinstall everything from scratch on sunday. (still in process) i just came across this bug where instack-virt-setup gave this error "Cannot find $TRIPLEO_ROOT/tripleo-incubator/scripts" google (and slagle) tells me this can be ignored so i did this workaround mkdir -p /home/stack/tripleo-incubator/scripts and i was able to continue with instack-virt-setup -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Mon Sep 21 09:45:15 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 21 Sep 2015 11:45:15 +0200 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: > I think you need epel-repo there. It should be documented that you need to install epel-release package. I wonder which packages were missing in RDO repo, please run on your boxen: # yumdb search from_repo epel Cheers, Alan From mohammed.arafa at gmail.com Mon Sep 21 10:26:25 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 21 Sep 2015 06:26:25 -0400 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: hi 2 points a) in yesterday's run of instack-virt-setup - it complained about needing epel repo, i installed it manually b) using a minimal centos install does not install yumdb. yum is currently in use as i am redoing instack-virt-setup. maybe later i can install it. On Mon, Sep 21, 2015 at 5:45 AM, Alan Pevec wrote: > > I think you need epel-repo there. It should be documented that you need > to install epel-release package. > > I wonder which packages were missing in RDO repo, please run on your boxen: > # yumdb search from_repo epel > > Cheers, > Alan > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Mon Sep 21 10:45:56 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 21 Sep 2015 12:45:56 +0200 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: Message-ID: [stack at rdo ~]$ yum list installed | grep epel epel-release.noarch 7-5 @extras jq.x86_64 1.5-1.el7 @epel oniguruma.x86_64 5.9.5-3.el7 @epel python-pip.noarch 7.1.0-1.el7 @epel On Mon, Sep 21, 2015 at 6:26 AM, Mohammed Arafa wrote: > > hi > > 2 points > a) in yesterday's run of instack-virt-setup - it complained about needing epel repo, i installed it manually > b) using a minimal centos install does not install yumdb. yum is currently in use as i am redoing instack-virt-setup. maybe later i can install it. 
> > On Mon, Sep 21, 2015 at 5:45 AM, Alan Pevec wrote: >> >> > I think you need epel-repo there. It should be documented that you need to install epel-release package. >> >> I wonder which packages were missing in RDO repo, please run on your boxen: >> # yumdb search from_repo epel >> >> Cheers, >> Alan > > > > > -- > > [image] > > [image] > > [image] > > 805010942448935 > > GR750055912MA > > Link to me on LinkedIn -- [image] [image] [image] 805010942448935 GR750055912MA Link to me on LinkedIn From cbrown2 at ocf.co.uk Mon Sep 21 10:52:57 2015 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Mon, 21 Sep 2015 11:52:57 +0100 Subject: [Rdo-list] [rdo-manager] is there a stable version? In-Reply-To: References: , Message-ID: This is my output: blosc-1.6.1-1.el7.x86_64 ccache-3.1.9-3.el7.x86_64 facter-2.4.1-1.el7.x86_64 hiera-1.3.4-1.el7.noarch jq-1.3-2.el7.x86_64 lz4-r131-1.el7.x86_64 puppet-3.6.2-3.el7.noarch pysendfile-2.0.0-5.el7.x86_64 pystache-0.5.3-2.el7.noarch python-Bottleneck-0.7.0-1.el7.x86_64 python-anyjson-0.3.3-3.el7.noarch python-contextlib2-0.4.0-1.el7.noarch python-dogpile-cache-0.5.5-1.el7.noarch python-dogpile-core-0.4.1-2.el7.noarch python-extras-0.0.3-2.el7.noarch python-fasteners-0.9.0-2.el7.noarch python-fixtures-0.3.14-3.el7.noarch python-flask-babel-0.9-2.el7.noarch python-futures-3.0.3-1.el7.noarch python-httplib2-0.7.7-3.el7.noarch python-iso8601-0.1.10-1.el7.noarch python-jsonpath-rw-1.2.3-2.el7.noarch python-jsonschema-2.3.0-1.el7.noarch python-keyring-5.0-1.el7.noarch python-kombu-2.5.16-1.el7.noarch 1:python-lockfile-0.9.1-4.el7.noarch python-mimeparse-0.1.4-1.el7.noarch python-mock-1.0.1-5.el7.noarch python-monotonic-0.1-1.el7.noarch python-msgpack-0.4.6-1.el7.x86_64 python-numexpr-2.3-4.el7.x86_64 python-pandas-0.16.2-1.el7.x86_64 python-paramiko-1.15.1-1.el7.noarch python-paste-deploy-1.5.0-10.el7.noarch python-ply-3.4-4.el7.noarch python-posix_ipc-0.9.8-1.el7.x86_64 python-ptyprocess-0.5-1.el7.noarch python-qpid-0.32-3.el7.x86_64 python-qpid-common-0.32-3.el7.x86_64 python-repoze-lru-0.4-3.el7.noarch python-routes-1.13-2.el7.noarch python-simplegeneric-0.8-7.el7.noarch python-simplejson-3.3.3-1.el7.x86_64 python-singledispatch-3.4.0.2-2.el7.noarch python-speaklater-1.3-1.el7.noarch python-sqlparse-0.1.11-2.el7.noarch python-tables-3.2.0-1.el7.x86_64 python-testtools-1.1.0-1.el7.noarch python-warlock-1.0.1-1.el7.noarch python-websockify-0.6.0-2.el7.noarch python-wrapt-1.10.4-6.el7.x86_64 python-wsme-0.6-2.el7.noarch qpid-cpp-client-0.32-3.el7.x86_64 qpid-proton-c-0.9-3.el7.x86_64 ruby-augeas-0.5.0-1.el7.x86_64 ruby-shadow-1.4.1-23.el7.x86_64 rubygem-rgen-0.6.6-2.el7.noarch ________________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] On Behalf Of Mohammed Arafa [mohammed.arafa at gmail.com] Sent: 21 September 2015 11:45 To: Alan Pevec Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] [rdo-manager] is there a stable version? [stack at rdo ~]$ yum list installed | grep epel epel-release.noarch 7-5 @extras jq.x86_64 1.5-1.el7 @epel oniguruma.x86_64 5.9.5-3.el7 @epel python-pip.noarch 7.1.0-1.el7 @epel On Mon, Sep 21, 2015 at 6:26 AM, Mohammed Arafa wrote: > > hi > > 2 points > a) in yesterday's run of instack-virt-setup - it complained about needing epel repo, i installed it manually > b) using a minimal centos install does not install yumdb. yum is currently in use as i am redoing instack-virt-setup. maybe later i can install it. 
> > On Mon, Sep 21, 2015 at 5:45 AM, Alan Pevec wrote: >> >> > I think you need epel-repo there. It should be documented that you need to install epel-release package. >> >> I wonder which packages were missing in RDO repo, please run on your boxen: >> # yumdb search from_repo epel >> >> Cheers, >> Alan > > > > > -- > > [image] > > [image] > > [image] > > 805010942448935 > > GR750055912MA > > Link to me on LinkedIn -- [image] [image] [image] 805010942448935 GR750055912MA Link to me on LinkedIn _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From bderzhavets at hotmail.com Mon Sep 21 11:14:41 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 21 Sep 2015 07:14:41 -0400 Subject: [Rdo-list] Attempt 3 Node Liberty deployment via http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ In-Reply-To: References: , , Message-ID: Packstack completion OK (ML2&OVS&VXLAN). Network node configuration. 1) Attempt to add internal interface to router works only via Neutron CLI. 2) Attempt to launch Cirros VM on compute node shows up following errors :- info: initramfs: up at 1.18 GROWROOT: CHANGED: partition=1 start=16065 old: size=64260 end=80325 new: size=41913585,end=41929650 info: initramfs loading root from /dev/vda1 info: /etc/init.d/rc.sysinit: up at 2.25 info: container: none Starting logging: OK modprobe: module virtio_blk not found in modules.dep <=== modprobe: module virtio_net not found in modules.dep <=== WARN: /etc/rc3.d/S10-load-modules failed <=== Initializing random number generator... done. Starting acpid: OK cirros-ds 'local' up at 3.06 no results found for mode=local. up 3.23. searched: nocloud configdrive ec2 Starting network... udhcpc (v1.20.1) started Sending discover... Sending discover... Thanks. Boris -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Sep 21 13:51:38 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 21 Sep 2015 09:51:38 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <55FC7C44.7050009@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> <55FC6C44.50500@redhat.com> <55FC7C44.7050009@redhat.com> Message-ID: <56000B6A.4010304@redhat.com> On 09/18/2015 05:04 PM, Perry Myers wrote: >>> What is the minimum amount of RAM you need for the undercloud node? >>> >>> If 4GB per VM, then a) maybe can be done on a 16GB system, while b) >>> needs 32GB >> >> If we allow for "not very useful" as a stated caveat of the all-in-one, >> then we could probably get away with > > I think we need to more clearly define what "not very useful" means. > >>From my limited PoV, useful would be the ability to run 1 or two > Instances just to try out the system end to end. Those Instances could > be very very slimmed down Fedora images or even Cirros images. > Yes, that would be my definition of "minimal required usefulness" - run a couple of instances, and be able to connect to them from "outside". Not running any actual workloads. Related, it would be awesome to have, some day, a Trystack-ish service for experimenting with RDO-Manager. (I know this has been mentioned before.) > However, for someone else useful might mean a whole other host of > things. 
So we should be careful to identify specific personas here, and > map a specific install footprint to that particular persona's view of useful > >> 3GB and swap for both overcloud VMs and 4GB for the undercloud. >> >> It's possible to go lower for the undercloud if you have a lot of swap >> and are patient. It may lead to timeouts/broken-ness, so I wouldn't >> recommend it. > > Ack > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From ayoung at redhat.com Mon Sep 21 13:57:02 2015 From: ayoung at redhat.com (Adam Young) Date: Mon, 21 Sep 2015 09:57:02 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <56000B6A.4010304@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> <55FC6C44.50500@redhat.com> <55FC7C44.7050009@redhat.com> <56000B6A.4010304@redhat.com> Message-ID: <56000CAE.4070200@redhat.com> On 09/21/2015 09:51 AM, Rich Bowen wrote: > > > On 09/18/2015 05:04 PM, Perry Myers wrote: >>>> What is the minimum amount of RAM you need for the undercloud node? >>>> >>>> If 4GB per VM, then a) maybe can be done on a 16GB system, while b) >>>> needs 32GB >>> >>> If we allow for "not very useful" as a stated caveat of the all-in-one, >>> then we could probably get away with >> >> I think we need to more clearly define what "not very useful" means. >> >>> From my limited PoV, useful would be the ability to run 1 or two >> Instances just to try out the system end to end. Those Instances could >> be very very slimmed down Fedora images or even Cirros images. >> > > Yes, that would be my definition of "minimal required usefulness" - > run a couple of instances, and be able to connect to them from > "outside". Not running any actual workloads. > > Related, it would be awesome to have, some day, a Trystack-ish service > for experimenting with RDO-Manager. (I know this has been mentioned > before.) One way that I routinely use packstack is on top of an existing OpenStack instance. I think it would be a very powerful tool if we could run the overcloud install on top of an existing OpenStack instance. We should use the existing openstack deployment as the undercloud, to minimize degrees of nesting of virt. > >> However, for someone else useful might mean a whole other host of >> things. So we should be careful to identify specific personas here, and >> map a specific install footprint to that particular persona's view of >> useful >> >>> 3GB and swap for both overcloud VMs and 4GB for the undercloud. >>> >>> It's possible to go lower for the undercloud if you have a lot of swap >>> and are patient. It may lead to timeouts/broken-ness, so I wouldn't >>> recommend it. >> >> Ack >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > From bderzhavets at hotmail.com Mon Sep 21 14:28:54 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 21 Sep 2015 10:28:54 -0400 Subject: [Rdo-list] Attempt 3 Node Liberty deployment via http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ In-Reply-To: References: , , , , , Message-ID: I've restarted every node, recreated private network. This time router's interface was smoothly created via dashboard. 
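For reference, a sketch of the Neutron CLI path that worked when the dashboard did not; "router1" and "private_subnet" are placeholder names:

$ neutron router-interface-add router1 private_subnet
$ neutron router-port-list router1        # confirm the new interface port shows up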
Getting same errors in CirrOS VM's log, however now VM still obtains the lease and behaves normal afterwards :- info: initramfs: up at 1.20 GROWROOT: CHANGED: partition=1 start=16065 old: size=64260 end=80325 new: size=41913585,end=41929650 info: initramfs loading root from /dev/vda1 info: /etc/init.d/rc.sysinit: up at 1.97 info: container: none Starting logging: OK modprobe: module virtio_blk not found in modules.dep modprobe: module virtio_net not found in modules.dep WARN: /etc/rc3.d/S10-load-modules failed Initializing random number generator... done. Starting acpid: OK cirros-ds 'local' up at 2.78 no results found for mode=local. up 2.96. searched: nocloud configdrive ec2 Starting network... udhcpc (v1.20.1) started Sending discover... Sending select for 40.0.0.12... Lease of 40.0.0.12 obtained, lease time 86400 <== Lease obtained cirros-ds 'net' up at 3.59 checking http://169.254.169.254/2009-04-04/instance-id successful after 1/20 tries: up 3.73. iid=i-0000000a <=== Metadata obtained failed to get http://169.254.169.254/2009-04-04/user-data warning: no ec2 metadata for user-data found datasource (ec2, net) Starting dropbear sshd: generating rsa key... generating dsa key... OK /run/cirros/datasource/data/user-data was not '#!' or executable === system information === Platform: Fedora Project OpenStack Nova Container: none Arch: x86_64 CPU(s): 1 @ 3597.890 MHz Cores/Sockets/Threads: 1/1/1 Virt-type: AMD-V RAM Size: 2003MB Disks: NAME MAJ:MIN SIZE LABEL MOUNTPOINT vda 253:0 21474836480 vda1 253:1 21459755520 cirros-rootfs / === sshd host keys === -----BEGIN SSH HOST KEY KEYS----- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgnw2I6fryPFyB+vAGS6OzLOKOi9CUyBh1ErA4z8luvgeW/crKKIuLAO44fx0flsUYEYsL7m8DvsGESQUH+GMbKO3BoKGPapG7FYfCpuckQPWi5FDxAeccOwk/07Z5eVAqc2qra5+oOmdRiZgbLDrQALnhxxagodCLKOr3jzFKXeGGYs= root at cirrosdev15 ssh-dss AAAAB3NzaC1kc3MAAACBAPkvrucBAFqKluJqQ2JAOUNQ2Y/cVEODgLCC6Z4ApQIdtwLy5M22yHIgJhJ/MEWlomTI004Bx9emfZurw1F0136KA5yOhQ9urGSaH9SCicwB9MXa0NJDcm+7Hu2BiXhzZBjOfYZhv0qHzpBecN7ozLQb8g5fi70m5d2RbeAMEnIHAAAAFQDA1gCpOZ+Q29gMF4sKmEdwn3/6YwAAAIBo2FYA8p9NSkXDy9ljRdWY+TTB0mz9eJpSWAfgSnN0SGytewhiko758m8w8gleT5IayHb0Ci3r0Sg15Y35ZGt6OkdFZPyDSwOYFVzsSzRV0QTr8boyhM1GGPiXNgywQHuXVAiiAGdg9Mqhhkg+PJfP/kwAp00ZosQ5YthJwpMHCwAAAIAjZiz91MwsGHg4k18SnuIANWTTHWfdpI7g1nokOzEfznLYHzURTGWehoqDu3GBOIHfLBSfQcWJb99v5ZwcrBFTnTVpymK3MErLXbn0EgPAPuMxTodrw9F3e7qvlztnR7WrpxV0s7gnlmKeHC7c1KkUAV0c+7/mV3A/Ab5s18xEVw== root at cirrosdev15 -----END SSH HOST KEY KEYS----- === network info === if-info: lo,up,127.0.0.1,8,::1 if-info: eth0,up,40.0.0.12,24,fe80::f816:3eff:fed6:e8eb ip-route:default via 40.0.0.1 dev eth0 ip-route:40.0.0.0/24 dev eth0 src 40.0.0.12 === datasource: ec2 net === instance-id: i-0000000a name: N/A availability-zone: nova local-hostname: cirrosdev15.novalocal launch-index: 0 === cirros: current=0.3.4 latest=0.3.4 uptime=9.05 === ____ ____ ____ / __/ __ ____ ____ / __ \/ __/ / /__ / // __// __// /_/ /\ \ \___//_//_/ /_/ \____/___/ http://cirros-cloud.net login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root. cirrosdev15 login: From: bderzhavets at hotmail.com To: apevec at gmail.com; rbowen at redhat.com Date: Mon, 21 Sep 2015 07:14:41 -0400 CC: rdo-list at redhat.com Subject: [Rdo-list] Attempt 3 Node Liberty deployment via http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ Packstack completion OK (ML2&OVS&VXLAN). Network node configuration. 1) Attempt to add internal interface to router works only via Neutron CLI. 
2) Attempt to launch Cirros VM on compute node shows up following errors :- info: initramfs: up at 1.18 GROWROOT: CHANGED: partition=1 start=16065 old: size=64260 end=80325 new: size=41913585,end=41929650 info: initramfs loading root from /dev/vda1 info: /etc/init.d/rc.sysinit: up at 2.25 info: container: none Starting logging: OK modprobe: module virtio_blk not found in modules.dep <=== modprobe: module virtio_net not found in modules.dep <=== WARN: /etc/rc3.d/S10-load-modules failed <=== Initializing random number generator... done. Starting acpid: OK cirros-ds 'local' up at 3.06 no results found for mode=local. up 3.23. searched: nocloud configdrive ec2 Starting network... udhcpc (v1.20.1) started Sending discover... Sending discover... Thanks. Boris _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Sep 21 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 21 Sep 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150921150003.4AB1A60A4003@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-09-23 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Sep 21 15:21:14 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 21 Sep 2015 11:21:14 -0400 Subject: [Rdo-list] RDO blog roundup, week of September 21 Message-ID: <5600206A.10908@redhat.com> Here's what RDO enthusiasts have been writing about over the past week. If you're writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you're not on my list, please let me know! Red Hat Confirms Speaking Sessions at OpenStack Summit Tokyo by Jeff Jameson As this Fall?s OpenStack Summit in Tokyo approaches, the Foundation has posted the session agenda, outlining the final schedule of events. I am happy to report that Red Hat has 15 sessions that will be included in the weeks agenda, along with a few more as waiting alternates. With the limited space and shortened event this time around, I am please to see that Red Hat continues to remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in. ... read more at http://tm3.org/2g Driving in the Fast Lane: Huge Page support in OpenStack Compute by Steve Gordon In a previous ?Driving in the Fast Lane? blog post we focused on optimization of instance CPU resources. This time around let?s take a dive into the handling of system memory, and more specifically configurable page sizes. We will reuse the environment from the previous post, but add huge page support to our performance flavor. ... read more at http://tm3.org/2h Reviewing Puppet OpenStack patches by Emilien Macchi Reviewing code takes to me 20% of my work. It?s a lot of time, but not too much when you look at OpenStack velocity. To be efficient, you need to understand how review process works and have the right tools in hand. ... 
read more at http://tm3.org/2i Horizon Performance Optimizations by Silver Sky ome notes on Open Stack Horizon Performance optimizations on CentOS 7.1 install: 4 vCPU (2.3 GHz Intel Xeon E5 v3), 2 GB ? 4 GB RAM, SSD backed 40 GB RAW image. ... read more at http://tm3.org/2j Python 3 Status in OpenStack Liberty by Victor Stinner The Python 3 support in OpenStack Liberty made huge visible progress. Blocking libraries have been ported. Six OpenStack applications are now compatible with Python3: Aodh, Ceilometer, Gnocchi, Ironic, Rally and Sahara. Thanks to voting python34 check jobs, the Python 3 support can only increase, Python 2 only code cannot be reintroduced by mistake in tested code. ... read more at http://tm3.org/2k Analyzing the performance of Red Hat Enterprise Linux OpenStack Platform using Rally by Roger Lopez and Joe Talerico In our recent blog post, we?ve discussed the steps involved in determining the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform environment. To recap, we?ve recommended the following: ... read more at http://tm3.org/2l -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From pgsousa at gmail.com Mon Sep 21 16:02:58 2015 From: pgsousa at gmail.com (Pedro Sousa) Date: Mon, 21 Sep 2015 17:02:58 +0100 Subject: [Rdo-list] error ssh to overcloud nodes Message-ID: Hi all, I've deployed overcloud, however I cannot login to my nodes, I get this error: [*stack at instack ~]$ openstack overcloud deploy --templates* *Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates* */home/stack/.ssh/known_hosts updated.* *Original contents retained as /home/stack/.ssh/known_hosts.old* *PKI initialization in init-keystone is deprecated and will be removed.* *Warning: Permanently added '192.168.21.16' (ECDSA) to the list of known hosts.* *Permission denied (publickey,gssapi-keyex,gssapi-with-mic).* *ERROR: openstack Command '['ssh', '-oStrictHostKeyChecking=no', '-t', '-l', 'heat-admin', u'192.168.21.16', 'sudo', 'keystone-manage', 'pki_setup', '--keystone-user', "$(getent passwd | grep '^keystone' | cut -d: -f1)", '--keystone-group', "$(getent group | grep '^keystone' | cut -d: -f1)"]' returned non-zero exit status 255* Can anyone help me solve this issue? Thanks, Pedro Sousa -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Sep 21 18:03:11 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 21 Sep 2015 14:03:11 -0400 Subject: [Rdo-list] More transparency in RDO infra/process In-Reply-To: <55FB45D1.2090902@redhat.com> References: <55FB45D1.2090902@redhat.com> Message-ID: <5600465F.4030201@redhat.com> I wanted to share a portion of a conversation with folks here, and ask for some help. To summarize, it's confusing to people who are not part of the process (and often for people who are) where all of the pieces of the RDO process live, what they do, how they talk to one another, and so on. Because of how RDO has evolved, and interacted with other parties (Fedora, CentOS, upstream OpenStack, Trystack, and now the rpm-packaging upstream project, and probably other things I'm forgetting) it can feel, sometimes, that there's some magic involved. I'd like to bring more clarity to this with some docs, some diagrams, and whatever else we need to do, both to bring more transparency to the process, and to help people who are trying to get involved, find their way. I could definitely use your help in making that happen. 
--Rich -------- Forwarded Message -------- On 09/18/2015 05:49 AM, Rich Bowen wrote: > > >>>> * One of the criticisms against RDO at the moment is the lack of real >>>> transparency and confusion as to the infrastructure and systems that >>>> power it. > > > I'm curious where you're seeing this criticism, as I haven't seen it at > all. I'll take this as an action item to thoroughly document the > infrastructure and systems that power RDO, but I was completely unaware > that this was a concern. > > > It's been mentioned to me by operators that I've spoken to at Meetups and the like. It's more of a casual observation from users who are more interested in contributing and being a part of RDO. If you look at Fedora as an example, all the infrastructure is exposed, they have things like bohdi and koji that allow you to directly see and "touch" the entire process used to ship Fedora. You can very easily follow the process code goes through the whole system to get out the door (and the documentation of how it all works is very good). While all the pieces of RDO are open (I think), there is no unified documentation that shows you a birds eye view of how it all works. We have tools like delorean and rdopkg which aim to make things easier for people (and they do), and you often follow the documentation to do what you need to do, without understanding why. How does code upstream get into RDO? Following the packging docs it looks like it touches github, fedora, delorean, and other pieces. Things like Delorean and the CI parts often have no links off the main rdo website (or if they do, they are buried in a page somewhere). We have pieces running on OS1, we have pieces running on Trystack, we have pieces living inside fedorapeople (?), we have pieces in Centos/CBS, I'm still not entirely sure where the main site itself is running so if anyone wants to actually try and understand the infrastructure (and in particular, what infrastructure impacts RDO) it's a mystery. Even something as simple as this https://apps.fedoraproject.org/ To have links to all the parts of the RDO that exist currently, would be a massive help. Note Rich this is not a slight against yourself and the fantastic work you have done for RDO, I think this is more because the RDO project has had to grow, change and adapt so quickly, everything is moving at breakneck speed so it's hard for people not involved to keep up. From hguemar at fedoraproject.org Mon Sep 21 19:55:32 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 21 Sep 2015 21:55:32 +0200 Subject: [Rdo-list] More transparency in RDO infra/process In-Reply-To: <5600465F.4030201@redhat.com> References: <55FB45D1.2090902@redhat.com> <5600465F.4030201@redhat.com> Message-ID: 2015-09-21 20:03 GMT+02:00 Rich Bowen : > I wanted to share a portion of a conversation with folks here, and ask for > some help. > > To summarize, it's confusing to people who are not part of the process (and > often for people who are) where all of the pieces of the RDO process live, > what they do, how they talk to one another, and so on. Because of how RDO > has evolved, and interacted with other parties (Fedora, CentOS, upstream > OpenStack, Trystack, and now the rpm-packaging upstream project, and > probably other things I'm forgetting) it can feel, sometimes, that there's > some magic involved. 
> > I'd like to bring more clarity to this with some docs, some diagrams, and > whatever else we need to do, both to bring more transparency to the process, > and to help people who are trying to get involved, find their way. I could > definitely use your help in making that happen. > > --Rich > Thanks Rich for bringing that topic and what follows will be my own PoV, not my employer's. First, until recently RDO had little to no external contributors, making opening the infrastructure and process a lower priority against making RDO a truly usable distro. Now, that RDO reached some maturity, openness is now on the top of the queue, a dedicated team (RDO Engineering) has been creating early 2015 to work full-time on RDO. RDO Engineering is one of the few team with Perry Myers organization that has *no* internal meeting/call, everything we do is in the *open*. Most of our decisions are discussed in this list or during our packaging meetings. I'd like to thank Alan (RDO Eng. team lead), Alvaro (RDO Eng, manager) and Perry (dept manager) to enforce that. And you Rich to ensure that RDO community grows healthily. Sorry for disclosing corporate secrets: any internal discussion concerning RDO ends up with someone reminding us that *NO, DON'T DO THAT, BRING THE DISCUSSION ON RDO LIST OR RDO PUBLIC CHANNEL!!!!* --- RDO Engineering has done a tremendous effort to open our infrastructure, our main build system is CentOS CBS (primarily maintained by alphacc from CERN!), anyone can submit packaging changes through gerrithub. All our tooling has been opened (delorean, rdopkg etc.), our trello is open, our bug tracker too. We're starting to see a RDO Team that is a superset of RDO Eng. made by non-redhatters, and redhatters from different teams or dept. At my level, I spend a significant amount of my time to mentor RDO contributors (and regardless of their affiliation). One of my goals is that most of the governance and effective stewardship of RDO belongs to the RDO Community becomes possible. We could create a governance body made up of non-redhatters but if they can't manage the infrastructure or enforce their decisions, it would be shallow. ----- Is that enough? the answer is NO We need to define our processes, so that people won't get confused and lower as much as possible the entry barrier. Our current infrastructure is still a patchwork: * builders are scattered between Fedora Koji, CBS, copr * no single accounting/access control system * lack of automation * CI has yet to be opened. => That makes it to hard to grant access to external contributors as we don't have control over our infrastructure. And we can't grant finer-grained accesses, which does not help in granting more people access to our infrastructure. Especially when CI and more generally infrastructure is not reliable and we're missing a lot of safety checks. We've been working into consolidating our infrastructure (Cf. the fedora thread), and have a dedicated person to work full-time on RDO CI so we could open it. it'll get better. As our platform will be stabilizing, we'll be able to welcome more contributors and grant them more access. Even myself don't have access to everything and sometimes, it's frustrating not being able to fix trivial things. As someone who saw Fedora's evolution over the years, and who co-founded CentOS Cloud SIG, it takes time to build a truly open community. Currently RDO is not that much different from early Fedora Core. 
We have yet to build our infra and define our processes but we're on the path to be a fully open community. Our priority is to stabilize our foundations (including infrastructure), strive to lower the contribution barrier, keep the decision process transparent and open. Regards, H. PS: I like having my team meeting on a public irc channel, and that I don't need connecting corporate VPN to do my main job, that my team is not limited to Red Hat folks, don't take that away from me! > > -------- Forwarded Message -------- > On 09/18/2015 05:49 AM, Rich Bowen wrote: >> >> >> >>>>> * One of the criticisms against RDO at the moment is the lack of real >>>>> transparency and confusion as to the infrastructure and systems that >>>>> power it. >> >> >> >> I'm curious where you're seeing this criticism, as I haven't seen it at >> all. I'll take this as an action item to thoroughly document the >> infrastructure and systems that power RDO, but I was completely unaware >> that this was a concern. >> >> >> > > It's been mentioned to me by operators that I've spoken to at Meetups > and the like. It's more of a casual observation from users who are more > interested in contributing and being a part of RDO. > > If you look at Fedora as an example, all the infrastructure is exposed, > they have things like bohdi and koji that allow you to directly see and > "touch" the entire process used to ship Fedora. You can very easily > follow the process code goes through the whole system to get out the > door (and the documentation of how it all works is very good). > > While all the pieces of RDO are open (I think), there is no unified > documentation that shows you a birds eye view of how it all works. We > have tools like delorean and rdopkg which aim to make things easier for > people (and they do), and you often follow the documentation to do what > you need to do, without understanding why. How does code upstream get > into RDO? Following the packging docs it looks like it touches github, > fedora, delorean, and other pieces. > > Things like Delorean and the CI parts often have no links off the main > rdo website (or if they do, they are buried in a page somewhere). We > have pieces running on OS1, we have pieces running on Trystack, we have > pieces living inside fedorapeople (?), we have pieces in Centos/CBS, I'm > still not entirely sure where the main site itself is running so if > anyone wants to actually try and understand the infrastructure (and in > particular, what infrastructure impacts RDO) it's a mystery. > > Even something as simple as this > > https://apps.fedoraproject.org/ > > To have links to all the parts of the RDO that exist currently, would be > a massive help. > > > > Note Rich this is not a slight against yourself and the fantastic work > you have done for RDO, I think this is more because the RDO project has > had to grow, change and adapt so quickly, everything is moving at > breakneck speed so it's hard for people not involved to keep up. 
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From rbowen at redhat.com Mon Sep 21 20:47:57 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 21 Sep 2015 16:47:57 -0400
Subject: [Rdo-list] OpenStack meetups, week of September 21
Message-ID: <56006CFD.3060209@redhat.com>

The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/Events

If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.

--Rich

* Mon Sep 21 in Durham, NC, US: Joint Meetup: Triangle OpenStack & Triangle Bluemix - http://www.meetup.com/Triangle-OpenStack-Meetup/events/225367959/
* Mon Sep 21 in Guadalajara, MX: OpenStack Development Process - Monty Taylor - http://www.meetup.com/OpenStack-GDL/events/224087006/
* Mon Sep 21 in San Jose, CA, US: Come learn about OpenStack Operations from Platform9 - http://www.meetup.com/Silicon-Valley-OpenStack-Ops-Meetup/events/225127012/
* Tue Sep 22 in Beijing, CN: ???? ?????????OpenStack??? ??????????????? - http://www.meetup.com/China-OpenStack-User-Group/events/224963576/
* Tue Sep 22 in Washington, DC, US: Bi-modal IT & OpenStack (#26) - http://www.meetup.com/OpenStackDC/events/224954102/
* Wed Sep 23 in Helsinki, FI: OpenStackFin and DevOps User Groups' meetup - http://www.meetup.com/OpenStack-Finland-User-Group/events/225087536/
* Wed Sep 23 in Budapest, HU: OpenStack 2015 September - http://www.meetup.com/OpenStack-Hungary-Meetup-Group/events/225198785/
* Wed Sep 23 in Orlando, FL, US: OpenStack Central Florida is a go! - http://www.meetup.com/Orlando-Central-Florida-OpenStack-Meetup/events/224917186/
* Thu Sep 24 in Herriman, UT, US: How Adobe Built an OpenStack Cloud - http://www.meetup.com/openstack-utah/events/224939158/
* Thu Sep 24 in Prague, CZ: OpenStack Howto part 6 - Storage/Telemetry - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/222955580/
* Thu Sep 24 in Köln, DE: OpenStack mit Kubernetes, Ceph & Docker Apps - http://www.meetup.com/OpenStack-X/events/219719012/
* Thu Sep 24 in Littleton, CO, US: Discuss and Learn about OpenStack - http://www.meetup.com/OpenStack-Denver/events/224948248/
* Thu Sep 24 in Chesterfield, MO, US: OpenStack DNS as a Service - Designate - http://www.meetup.com/OpenStack-STL/events/225111289/
* Thu Sep 24 in Athens, GR: Deploying OpenStack with Ansible - http://www.meetup.com/Athens-OpenStack-User-Group/events/225038590/
* Thu Sep 24 in Pasadena, CA, US: Elastic L4-L7 Services for OpenStack. The September OpenStack L.A. Meetup - http://www.meetup.com/OpenStack-LA/events/225213471/
* Thu Sep 24 in Prague, CZ: WeAreCloud 2015 - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/225061891/

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From rbowen at redhat.com Tue Sep 22 16:25:50 2015
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 22 Sep 2015 12:25:50 -0400
Subject: [Rdo-list] REMINDER: RDO Liberty test day tomorrow
Message-ID: <5601810E.9010008@redhat.com>

Don't forget - we'll be doing a Liberty test day September 23rd and 24th.
Details are here:
http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/

You can update that page, and the associated test scenarios page, at
https://github.com/redhat-openstack/website

Thanks, and "see" you tomorrow.

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://rdoproject.org/

From whayutin at redhat.com Tue Sep 22 16:59:13 2015
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 22 Sep 2015 12:59:13 -0400
Subject: [Rdo-list] [CI] rdo-manager liberty install undercloud fails w/ rpm dep error
Message-ID:

This is a known issue.
https://bugzilla.redhat.com/show_bug.cgi?id=1265334
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marius at remote-lab.net Tue Sep 22 18:14:33 2015
From: marius at remote-lab.net (Marius Cornea)
Date: Tue, 22 Sep 2015 20:14:33 +0200
Subject: [Rdo-list] Undercloud fails to install
Message-ID:

Hi everyone,

The undercloud failed to install with the following message[1]. I followed the upstream docs and skipped the "Enable latest RDO Trunk Delorean repository" part. It looks like the openstack-tempest package is missing. Did I miss something, or is there any workaround for this?

Thanks,
Marius

[1] http://paste.openstack.org/show/473585/

From jslagle at redhat.com Tue Sep 22 23:50:18 2015
From: jslagle at redhat.com (James Slagle)
Date: Tue, 22 Sep 2015 19:50:18 -0400
Subject: [Rdo-list] Undercloud fails to install
In-Reply-To:
References:
Message-ID: <20150922235018.GF4230@localhost.localdomain>

On Tue, Sep 22, 2015 at 08:14:33PM +0200, Marius Cornea wrote:
> Hi everyone,
>
> The undercloud failed to install with the following message[1]. I followed
> the upstream docs and skipped the "Enable latest RDO Trunk Delorean
> repository" part. It looks like the openstack-tempest package is missing.
> Did I miss something, or is there any workaround for this?

I submitted a patch for this yesterday, and it merged today:

https://review.openstack.org/#/c/225942/

>
> Thanks,
> Marius
>
> [1] http://paste.openstack.org/show/473585/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
-- James Slagle
--

From ggillies at redhat.com Wed Sep 23 07:06:35 2015
From: ggillies at redhat.com (Graeme Gillies)
Date: Wed, 23 Sep 2015 17:06:35 +1000
Subject: [Rdo-list] REMINDER: RDO Liberty test day tomorrow
In-Reply-To: <5601810E.9010008@redhat.com>
References: <5601810E.9010008@redhat.com>
Message-ID: <56024F7B.1080107@redhat.com>

On 09/23/2015 02:25 AM, Rich Bowen wrote:
> Don't forget - we'll be doing a Liberty test day September 23rd and
> 24th. Details are here:
> http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/
>
> You can update that page, and the associated test scenarios page, at
> https://github.com/redhat-openstack/website
>
> Thanks, and "see" you tomorrow.
>

Hi,

So I've documented my testing of using RDO Manager with Liberty today at

https://etherpad.openstack.org/p/IFOhksP55K

I've encountered a few problems (some of which I think are serious problems worthy of a BZ; for example, there is an issue there that I think breaks all node introspection due to malformed ipxe templates).

I wasn't able to get a deployment going 100% successfully; I'm now hitting an issue with os-net-config not adding my devices to my ovs bond (it might be a bug in os-net-config or openvswitch), but I will continue to try tomorrow.
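(For reference, a minimal os-net-config bond layout of the kind involved here might look like the sketch below. This is only a sketch: the bridge, bond and interface names (br-ex, bond1, nic2, nic3) and the bond options are placeholder assumptions, not the actual templates used in this test.

network_config:
  - type: ovs_bridge
    name: br-ex                  # OVS bridge that should end up carrying the bond
    use_dhcp: true
    members:
      - type: ovs_bond
        name: bond1              # bond that os-net-config is expected to create on the bridge
        ovs_options: "bond_mode=active-backup"
        members:
          - type: interface
            name: nic2           # physical NICs expected to be enslaved to the bond
            primary: true
          - type: interface
            name: nic3

os-net-config consumes a mapping like this as its config file and should create the bridge and the bond and attach both interfaces; the failure described above is that the interfaces never end up in the bond.)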
Overall it's actually quite promising. I think once I have this last issue ironed out I should be able to do successful deployments.

Regards,

Graeme

--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From ggillies at redhat.com Wed Sep 23 07:08:16 2015
From: ggillies at redhat.com (Graeme Gillies)
Date: Wed, 23 Sep 2015 17:08:16 +1000
Subject: [Rdo-list] REMINDER: RDO Liberty test day tomorrow
In-Reply-To: <56024F7B.1080107@redhat.com>
References: <5601810E.9010008@redhat.com> <56024F7B.1080107@redhat.com>
Message-ID: <56024FE0.8040402@redhat.com>

On 09/23/2015 05:06 PM, Graeme Gillies wrote:
> On 09/23/2015 02:25 AM, Rich Bowen wrote:
>> Don't forget - we'll be doing a Liberty test day September 23rd and
>> 24th. Details are here:
>> http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/
>>
>> You can update that page, and the associated test scenarios page, at
>> https://github.com/redhat-openstack/website
>>
>> Thanks, and "see" you tomorrow.
>>
>
> Hi,
>
> So I've documented my testing of using RDO Manager with Liberty today at
>
> https://etherpad.openstack.org/p/IFOhksP55K
>
> I've encountered a few problems (some of which I think are serious
> problems worthy of a BZ; for example, there is an issue there that I think
> breaks all node introspection due to malformed ipxe templates).
>
> I wasn't able to get a deployment going 100% successfully; I'm now
> hitting an issue with os-net-config not adding my devices to my ovs bond
> (it might be a bug in os-net-config or openvswitch), but I will continue to
> try tomorrow.
>
> Overall it's actually quite promising. I think once I have this last
> issue ironed out I should be able to do successful deployments.
>
> Regards,
>
> Graeme
>

One last thing I forgot to mention: is updating from undercloud/overcloud Kilo to undercloud/overcloud Liberty supposed to be a supported test scenario? If so, I think there will be a number of issues with instack-install-undercloud not being completely idempotent that will cause things to fail.

Regards,

Graeme

--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From chzhang8 at qq.com Wed Sep 23 02:56:11 2015
From: chzhang8 at qq.com (=?gb18030?B?1cWz2w==?=)
Date: Wed, 23 Sep 2015 10:56:11 +0800
Subject: [Rdo-list] How to install openstack(openstack-Juno) on CentOS6.6
Message-ID:

hello:

Could you help me: is there an instruction or quickstart for installing OpenStack (e.g. Juno) on CentOS 6.6? When I used RDO to install it, I met the problems below:

many thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 64D0FC6E at F78C3952.CB140256.jpg
Type: image/jpeg
Size: 139357 bytes
Desc: not available
URL:

From Kevin.Fox at pnnl.gov Wed Sep 23 12:33:45 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 23 Sep 2015 12:33:45 +0000
Subject: [Rdo-list] How to install openstack(openstack-Juno) on CentOS6.6
In-Reply-To:
References:
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C9435@EX10MBOX06.pnnl.gov>

I'd recommend against using CentOS 6 for new deployments, as it's no longer supported moving forward. I've upgraded a CentOS 6 cloud from 6 to 7 to get it to Kilo, but it's not something you want to do if you don't have to.

Thanks,
Kevin
________________________________
From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of ??
[chzhang8 at qq.com] Sent: Tuesday, September 22, 2015 7:56 PM To: rdo-list Subject: [Rdo-list] How to install openstack(openstack-Juno) on CentOS6.6 hello: Could you help me that is there an instruction or quickstart about instaling openstack (Juno.eg) on CentOS6.6 when I use rdo installing I met probemls as bellow: [cid:64D0FC6E at F78C3952.CB140256.jpg] many thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 64D0FC6E at F78C3952.CB140256.jpg Type: image/jpeg Size: 139357 bytes Desc: 64D0FC6E at F78C3952.CB140256.jpg URL: From mohammed.arafa at gmail.com Wed Sep 23 12:10:17 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 23 Sep 2015 08:10:17 -0400 Subject: [Rdo-list] How to install openstack(openstack-Juno) on CentOS6.6 In-Reply-To: References: Message-ID: Hi You already have the package installed. you can move on to the next step of the documentation On Tue, Sep 22, 2015 at 10:56 PM, ?? wrote: > hello: > > Could you help me that is there an instruction or quickstart about > instaling openstack (Juno.eg) on CentOS6.6 > > when I use rdo installing I met probemls as bellow: > > > > > many thanks > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 64D0FC6E at F78C3952.CB140256.jpg Type: image/jpeg Size: 139357 bytes Desc: not available URL: From chkumar246 at gmail.com Wed Sep 23 14:05:38 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 23 Sep 2015 19:35:38 +0530 Subject: [Rdo-list] bug statistics for 2015-09-23 Message-ID: # RDO Bugs on 2015-09-23 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 265 - Fixed (MODIFIED, POST, ON_QA): 174 ## Number of open bugs by component diskimage-builder [ 4] +++ distribution [ 12] ++++++++++ dnsmasq [ 1] instack [ 4] +++ instack-undercloud [ 24] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 6] +++++ openstack-cinder [ 14] ++++++++++++ openstack-foreman-inst... [ 3] ++ openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 1] openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] ++++++ openstack-neutron [ 6] +++++ openstack-nova [ 17] ++++++++++++++ openstack-packstack [ 46] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] +++++++++ openstack-selinux [ 12] ++++++++++ openstack-swift [ 2] + openstack-tripleo [ 24] ++++++++++++++++++++ openstack-tripleo-heat... [ 5] ++++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 3] ++ openvswitch [ 1] Package Review [ 1] python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] ++++ python-oslo-config [ 1] rdo-manager [ 23] ++++++++++++++++++++ rdo-manager-cli [ 6] +++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. 
(265 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (12 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-09-18 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1264072 ] http://bugzilla.redhat.com/1264072 (NEW) Component: distribution Last change: 2015-09-22 Summary: app-catalog-ui new package ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 (NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] 
http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (24 bugs) [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. 
(4048492 < 4G)" [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (6 bugs) [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field [1263839 ] http://bugzilla.redhat.com/1263839 (NEW) Component: openstack-ceilometer Last change: 2015-09-16 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The /etc/sudoers.d/ceilometer have incorrect permissions [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library ### openstack-cinder (14 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: 
openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1265690 ] http://bugzilla.redhat.com/1265690 (NEW) Component: openstack-cinder Last change: 2015-09-23 Summary: Failed to create cinder volume using Delorean Trunk packages [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to 
ERROR state and is not deleted ### openstack-horizon (1 bug) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class ### openstack-neutron (6 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". 
[1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks ### openstack-nova (17 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though its disabled. [1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesnt write to log [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages ### openstack-packstack (46 bugs) [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any 
tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. 
[1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. 
Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-21 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list ### openstack-puppet-modules (11 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. 
[1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm ### openstack-selinux (12 bugs) [1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC ### openstack-swift (2 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 
2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (24 bugs) [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Node registration fails silently if instackenv.json is badly formatted [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited with non-zero status code: 6 [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: 
Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials ### openstack-tripleo-heat-templates (5 bugs) [1232015 ] http://bugzilla.redhat.com/1232015 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: instack-undercloud: one controller deployment: running "pcs status" - Error: cluster is not currently running on this node [1235508 ] http://bugzilla.redhat.com/1235508 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-25 Summary: Package update does not take puppet managed packages into account [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state ### openstack-utils (3 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 
2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (1 bug) [1243550 ] http://bugzilla.redhat.com/1243550 (ASSIGNED) Component: Package Review Last change: 2015-09-22 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming ### python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-09-17 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (23 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat 
deployment-show to make puppet output readable [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(174 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (5 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (1 bug) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder ### openstack-glance (3 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: missing dependency in Heat delorean build [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 
2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (13 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. ### openstack-packstack (59 bugs) [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] 
http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface ### openstack-puppet-modules (18 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp does not ensure iptables-services package installation [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: 
Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance ### openstack-selinux (12 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" 
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on 
python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-09-23 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (5 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: 
rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason ### rdo-manager-cli (8 bugs) [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Wed Sep 23 15:54:09 2015 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 23 Sep 2015 11:54:09 -0400 (EDT) Subject: [Rdo-list] [meeting] RDO packaging meeting (2015-09-23) In-Reply-To: <649510706.56284816.1443023616346.JavaMail.zimbra@redhat.com> Message-ID: <1369716763.56286867.1443023649206.JavaMail.zimbra@redhat.com> ======================================== #rdo: RDO Packaging Meeting (2015-09-23) ======================================== Meeting started by jpena at 15:02:23 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-09-23/rdo.2015-09-23-15.02.log.html . 
Meeting summary --------------- * Roll Call (jpena, 15:02:32) * Packages needs version bump (jpena, 15:05:22) * LINK: https://etherpad.openstack.org/p/RDO-Packaging (number80, 15:05:47) * LINK: http://lists.openstack.org/pipermail/openstack-dev/2015-September/075077.html (chandankumar, 15:06:51) * LINK: https://trello.com/c/VPTFAP4o/72-delorean-stable-liberty (chandankumar, 15:10:03) * ACTION: apevec to send delorean patch for upstream branch fallback to "master" (apevec, 15:13:57) * Test day - September 23/24 (jpena, 15:15:17) * LINK: http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ (jpena, 15:16:27) * LINK: https://etherpad.openstack.org/p/rdo_test_day_sep_2015 (jpena, 15:17:30) * LINK: https://etherpad.openstack.org/p/xLg7ufNIem (jschlueter, 15:24:21) * LINK: https://github.com/redhat-openstack/website/issues (rbowen, 15:33:28) * New Package scm request / approved (jpena, 15:38:10) * open floor (jpena, 15:43:52) * chair for next meeting (chandankumar, 15:46:02) * LINK: http://stream1.gifsoup.com/webroot/animatedgifs4/1158287_o.gif (jpena, 15:49:17) * ACTION: jpena to chair the next meeting. :) (chandankumar, 15:49:46) Meeting ended at 15:51:10 UTC. Action Items ------------ * apevec to send delorean patch for upstream branch fallback to "master" * jpena to chair the next meeting. :) Action Items, by person ----------------------- * apevec * apevec to send delorean patch for upstream branch fallback to "master" * jpena * jpena to chair the next meeting. :) * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * apevec (44) * jpena (38) * _diana_ (22) * number80 (21) * rbowen (20) * chandankumar (19) * zodbot (10) * jschlueter (10) * garrett (8) * jruzicka (7) * dmsimard (6) * eggmaster (6) * cdent (3) * trown (2) * eharney (2) * social (2) * panbalag (1) * pmyers (1) * weshay (1) * dustins (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot Thanks, Javier From tom at buskey.name Wed Sep 23 20:27:54 2015 From: tom at buskey.name (Tom Buskey) Date: Wed, 23 Sep 2015 16:27:54 -0400 Subject: [Rdo-list] RDO-Manager "quickstart" In-Reply-To: <56000CAE.4070200@redhat.com> References: <55FC678C.3020604@redhat.com> <55FC6AD3.8080300@redhat.com> <55FC6C44.50500@redhat.com> <55FC7C44.7050009@redhat.com> <56000B6A.4010304@redhat.com> <56000CAE.4070200@redhat.com> Message-ID: I've been using packstack to build an allinone controller and then add compute nodes. I do some modification of cinder backend/size, nova networking, openvswitch, neutron, vlans and interface NICs. I modify settings after packstack so I exclude previously configured nodes from being modified when I add a new compute node. We develop an application that uses the openstack APIs and it is very useful to be stand up an allinone and a compute node the same way every time. It's more important for us to have multiple clouds than multiple nodes. Having 2 nodes means we can do some internode work that won't be on an allinone. We can also use this setup for a customer POC to see our software run w/o spending lots on a real cloud. Some will only purchase on physical system so it's important that we don't require a 2nd system just to standup the cloud. Running off a DVD with no internet access in a customer lab means they see our software run. I am interested in KVM on KVM. It would let me have more cloud controllers :-) I've been able to stand up a slow allinone in virtualbox or vmware with 1 GB RAM, 1 CPU and 20 GB. I haven't been able to get a compute node to go with it. 
Probably because connecting the compute NICs on layer 2 isn't done quite right in the hypervisor. On Mon, Sep 21, 2015 at 9:57 AM, Adam Young wrote: > On 09/21/2015 09:51 AM, Rich Bowen wrote: > >> >> >> On 09/18/2015 05:04 PM, Perry Myers wrote: >> >>> What is the minimum amount of RAM you need for the undercloud node? >>>>> >>>>> If 4GB per VM, then a) maybe can be done on a 16GB system, while b) >>>>> needs 32GB >>>>> >>>> >>>> If we allow for "not very useful" as a stated caveat of the all-in-one, >>>> then we could probably get away with >>>> >>> >>> I think we need to more clearly define what "not very useful" means. >>> >>> From my limited PoV, useful would be the ability to run 1 or two >>>> >>> Instances just to try out the system end to end. Those Instances could >>> be very very slimmed down Fedora images or even Cirros images. >>> >>> >> Yes, that would be my definition of "minimal required usefulness" - run a >> couple of instances, and be able to connect to them from "outside". Not >> running any actual workloads. >> >> Related, it would be awesome to have, some day, a Trystack-ish service >> for experimenting with RDO-Manager. (I know this has been mentioned before.) >> > > One way that I routinely use packstack is on top of an existing OpenStack > instance. > > I think it would be a very powerful tool if we could run the overcloud > install on top of an existing OpenStack instance. We should use the > existing openstack deployment as the undercloud, to minimize degrees of > nesting of virt. > > > > > > > >> However, for someone else useful might mean a whole other host of >>> things. So we should be careful to identify specific personas here, and >>> map a specific install footprint to that particular persona's view of >>> useful >>> >>> 3GB and swap for both overcloud VMs and 4GB for the undercloud. >>>> >>>> It's possible to go lower for the undercloud if you have a lot of swap >>>> and are patient. It may lead to timeouts/broken-ness, so I wouldn't >>>> recommend it. >>>> >>> >>> Ack >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >>> >> >> > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Sep 23 21:03:32 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 23 Sep 2015 17:03:32 -0400 Subject: [Rdo-list] RDO Community Meetup at OpenStack Summit Message-ID: <560313A4.8000708@redhat.com> Because we were unable to get one of the official BoF rooms at OpenStack Summit, in Tokyo, the Ceph Community Meetup has graciously offered us the use of their room during the lunch our on Wednesday. That's 12:45 - 14:00, Wednesday, October 28th, in the Sakura Tower. Exact room name/number to com very soon. 
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From limao at cisco.com Thu Sep 24 01:27:01 2015 From: limao at cisco.com (Liping Mao (limao)) Date: Thu, 24 Sep 2015 01:27:01 +0000 Subject: [Rdo-list] How to install openstack(openstack-Juno) on CentOS6.6 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C9435@EX10MBOX06.pnnl.gov> References: <1A3C52DFCD06494D8528644858247BF01B7C9435@EX10MBOX06.pnnl.gov> Message-ID: +1, if you have not started to build the environment yet, I think CentOS 7 will be a better start. Regards, Liping Mao From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Fox, Kevin M Sent: 2015-9-23 20:34 To: ??; rdo-list Subject: Re: [Rdo-list] How to install openstack(openstack-Juno) on CentOS6.6 I'd recommend against using CentOS6 for new deployments as it's no longer supported moving forward. I've upgraded a CentOS6 cloud from 6 to 7 to get it to Kilo, but it's not something you want to do if you don't have to. Thanks, Kevin ________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of ?? [chzhang8 at qq.com] Sent: Tuesday, September 22, 2015 7:56 PM To: rdo-list Subject: [Rdo-list] How to install openstack(openstack-Juno) on CentOS6.6 hello: Could you help me: is there an instruction or quickstart about installing OpenStack (e.g. Juno) on CentOS 6.6? When I used the RDO installer I met problems as below: [cid:image001.jpg at 01D0F6AB.26A2AC20] many thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 139357 bytes Desc: image001.jpg URL: From jweber at cofront.net Thu Sep 24 03:21:11 2015 From: jweber at cofront.net (Jeff Weber) Date: Wed, 23 Sep 2015 23:21:11 -0400 Subject: [Rdo-list] python-cheetah and python markdown missing for rhel-7 on RDO-Kilo Message-ID: When installing RDO against rhel-7 the packages python-cheetah and python-markdown are missing. On my CentOS installs these are available in extras, but on rhel-7 I have to get them from epel. Should I expect to need to use epel for RDO if I'm using rhel-7? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Thu Sep 24 06:41:58 2015 From: ggillies at redhat.com (Graeme Gillies) Date: Thu, 24 Sep 2015 16:41:58 +1000 Subject: [Rdo-list] REMINDER: RDO Liberty test day tomorrow In-Reply-To: <56024F7B.1080107@redhat.com> References: <5601810E.9010008@redhat.com> <56024F7B.1080107@redhat.com> Message-ID: <56039B36.60001@redhat.com> On 09/23/2015 05:06 PM, Graeme Gillies wrote: > On 09/23/2015 02:25 AM, Rich Bowen wrote: >> Don't forget - we'll be doing a Liberty test day September 23rd and >> 24th. Details are here: >> http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/ >> >> You can update that page, and the associated test scenarios page, at >> https://github.com/redhat-openstack/website >> >> Thanks, and "see" you tomorrow. >> > > Hi, > > So I've documented my testing of using RDO Manager with liberty today at > > https://etherpad.openstack.org/p/IFOhksP55K > > I've encountered a few problems (some of which I think are serious > problems worthy of a BZ, for example there is an issue there I think > breaks all node introspection due to malformed ipxe templates).
> > I wasn't able to get a deployment going 100% successfully, I'm now > hitting an issue with os-net-config not adding my devices to my ovs bond > (might be a bug in os-net-config or openvswitch), but I will continue to > try tomorrow. > > Overall it's actually quite promising, I think once I have this last > issue ironed out I think I should be able to do successful deployments. > > Regards, > > Graeme > Hi, I continued Liberty testing with RDO Manager today, and while it took a lot of effort to work through some bugs with neutron and force disk-image-builder to build liberty images (not kilo, which it sadly is hard coded to do), I was able to deploy a Liberty cloud with 1 controller, and 1 compute deployed successfully. Regards, Graeme -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From javier.pena at redhat.com Thu Sep 24 11:42:42 2015 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 24 Sep 2015 07:42:42 -0400 (EDT) Subject: [Rdo-list] python-cheetah and python markdown missing for rhel-7 on RDO-Kilo In-Reply-To: References: Message-ID: <1638490347.56855585.1443094962333.JavaMail.zimbra@redhat.com> ----- Original Message ----- > When installing RDO against rhel-7 the packages python-cheetah and > python-markdown are missing. On my CentOS installs these are available in > extras, but on rhel-7 I have to get them from epel. Should I expect to need > to use epel for RDO if I'm using rhel-7? Hi Jeff, These packages are available from the RH Common repository in RHEL7 (rhel-7-server-rh-common-rpms), there should be no need to use EPEL. Regards, Javier > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Thu Sep 24 13:24:27 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 24 Sep 2015 09:24:27 -0400 Subject: [Rdo-list] beta.rdoproject.org - remember to update redirects.yaml Message-ID: <5603F98B.5050003@redhat.com> First of all, many thanks to the people that have been sending pull requests for the beta.rdoproject.org website. I wanted to remind you that if you relocate a page (say, moving it from uncategorized/ into docs/ or whatever) be sure to check the source/redirects.yaml file to see if you need to update the file location in there. The purpose of that file is to preserve our existing URL structure from the old site when we move to the new site, so that we don't lose all of our search engine karma overnight. Thanks! --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Fri Sep 25 14:09:01 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 25 Sep 2015 10:09:01 -0400 Subject: [Rdo-list] RDO test day Message-ID: <5605557D.7070708@redhat.com> Thanks so much to everyone that participate in the test day. A few notes: There were copious notes taken at https://etherpad.openstack.org/p/rdo_test_day_sep_2015 and we need to capture this information on the website where appropriate. A reminder that you can fork the website on github at https://github.com/redhat-openstack/website and send in pull requests. Once you've forked the website, if you run ./run-server.sh in that top-level directory you'll have a local instance of the website where you can test your changes before sending them in. 
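For anyone trying the local preview for the first time, the workflow looks roughly like this (a minimal sketch; the fork URL is a placeholder for your own GitHub fork, and, as the follow-up note below points out, setup.sh needs to be run once before run-server.sh):

  $ git clone https://github.com/<your-username>/website.git   # your fork of redhat-openstack/website
  $ cd website
  $ ./setup.sh        # run once to prepare the local toolchain
  $ ./run-server.sh   # serves a local instance of the site

Check your changes in a browser against that local instance, then send the pull request against https://github.com/redhat-openstack/website.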
It looks like we had 17 bugs created during the test day - http://tm3.org/test-day-liberty-bugs If you're sitting on any other bugs, please don't forget to open tickets before you forget the details! Finally, with the Liberty release coming on October 15th, we are planning a follow-up test day on Oct 12-13, to do a final run-through before release day. Please save the date, and we'll have more details coming very soon. Thanks again! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From rbowen at redhat.com Fri Sep 25 14:19:54 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 25 Sep 2015 10:19:54 -0400 Subject: [Rdo-list] RDO test day In-Reply-To: <5605557D.7070708@redhat.com> References: <5605557D.7070708@redhat.com> Message-ID: <5605580A.3060908@redhat.com> On 09/25/2015 10:09 AM, Rich Bowen wrote: > A reminder that you can fork the website on github at > https://github.com/redhat-openstack/website and send in pull requests. > Once you've forked the website, if you run ./run-server.sh in that > top-level directory you'll have a local instance of the website where > you can test your changes before sending them in. You need to run setup.sh first -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/ From marius at remote-lab.net Sat Sep 26 10:43:15 2015 From: marius at remote-lab.net (Marius Cornea) Date: Sat, 26 Sep 2015 12:43:15 +0200 Subject: [Rdo-list] Undercloud installation fails Message-ID: Hi, Undercloud installation is failing for me with the following error, using the last known good RDO Trunk Delorean repository: http://paste.openstack.org/show/474109/ Where should I check/file bugs about this? Launchpad or BZ? Thanks, Marius From mohammed.arafa at gmail.com Mon Sep 28 01:40:45 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 28 Sep 2015 03:40:45 +0200 Subject: [Rdo-list] [rdo-manager] undercloud baremetal hanging Message-ID: hi i am trying to install rdo-manager on baremetal. i am still on the undercloud stage. and it looks like it is hanging. it was stuck for hours on this screen Error: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]: Failed to call refresh: Command exceeded timeout Error: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]: Command exceeded timeout Wrapped exception: execution expired Notice: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]/ensure: ensure changed 'stopped' to 'running' Notice: /Stage[main]/Nova::Vncproxy/Nova::Generic_service[vncproxy]/Service[nova-vncproxy]/ensure: ensure changed 'stopped' to 'running' Notice: /Stage[main]/Nova::Consoleauth/Nova::Generic_service[consoleauth]/Service[nova-consoleauth]/ensure: ensure changed 'stopped' to 'running' Notice: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]/ensure: ensure changed 'stopped' to 'running' Notice: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]/ensure: ensure changed 'stopped' to 'running' Notice: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]/ensure: ensure changed 'stopped' to 'running' so i hit ctrl-c and ran the undercloud installer again. and now it is stuck on this screen: Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false. 
(at /usr/share/ruby/vendor_ruby/puppet/type.rb:816:in `set_default') Warning: ([keystone_user]: The tenant parameter is deprecated and will be removed in the future. Please use keystone_user_role to assign a user to a project. Warning: ([keystone_user]: The ignore_default_tenant parameter is deprecated and will be removed in the future. Notice: /File[/var/run/swift]/seltype: seltype changed 'swift_var_run_t' to 'var_run_t' Warning: Unexpected line: Devices: id region zone ip address port replication ip replication port name weight partitions balance meta Warning: Unexpected line: Devices: id region zone ip address port replication ip replication port name weight partitions balance meta Warning: Unexpected line: Devices: id region zone ip address port replication ip replication port name weight partitions balance meta Notice: /Stage[main]/Main/File[/etc/keystone/ssl/private/signing_key.pem]/content: content changed '{md5}e583c18075542eca7133b710338cbeae' to '{md5}edb6096a47aba508e758c2ec312e1841' Notice: /Stage[main]/Main/File[/etc/keystone/ssl/certs/ca.pem]/content: content changed '{md5}fb2df484822237fae34b8a8314d79bca' to '{md5}c12d1749101079b0f2599b75ab9d45b1' Notice: /Stage[main]/Main/File[/etc/keystone/ssl/certs/signing_cert.pem]/content: content changed '{md5}7752f1a96b2e630106a2a2a850bc8a62' to '{md5}b6aaa55f61e9788e9248f912a284a0ed' Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/owner: owner changed 'root' to 'swift' Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/group: group changed 'root' to 'swift' Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/owner: owner changed 'root' to 'swift' Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/group: group changed 'root' to 'swift' Notice: /Stage[main]/Swift::Proxy/Concat::Fragment[swift_proxy]/File[/var/lib/puppet/concat/_etc_swift_proxy-server.conf/fragments/00_swift_proxy]/content: content changed '{md5}9fe6fc8fc87431620ec0f5aed49c7395' to '{md5}7eadb01d1322a2e7e67fde3e2fb1ba58' Notice: /File[/var/cache/swift]/seltype: seltype changed 'swift_var_cache_t' to 'var_t' Notice: /Stage[main]/Swift::Proxy/Concat[/etc/swift/proxy-server.conf]/Exec[concat_/etc/swift/proxy-server.conf]/returns: executed successfully Notice: /Stage[main]/Swift::Proxy/Concat[/etc/swift/proxy-server.conf]/Exec[concat_/etc/swift/proxy-server.conf]: Triggered 'refresh' from 1 events Notice: /Stage[main]/Swift::Proxy/Concat[/etc/swift/proxy-server.conf]/File[/etc/swift/proxy-server.conf]/content: content changed '{md5}1a33a986084bb7123429f99abedbd43f' to '{md5}7342cc81152c646351d8c641a37f7b78' Notice: /Stage[main]/Swift::Proxy/Service[swift-proxy]/ensure: ensure changed 'stopped' to 'running' Notice: /Stage[main]/Keystone::Service/Service[keystone]: Triggered 'refresh' from 3 events any help to massage it forward would be very nice Thank you -- *805010942448935* * * *GR750055912MA* *Link to me on LinkedIn * From dtantsur at redhat.com Mon Sep 28 07:30:40 2015 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 28 Sep 2015 09:30:40 +0200 Subject: [Rdo-list] Undercloud installation fails In-Reply-To: References: Message-ID: <5608ECA0.8080907@redhat.com> On 09/26/2015 12:43 PM, Marius Cornea wrote: > Hi, > > Undercloud installation is failing for me with the following error, > using the last known good RDO Trunk Delorean 
repository: > > http://paste.openstack.org/show/474109/ > > Where should I check/file bugs about this? Launchpad or BZ? I think upstream bugs go to https://bugs.launchpad.net/tripleo/ now. Adding John, as there was something about discoverd in the error log. > > Thanks, > Marius > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com >

From hguemar at fedoraproject.org Mon Sep 28 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 28 Sep 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150928150003.669B060A4003@fedocal02.phx2.fedoraproject.org>

Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-09-30 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From rbowen at redhat.com Mon Sep 28 17:02:16 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 28 Sep 2015 13:02:16 -0400 Subject: [Rdo-list] RDO blog roundup, week of September 28 Message-ID: <56097298.8030402@redhat.com>

Here's what RDO enthusiasts have been writing about over the past week. If you're writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you're not on my list, please let me know!

OpenContrail on the controller side by Sylvain Afchain In my previous post I explained how packets are forwarded from point to point within OpenContrail. We saw the tools available to check which routes are involved in the forwarding. Last time we focused on the agent side, but now we are going to look at another key component: the controller. ... read more at http://tm3.org/2m

Highly available virtual machines in RHEL OpenStack Platform 7 by Steve Gordon OpenStack provides scale and redundancy at the infrastructure layer to provide high availability for applications built for operation in a horizontally scaling cloud computing environment. It has been designed for applications that are "designed for failure" and voluntarily excluded features that would enable traditional enterprise applications, in fear of limiting its scalability and corrupting its initial goals. These traditional enterprise applications demand continuous operation, and fast, automatic recovery in the event of an infrastructure level failure. While an increasing number of enterprises look to OpenStack as providing the infrastructure platform for their forward-looking applications, they are also looking to simplify operations by consolidating their legacy application workloads on it as well. ... read more at http://tm3.org/2n

Keystone Unit Tests by Adam Young Running the Keystone unit tests takes a long time. To start with a blank slate, you want to make sure you have the latest from master and a clean git repository. ... read more at http://tm3.org/2o

Hints and tips from the CERN OpenStack cloud team by Tim Bell Having reported that EPT has a negative influence on the High Energy Physics standard benchmark HepSpec06, we have started the deployment of those settings across the CERN OpenStack cloud, ... read more at http://tm3.org/2p

Ossipee by Adam Young OpenStack is a big distributed system. FreeIPA is designed for security in distributed systems.
In order to develop and test each of them, separately or together, I need a distributed system. Virtualization has been a key technology for making this kind of work possible. OpenStack is great for managing virtualization. Added to that are the benefits found when one can "fly our own airplanes." Thus, I am using OpenStack to develop OpenStack. ... read more at http://tm3.org/2q

-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://rdoproject.org/

From bderzhavets at hotmail.com Tue Sep 29 13:24:48 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 29 Sep 2015 09:24:48 -0400 Subject: [Rdo-list] What happens to DVR on RDO Kilo ? In-Reply-To: <56006CFD.3060209@redhat.com> References: <56006CFD.3060209@redhat.com> Message-ID:

This is a DVR setup on RDO Kilo, (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN), per http://www.linux.com/community/blogs/133-general-linux/850749-rdo-juno-dvr-deployment-controllernetworkcomputecompute-ml2aovsavxlan-on-centos-71 However, to keep the VXLAN tunnels alive I was forced to set l2population, enable_distributed_routing and arp_responder to False on each node.

[root at ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2population = False
enable_distributed_routing = False
arp_responder = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

So, I was expecting all traffic to go via br-ex on the Controller, 192.169.142.127. However, as admin I then created a distributed router belonging to demo, and as the demo tenant I attached the public and private networks to that router. I created a VM (F22) and assigned a floating IP to the VM.

On Compute Node 192.169.142.147 two namespaces appeared:

[root at ip-192-169-142-147 ~]# ip netns
fip-a33f05a2-f6ca-4dd9-ba84-1b9e2f3f256d
qrouter-1ca178ff-2ee1-4f40-a65a-fbc590dc18a4

Then I logged into the Fedora VM via the floating IP and started a download of the CentOS-7-x86_64-Everything-1503-01.iso (7.5 GB) image via wget.

[root at vf22devs01 ~]# wget http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso
--2015-09-29 11:22:28-- http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso
Resolving centos-mirror.rbc.ru (centos-mirror.rbc.ru)... 80.68.250.218
Connecting to centos-mirror.rbc.ru (centos-mirror.rbc.ru)|80.68.250.218|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7591690240 (7.1G) [application/octet-stream]
Saving to: 'CentOS-7-x86_64-Everything-1503-01.iso'
CentOS-7-x86_64-Everything-1 100%[==============================================>] 7.07G 1.71MB/s in 71m 49s
2015-09-29 12:34:16 (1.68 MB/s) - 'CentOS-7-x86_64-Everything-1503-01.iso' saved [7591690240/7591690240]

The protocol below clearly shows that the traffic was coming via br-ex on the Compute Node. It looks like, provided the Compute nodes are running the required services and ml2_conf.ini and metadata_agent.ini are replicated as required, and the admin creates the router with:

# source keystonerc_admin
# neutron router-create RouterDemo --distributed True --tenant-id demo-tenants-id

then DVR will work with l2population, arp_responder and enable_distributed_routing set to False across the whole landscape.
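For anyone reproducing this, a quick way to double-check that the router really came up as distributed and that the DVR namespaces exist is something like the following (a rough sketch from this test bed, so exact output will differ elsewhere):

# source keystonerc_admin
# neutron router-show RouterDemo | grep distributed
# neutron l3-agent-list-hosting-router RouterDemo

The first command should report distributed | True, the second lists the L3 agents hosting the router, and running "ip netns" on the compute node should show the fip-... and qrouter-... namespaces quoted above. The interface counters captured on both nodes during the download follow below.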
*********************************** Unchanging picture on Controller *********************************** [root at ip-192-169-142-127 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.127 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::44e1:5aff:fe40:8244 prefixlen 64 scopeid 0x20 ether 46:e1:5a:40:82:44 txqueuelen 0 (Ethernet) RX packets 2599455 bytes 1150042934 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2564510 bytes 1643558459 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:feeb:df21 prefixlen 64 scopeid 0x20 ether 52:54:00:eb:df:21 txqueuelen 1000 (Ethernet) RX packets 2639669 bytes 1154683451 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2598079 bytes 1647031397 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ****************************************************************** Download running via br-ex on Compute Node (192.169.142.147):- ****************************************************************** [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 384115 bytes 257195666 (245.2 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 765238 bytes 338726146 (323.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3264206 bytes 4290640925 (3.9 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1819329 bytes 410834141 (391.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 385282 bytes 257337071 (245.4 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 766473 bytes 338883813 (323.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3435000 bytes 4529492380 (4.2 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1876798 bytes 414801100 (395.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387008 bytes 257539547 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768296 bytes 339107545 (323.3 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3691373 bytes 4888357764 (4.5 GiB) <====== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1961572 bytes 420645258 (401.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 
fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387525 bytes 257602938 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768848 bytes 339178423 (323.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3767710 bytes 4995080257 (4.6 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1987149 bytes 422411726 (402.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 389681 bytes 257850620 (245.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 771110 bytes 339453335 (323.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4089304 bytes 5445503015 (5.0 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2093163 bytes 429718230 (409.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 393357 bytes 258277943 (246.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 775035 bytes 340005436 (324.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4588907 bytes 6145182030 (5.7 GiB) <===== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2256743 bytes 441092293 (420.6 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 395350 bytes 258511452 (246.5 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 777148 bytes 340266919 (324.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4888982 bytes 6565013893 (6.1 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2353216 bytes 447751576 (427.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 397729 bytes 258782156 (246.7 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 779703 bytes 340646185 (324.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5199268 bytes 
6998765919 (6.5 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2454158 bytes 454799344 (433.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 402423 bytes 259332767 (247.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 784670 bytes 341253943 (325.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5900362 bytes 7979888974 (7.4 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2684043 bytes 470658266 (448.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Thanks Boris -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Tue Sep 29 16:45:11 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 29 Sep 2015 12:45:11 -0400 Subject: [Rdo-list] What happens to DVR on RDO Kilo ? In-Reply-To: References: <56006CFD.3060209@redhat.com>, Message-ID: `Iftop -i eth0` running on Controller vs `iftop -i eth0` running on Compute node https://bderzhavets.wordpress.com/2015/09/29/iftop-i-eth0-running-on-controller-vs-iftop-i-eth0-running-on-compute/ From: bderzhavets at hotmail.com To: rdo-list at redhat.com Date: Tue, 29 Sep 2015 09:24:48 -0400 Subject: [Rdo-list] What happens to DVR on RDO Kilo ? DVR setup on RDO Kilo (Controller/Network)+Compute+compute (ML2&OVS&VXLAN Per http://www.linux.com/community/blogs/133-general-linux/850749-rdo-juno-dvr-deployment-controllernetworkcomputecompute-ml2aovsavxlan-on-centos-71 However, to have VXLAN tunnels alive I was forced to set on each node l2population,enable_distributed_routing,arp_responder to False. [root at ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$ [ovs] enable_tunneling = True integration_bridge = br-int tunnel_bridge = br-tun local_ip =10.0.0.147 bridge_mappings =physnet1:br-ex [agent] polling_interval = 2 tunnel_types =vxlan vxlan_udp_port =4789 l2population = False enable_distributed_routing = False arp_responder = False [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver So, I was expecting all traffic to go via br-ex on Controller 192.169.142.127. However, when I created as admin distributed router belongs to demo. As demo tenant attached public and private networks to router. Created VM (F22) and assigned floating IP no VM. On Compute Node 192.169.142.147 appeared two namespaces :- [root at ip-192-169-142-147 ~]# ip netns fip-a33f05a2-f6ca-4dd9-ba84-1b9e2f3f256d qrouter-1ca178ff-2ee1-4f40-a65a-fbc590dc18a4 Then I logged into Fedora VM via floating IP and started download of CentOS-7-x86_64-Everything-1503-01.iso (7.5 GB) image via wget. [root at vf22devs01 ~]# wget http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso --2015-09-29 11:22:28-- http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso Resolving centos-mirror.rbc.ru (centos-mirror.rbc.ru)... 80.68.250.218 Connecting to centos-mirror.rbc.ru (centos-mirror.rbc.ru)|80.68.250.218|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 7591690240 (7.1G) [application/octet-stream] Saving to: ?CentOS-7-x86_64-Everything-1503-01.iso? CentOS-7-x86_64-Everything-1 100%[==============================================>] 7.07G 1.71MB/s in 71m 49ss 2015-09-29 12:34:16 (1.68 MB/s) - ?CentOS-7-x86_64-Everything-1503-01.iso? saved [7591690240/7591690240] Protocol bellow clearly shows that traffic was coming via br-ex on Compute Node Looks like if Compute nodes are running required services, ml2_conf.ini and metadata_agent.ini are replicated as required and admin creates:- # source keystonerc_admin # neutron router-create RouterDemo --distributed True --tenant-id demo-tenants-id DVR will work with l2population, arp_responder, enable_distributed_routing set to False across all landscape. *********************************** Unchanging picture on Controller *********************************** [root at ip-192-169-142-127 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.127 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::44e1:5aff:fe40:8244 prefixlen 64 scopeid 0x20 ether 46:e1:5a:40:82:44 txqueuelen 0 (Ethernet) RX packets 2599455 bytes 1150042934 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2564510 bytes 1643558459 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:feeb:df21 prefixlen 64 scopeid 0x20 ether 52:54:00:eb:df:21 txqueuelen 1000 (Ethernet) RX packets 2639669 bytes 1154683451 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2598079 bytes 1647031397 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ****************************************************************** Download running via br-ex on Compute Node (192.169.142.147):- ****************************************************************** [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 384115 bytes 257195666 (245.2 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 765238 bytes 338726146 (323.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3264206 bytes 4290640925 (3.9 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1819329 bytes 410834141 (391.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 385282 bytes 257337071 (245.4 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 766473 bytes 338883813 (323.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3435000 bytes 4529492380 (4.2 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1876798 bytes 414801100 (395.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 
fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387008 bytes 257539547 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768296 bytes 339107545 (323.3 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3691373 bytes 4888357764 (4.5 GiB) <====== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1961572 bytes 420645258 (401.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387525 bytes 257602938 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768848 bytes 339178423 (323.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3767710 bytes 4995080257 (4.6 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1987149 bytes 422411726 (402.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 389681 bytes 257850620 (245.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 771110 bytes 339453335 (323.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4089304 bytes 5445503015 (5.0 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2093163 bytes 429718230 (409.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 393357 bytes 258277943 (246.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 775035 bytes 340005436 (324.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4588907 bytes 6145182030 (5.7 GiB) <===== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2256743 bytes 441092293 (420.6 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 395350 bytes 258511452 (246.5 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 777148 bytes 340266919 (324.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4888982 bytes 
6565013893 (6.1 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2353216 bytes 447751576 (427.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 397729 bytes 258782156 (246.7 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 779703 bytes 340646185 (324.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5199268 bytes 6998765919 (6.5 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2454158 bytes 454799344 (433.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 402423 bytes 259332767 (247.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 784670 bytes 341253943 (325.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5900362 bytes 7979888974 (7.4 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2684043 bytes 470658266 (448.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Thanks Boris _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Tue Sep 29 23:34:59 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 30 Sep 2015 01:34:59 +0200 Subject: [Rdo-list] [rdo-manager] physical undercloud errors Message-ID: hello all last week, i was able to build rdo-manager on a vm after much struggles and finally help from this list this week i have a similar struggle with building rdo-manager on a physical machine. i am using the docs from here: http://docs.openstack.org/developer/tripleo-docs/installation/installing.html . i understand the bugs in the documentation have been fixed and yet i am getting these errors with dib-parts-run Notice: /Stage[main]/Keystone::Service/Service[keystone]: Triggered 'refresh' from 3 events Notice: Finished catalog run in 114.95 seconds + rc=6 + set -e + echo 'puppet apply exited with exit code 6' puppet apply exited with exit code 6 + '[' 6 '!=' 2 -a 6 '!=' 0 ']' + exit 6 [2015-09-29 14:07:17,856] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 6] [2015-09-29 14:07:17,857] (os-refresh-config) [ERROR] Aborting... 
Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install _run_orc(instack_env) File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc _run_live_command(args, instack_env, 'os-refresh-config') File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command raise RuntimeError('%s failed. See log for details.' % name) RuntimeError: os-refresh-config failed. See log for details. Command 'instack-install-undercloud' returned non-zero exit status 1 the os-apply-config.log show this [2015/09/29 02:04:37 PM] [INFO] success [2015/09/29 02:04:37 PM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json [2015/09/29 02:04:37 PM] [WARNING] DEPRECATED: falling back to /var/run/os-collect-config/os_config_files.json [root at rdo log]# ll /var/run/os-collect-config/os_config_files.json ls: cannot access /var/run/os-collect-config/os_config_files.json: No such file or directory any help would be great. as mentioned before i am doing a POC and it definitely has gone over schedule thank you -- *805010942448935* * * *GR750055912MA* *Link to me on LinkedIn * From amuller at redhat.com Wed Sep 30 01:37:39 2015 From: amuller at redhat.com (Assaf Muller) Date: Tue, 29 Sep 2015 21:37:39 -0400 Subject: [Rdo-list] What happens to DVR on RDO Kilo ? In-Reply-To: References: <56006CFD.3060209@redhat.com> Message-ID: I'm not sure what you're asking. Everything you described seems like it's working as intended. Is anything not working, any specific questions? By the way, check out my series of blog posts on DVR here, it may help: http://assafmuller.com/2015/04/15/distributed-virtual-routing-overview-and-eastwest-routing/ On Tue, Sep 29, 2015 at 9:24 AM, Boris Derzhavets wrote: > > > DVR setup on RDO Kilo (Controller/Network)+Compute+compute (ML2&OVS&VXLAN > Per > http://www.linux.com/community/blogs/133-general-linux/850749-rdo-juno-dvr-deployment-controllernetworkcomputecompute-ml2aovsavxlan-on-centos-71 > > However, to have VXLAN tunnels alive I was forced to set on each node > l2population,enable_distributed_routing,arp_responder to False. > > [root at ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep > -v ^#| grep -v ^$ > [ovs] > enable_tunneling = True > integration_bridge = br-int > tunnel_bridge = br-tun > local_ip =10.0.0.147 > bridge_mappings =physnet1:br-ex > [agent] > polling_interval = 2 > tunnel_types =vxlan > vxlan_udp_port =4789 > l2population = False > enable_distributed_routing = False > arp_responder = False > [securitygroup] > firewall_driver = > neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > > So, I was expecting all traffic to go via br-ex on Controller > 192.169.142.127. > However, when I created as admin distributed router belongs to demo. > As demo tenant attached public and private networks to router. > Created VM (F22) and assigned floating IP no VM. > > On Compute Node 192.169.142.147 appeared two namespaces :- > > [root at ip-192-169-142-147 ~]# ip netns > fip-a33f05a2-f6ca-4dd9-ba84-1b9e2f3f256d > qrouter-1ca178ff-2ee1-4f40-a65a-fbc590dc18a4 > > Then I logged into Fedora VM via floating IP and started download of > CentOS-7-x86_64-Everything-1503-01.iso (7.5 GB) image via wget. 
> > [root at vf22devs01 ~]# wget > http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso > --2015-09-29 11:22:28-- > http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso > Resolving centos-mirror.rbc.ru (centos-mirror.rbc.ru)... 80.68.250.218 > Connecting to centos-mirror.rbc.ru (centos-mirror.rbc.ru)|80.68.250.218|:80... > connected. > HTTP request sent, awaiting response... 200 OK > Length: 7591690240 (7.1G) [application/octet-stream] > Saving to: ?CentOS-7-x86_64-Everything-1503-01.iso? > > CentOS-7-x86_64-Everything-1 > 100%[==============================================>] 7.07G 1.71MB/s > in 71m 49ss > > 2015-09-29 12:34:16 (1.68 MB/s) - ?CentOS-7-x86_64-Everything-1503-01.iso? > saved [7591690240/7591690240] > > > Protocol bellow clearly shows that traffic was coming via br-ex on Compute > Node > Looks like if Compute nodes are running required services, ml2_conf.ini > and metadata_agent.ini > are replicated as required and admin creates:- > > # source keystonerc_admin > # neutron router-create RouterDemo --distributed True --tenant-id > demo-tenants-id > > DVR will work with l2population, arp_responder, > enable_distributed_routing set to False > across all landscape. > > > *********************************** > Unchanging picture on Controller > *********************************** > [root at ip-192-169-142-127 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.127 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::44e1:5aff:fe40:8244 prefixlen 64 scopeid 0x20 > ether 46:e1:5a:40:82:44 txqueuelen 0 (Ethernet) > RX packets 2599455 bytes 1150042934 (1.0 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2564510 bytes 1643558459 (1.5 GiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:feeb:df21 prefixlen 64 scopeid 0x20 > ether 52:54:00:eb:df:21 txqueuelen 1000 (Ethernet) > RX packets 2639669 bytes 1154683451 (1.0 GiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2598079 bytes 1647031397 (1.5 GiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > ****************************************************************** > Download running via br-ex on Compute Node (192.169.142.147):- > ****************************************************************** > > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 384115 bytes 257195666 (245.2 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 765238 bytes 338726146 (323.0 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 3264206 bytes 4290640925 (3.9 > GiB) <==== > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 1819329 bytes 410834141 (391.8 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 385282 bytes 257337071 (245.4 MiB) > RX errors 0 dropped 0 overruns 0 
frame 0 > TX packets 766473 bytes 338883813 (323.1 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 3435000 bytes 4529492380 (4.2 > GiB) <==== > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 1876798 bytes 414801100 (395.5 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 387008 bytes 257539547 (245.6 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 768296 bytes 339107545 (323.3 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 3691373 bytes 4888357764 (4.5 > GiB) <====== > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 1961572 bytes 420645258 (401.1 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 387525 bytes 257602938 (245.6 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 768848 bytes 339178423 (323.4 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 3767710 bytes 4995080257 (4.6 > GiB) <======= > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 1987149 bytes 422411726 (402.8 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 389681 bytes 257850620 (245.9 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 771110 bytes 339453335 (323.7 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 4089304 bytes 5445503015 (5.0 GiB) > <======== > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2093163 bytes 429718230 (409.8 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 393357 bytes 258277943 (246.3 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 775035 bytes 340005436 (324.2 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 4588907 bytes 6145182030 (5.7 > 
GiB) <===== > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2256743 bytes 441092293 (420.6 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 395350 bytes 258511452 (246.5 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 777148 bytes 340266919 (324.5 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 4888982 bytes 6565013893 (6.1 GiB) > <======= > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2353216 bytes 447751576 (427.0 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 397729 bytes 258782156 (246.7 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 779703 bytes 340646185 (324.8 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 5199268 bytes 6998765919 (6.5 GiB) > <======= > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2454158 bytes 454799344 (433.7 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > [root at ip-192-169-142-147 ~]# ifconfig | head -16 > br-ex: flags=4163 mtu 1500 > inet 192.169.142.147 netmask 255.255.255.0 broadcast > 192.169.142.255 > inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 > ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) > RX packets 402423 bytes 259332767 (247.3 MiB) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 784670 bytes 341253943 (325.4 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > eth0: flags=4163 mtu 1500 > inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 > ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) > RX packets 5900362 bytes 7979888974 (7.4 > GiB) <======== > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 2684043 bytes 470658266 (448.8 MiB) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > > Thanks > Boris > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed Sep 30 06:58:45 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 30 Sep 2015 02:58:45 -0400 Subject: [Rdo-list] What happens to DVR on RDO Kilo ? 
In-Reply-To: References: <56006CFD.3060209@redhat.com> , Message-ID: The thing surprising me is that DVR does work with ovs_neutron_plugin.ini on Compute Nodes with :- l2population =False enable_distributed_routing = False arp_responder = False Please, see official manual for Kilo :- https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/version-7/red-hat-enterprise-linux-openstack-platform-7-networking-guide/chapter-9-configure-distributed-virtual-routing-dvr In Juno release they should be True on Compute Nodes. I also tested DVR on Liberty (beta) and it works ( until now ) with :- l2population = True enable_distributed_routing = True arp_responder = True in openvswitch_agent.ini on Compute Nodes and Controller. If the "True" values are no longer needed on Kilo, please, confirm. Thank you. Boris. From: amuller at redhat.com Date: Tue, 29 Sep 2015 21:37:39 -0400 Subject: Re: [Rdo-list] What happens to DVR on RDO Kilo ? To: bderzhavets at hotmail.com CC: rdo-list at redhat.com I'm not sure what you're asking. Everything you described seems like it's working as intended.Is anything not working, any specific questions? By the way, check out my series of blog posts on DVR here, it may help:http://assafmuller.com/2015/04/15/distributed-virtual-routing-overview-and-eastwest-routing/ On Tue, Sep 29, 2015 at 9:24 AM, Boris Derzhavets wrote: DVR setup on RDO Kilo (Controller/Network)+Compute+compute (ML2&OVS&VXLAN Per http://www.linux.com/community/blogs/133-general-linux/850749-rdo-juno-dvr-deployment-controllernetworkcomputecompute-ml2aovsavxlan-on-centos-71 However, to have VXLAN tunnels alive I was forced to set on each node l2population,enable_distributed_routing,arp_responder to False. [root at ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$ [ovs] enable_tunneling = True integration_bridge = br-int tunnel_bridge = br-tun local_ip =10.0.0.147 bridge_mappings =physnet1:br-ex [agent] polling_interval = 2 tunnel_types =vxlan vxlan_udp_port =4789 l2population = False enable_distributed_routing = False arp_responder = False [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver So, I was expecting all traffic to go via br-ex on Controller 192.169.142.127. However, when I created as admin distributed router belongs to demo. As demo tenant attached public and private networks to router. Created VM (F22) and assigned floating IP no VM. On Compute Node 192.169.142.147 appeared two namespaces :- [root at ip-192-169-142-147 ~]# ip netns fip-a33f05a2-f6ca-4dd9-ba84-1b9e2f3f256d qrouter-1ca178ff-2ee1-4f40-a65a-fbc590dc18a4 Then I logged into Fedora VM via floating IP and started download of CentOS-7-x86_64-Everything-1503-01.iso (7.5 GB) image via wget. [root at vf22devs01 ~]# wget http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso --2015-09-29 11:22:28-- http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso Resolving centos-mirror.rbc.ru (centos-mirror.rbc.ru)... 80.68.250.218 Connecting to centos-mirror.rbc.ru (centos-mirror.rbc.ru)|80.68.250.218|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 7591690240 (7.1G) [application/octet-stream] Saving to: ?CentOS-7-x86_64-Everything-1503-01.iso? CentOS-7-x86_64-Everything-1 100%[==============================================>] 7.07G 1.71MB/s in 71m 49ss 2015-09-29 12:34:16 (1.68 MB/s) - ?CentOS-7-x86_64-Everything-1503-01.iso? 
saved [7591690240/7591690240] Protocol bellow clearly shows that traffic was coming via br-ex on Compute Node Looks like if Compute nodes are running required services, ml2_conf.ini and metadata_agent.ini are replicated as required and admin creates:- # source keystonerc_admin # neutron router-create RouterDemo --distributed True --tenant-id demo-tenants-id DVR will work with l2population, arp_responder, enable_distributed_routing set to False across all landscape. *********************************** Unchanging picture on Controller *********************************** [root at ip-192-169-142-127 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.127 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::44e1:5aff:fe40:8244 prefixlen 64 scopeid 0x20 ether 46:e1:5a:40:82:44 txqueuelen 0 (Ethernet) RX packets 2599455 bytes 1150042934 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2564510 bytes 1643558459 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:feeb:df21 prefixlen 64 scopeid 0x20 ether 52:54:00:eb:df:21 txqueuelen 1000 (Ethernet) RX packets 2639669 bytes 1154683451 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2598079 bytes 1647031397 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ****************************************************************** Download running via br-ex on Compute Node (192.169.142.147):- ****************************************************************** [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 384115 bytes 257195666 (245.2 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 765238 bytes 338726146 (323.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3264206 bytes 4290640925 (3.9 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1819329 bytes 410834141 (391.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 385282 bytes 257337071 (245.4 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 766473 bytes 338883813 (323.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3435000 bytes 4529492380 (4.2 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1876798 bytes 414801100 (395.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387008 bytes 257539547 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768296 bytes 339107545 (323.3 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 
inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3691373 bytes 4888357764 (4.5 GiB) <====== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1961572 bytes 420645258 (401.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387525 bytes 257602938 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768848 bytes 339178423 (323.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3767710 bytes 4995080257 (4.6 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1987149 bytes 422411726 (402.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 389681 bytes 257850620 (245.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 771110 bytes 339453335 (323.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4089304 bytes 5445503015 (5.0 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2093163 bytes 429718230 (409.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 393357 bytes 258277943 (246.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 775035 bytes 340005436 (324.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4588907 bytes 6145182030 (5.7 GiB) <===== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2256743 bytes 441092293 (420.6 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 395350 bytes 258511452 (246.5 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 777148 bytes 340266919 (324.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4888982 bytes 6565013893 (6.1 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2353216 bytes 447751576 (427.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 
inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 397729 bytes 258782156 (246.7 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 779703 bytes 340646185 (324.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5199268 bytes 6998765919 (6.5 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2454158 bytes 454799344 (433.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 402423 bytes 259332767 (247.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 784670 bytes 341253943 (325.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5900362 bytes 7979888974 (7.4 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2684043 bytes 470658266 (448.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Thanks Boris _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed Sep 30 09:00:56 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 30 Sep 2015 05:00:56 -0400 Subject: [Rdo-list] RE(2): What happens to DVR on RDO Kilo ? In-Reply-To: References: <56006CFD.3060209@redhat.com> , Message-ID: Thanks again for your feedback. I've checked your another post http://assafmuller.com/2014/02/23/ml2-address-population/ Then added entries to ml2_conf.ini on all compute nodes :- [agent] l2_population = True Afterwards updated ovs_neutron_plugin.ini on all nodes with :- l2_population = True enable_distributed_routing = True arp_responder = True Then restarted all nodes and VXLAN tunnels came up as expected. Update done to ml2_conf.ini files seems to be important for neutron-openvswitch-agent. DVR is also working as expected. Boris. From: amuller at redhat.com Date: Tue, 29 Sep 2015 21:37:39 -0400 Subject: Re: [Rdo-list] What happens to DVR on RDO Kilo ? To: bderzhavets at hotmail.com CC: rdo-list at redhat.com I'm not sure what you're asking. Everything you described seems like it's working as intended.Is anything not working, any specific questions? By the way, check out my series of blog posts on DVR here, it may help:http://assafmuller.com/2015/04/15/distributed-virtual-routing-overview-and-eastwest-routing/ On Tue, Sep 29, 2015 at 9:24 AM, Boris Derzhavets wrote: DVR setup on RDO Kilo (Controller/Network)+Compute+compute (ML2&OVS&VXLAN Per http://www.linux.com/community/blogs/133-general-linux/850749-rdo-juno-dvr-deployment-controllernetworkcomputecompute-ml2aovsavxlan-on-centos-71 However, to have VXLAN tunnels alive I was forced to set on each node l2population,enable_distributed_routing,arp_responder to False. 
[root at ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$ [ovs] enable_tunneling = True integration_bridge = br-int tunnel_bridge = br-tun local_ip =10.0.0.147 bridge_mappings =physnet1:br-ex [agent] polling_interval = 2 tunnel_types =vxlan vxlan_udp_port =4789 l2population = False enable_distributed_routing = False arp_responder = False [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver So, I was expecting all traffic to go via br-ex on Controller 192.169.142.127. However, when I created as admin distributed router belongs to demo. As demo tenant attached public and private networks to router. Created VM (F22) and assigned floating IP no VM. On Compute Node 192.169.142.147 appeared two namespaces :- [root at ip-192-169-142-147 ~]# ip netns fip-a33f05a2-f6ca-4dd9-ba84-1b9e2f3f256d qrouter-1ca178ff-2ee1-4f40-a65a-fbc590dc18a4 Then I logged into Fedora VM via floating IP and started download of CentOS-7-x86_64-Everything-1503-01.iso (7.5 GB) image via wget. [root at vf22devs01 ~]# wget http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso --2015-09-29 11:22:28-- http://centos-mirror.rbc.ru/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1503-01.iso Resolving centos-mirror.rbc.ru (centos-mirror.rbc.ru)... 80.68.250.218 Connecting to centos-mirror.rbc.ru (centos-mirror.rbc.ru)|80.68.250.218|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 7591690240 (7.1G) [application/octet-stream] Saving to: ?CentOS-7-x86_64-Everything-1503-01.iso? CentOS-7-x86_64-Everything-1 100%[==============================================>] 7.07G 1.71MB/s in 71m 49ss 2015-09-29 12:34:16 (1.68 MB/s) - ?CentOS-7-x86_64-Everything-1503-01.iso? saved [7591690240/7591690240] Protocol bellow clearly shows that traffic was coming via br-ex on Compute Node Looks like if Compute nodes are running required services, ml2_conf.ini and metadata_agent.ini are replicated as required and admin creates:- # source keystonerc_admin # neutron router-create RouterDemo --distributed True --tenant-id demo-tenants-id DVR will work with l2population, arp_responder, enable_distributed_routing set to False across all landscape. 
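Coming back to the change described at the top of this message, the concrete edits that finally brought the VXLAN tunnels up for me were roughly the following (file locations and service names are from this RDO Kilo test bed and may differ on other installs):

# ml2_conf.ini on the compute nodes
[agent]
l2_population = True

# ovs_neutron_plugin.ini on all nodes (same file quoted above)
[agent]
l2_population = True
enable_distributed_routing = True
arp_responder = True

followed by a restart of neutron-openvswitch-agent (and neutron-l3-agent where it runs) on every node. The counters quoted below are from the original test described earlier in this thread.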
*********************************** Unchanging picture on Controller *********************************** [root at ip-192-169-142-127 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.127 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::44e1:5aff:fe40:8244 prefixlen 64 scopeid 0x20 ether 46:e1:5a:40:82:44 txqueuelen 0 (Ethernet) RX packets 2599455 bytes 1150042934 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2564510 bytes 1643558459 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:feeb:df21 prefixlen 64 scopeid 0x20 ether 52:54:00:eb:df:21 txqueuelen 1000 (Ethernet) RX packets 2639669 bytes 1154683451 (1.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2598079 bytes 1647031397 (1.5 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ****************************************************************** Download running via br-ex on Compute Node (192.169.142.147):- ****************************************************************** [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 384115 bytes 257195666 (245.2 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 765238 bytes 338726146 (323.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3264206 bytes 4290640925 (3.9 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1819329 bytes 410834141 (391.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 385282 bytes 257337071 (245.4 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 766473 bytes 338883813 (323.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3435000 bytes 4529492380 (4.2 GiB) <==== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1876798 bytes 414801100 (395.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387008 bytes 257539547 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768296 bytes 339107545 (323.3 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3691373 bytes 4888357764 (4.5 GiB) <====== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1961572 bytes 420645258 (401.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 
fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 387525 bytes 257602938 (245.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 768848 bytes 339178423 (323.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 3767710 bytes 4995080257 (4.6 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1987149 bytes 422411726 (402.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 389681 bytes 257850620 (245.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 771110 bytes 339453335 (323.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4089304 bytes 5445503015 (5.0 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2093163 bytes 429718230 (409.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 393357 bytes 258277943 (246.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 775035 bytes 340005436 (324.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4588907 bytes 6145182030 (5.7 GiB) <===== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2256743 bytes 441092293 (420.6 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 395350 bytes 258511452 (246.5 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 777148 bytes 340266919 (324.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 4888982 bytes 6565013893 (6.1 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2353216 bytes 447751576 (427.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 397729 bytes 258782156 (246.7 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 779703 bytes 340646185 (324.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5199268 bytes 
6998765919 (6.5 GiB) <======= RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2454158 bytes 454799344 (433.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root at ip-192-169-142-147 ~]# ifconfig | head -16 br-ex: flags=4163 mtu 1500 inet 192.169.142.147 netmask 255.255.255.0 broadcast 192.169.142.255 inet6 fe80::38a0:86ff:fec6:8a4d prefixlen 64 scopeid 0x20 ether 3a:a0:86:c6:8a:4d txqueuelen 0 (Ethernet) RX packets 402423 bytes 259332767 (247.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 784670 bytes 341253943 (325.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163 mtu 1500 inet6 fe80::5054:ff:fe3f:a45d prefixlen 64 scopeid 0x20 ether 52:54:00:3f:a4:5d txqueuelen 1000 (Ethernet) RX packets 5900362 bytes 7979888974 (7.4 GiB) <======== RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2684043 bytes 470658266 (448.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Thanks Boris _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Wed Sep 30 08:34:32 2015 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 30 Sep 2015 14:04:32 +0530 Subject: [Rdo-list] Bug statistics for 2015-09-30 Message-ID: # RDO Bugs on 2015-09-30 This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to: ## Summary - Open (NEW, ASSIGNED, ON_DEV): 276 - Fixed (MODIFIED, POST, ON_QA): 176 ## Number of open bugs by component diskimage-builder [ 4] +++ distribution [ 13] ++++++++++ dnsmasq [ 1] instack [ 4] +++ instack-undercloud [ 26] ++++++++++++++++++++ iproute [ 1] openstack-ceilometer [ 11] ++++++++ openstack-cinder [ 13] ++++++++++ openstack-foreman-inst... [ 3] ++ openstack-glance [ 2] + openstack-heat [ 3] ++ openstack-horizon [ 1] openstack-ironic [ 1] openstack-ironic-disco... [ 2] + openstack-keystone [ 7] +++++ openstack-neutron [ 6] ++++ openstack-nova [ 17] +++++++++++++ openstack-packstack [ 50] ++++++++++++++++++++++++++++++++++++++++ openstack-puppet-modules [ 11] ++++++++ openstack-selinux [ 12] +++++++++ openstack-swift [ 2] + openstack-tripleo [ 24] +++++++++++++++++++ openstack-tripleo-heat... [ 5] ++++ openstack-tripleo-imag... [ 2] + openstack-trove [ 1] openstack-tuskar [ 3] ++ openstack-utils [ 3] ++ openvswitch [ 1] Package Review [ 1] python-glanceclient [ 2] + python-keystonemiddleware [ 1] python-neutronclient [ 2] + python-novaclient [ 1] python-openstackclient [ 5] ++++ python-oslo-config [ 1] rdo-manager [ 23] ++++++++++++++++++ rdo-manager-cli [ 6] ++++ rdopkg [ 1] RFEs [ 3] ++ tempest [ 1] ## Open bugs This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. 
(276 bugs) ### diskimage-builder (4 bugs) [1210465 ] http://bugzilla.redhat.com/1210465 (NEW) Component: diskimage-builder Last change: 2015-04-09 Summary: instack-build-images fails when building CentOS7 due to EPEL version change [1235685 ] http://bugzilla.redhat.com/1235685 (NEW) Component: diskimage-builder Last change: 2015-07-01 Summary: DIB fails on not finding sos [1233210 ] http://bugzilla.redhat.com/1233210 (NEW) Component: diskimage-builder Last change: 2015-06-18 Summary: Image building fails silently [1265598 ] http://bugzilla.redhat.com/1265598 (NEW) Component: diskimage-builder Last change: 2015-09-23 Summary: rdo-manager liberty dib fails on python-pecan version ### distribution (13 bugs) [1176509 ] http://bugzilla.redhat.com/1176509 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] text of uninitialized deployment needs rewording [1219890 ] http://bugzilla.redhat.com/1219890 (ASSIGNED) Component: distribution Last change: 2015-06-09 Summary: Unable to launch an instance [1116011 ] http://bugzilla.redhat.com/1116011 (NEW) Component: distribution Last change: 2015-06-04 Summary: RDO: Packages needed to support AMQP1.0 [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2015-09-24 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1266923 ] http://bugzilla.redhat.com/1266923 (NEW) Component: distribution Last change: 2015-09-28 Summary: RDO's hdf5 rpm/yum dependencies conflicts [1063474 ] http://bugzilla.redhat.com/1063474 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: python-backports: /usr/lib/python2.6/site- packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site- packages/backports/__init__.pyc, but /usr/lib/python2.6 /site-packages is being added to sys.path [1176506 ] http://bugzilla.redhat.com/1176506 (NEW) Component: distribution Last change: 2015-06-04 Summary: [TripleO] Provisioning Images filter doesn't work [1218555 ] http://bugzilla.redhat.com/1218555 (ASSIGNED) Component: distribution Last change: 2015-06-04 Summary: rdo-release needs to enable RHEL optional extras and rh-common repositories [1206867 ] http://bugzilla.redhat.com/1206867 (NEW) Component: distribution Last change: 2015-06-04 Summary: Tracking bug for bugs that Lars is interested in [1263696 ] http://bugzilla.redhat.com/1263696 (NEW) Component: distribution Last change: 2015-09-16 Summary: Memcached not built with SASL support [1261821 ] http://bugzilla.redhat.com/1261821 (NEW) Component: distribution Last change: 2015-09-14 Summary: [RFE] Packages upgrade path checks in Delorean CI [1264072 ] http://bugzilla.redhat.com/1264072 (NEW) Component: distribution Last change: 2015-09-29 Summary: app-catalog-ui new package [1178131 ] http://bugzilla.redhat.com/1178131 (NEW) Component: distribution Last change: 2015-06-04 Summary: SSL supports only broken crypto ### dnsmasq (1 bug) [1164770 ] http://bugzilla.redhat.com/1164770 (NEW) Component: dnsmasq Last change: 2015-06-22 Summary: On a 3 node setup (controller, network and compute), instance is not getting dhcp ip (while using flat network) ### instack (4 bugs) [1224459 ] http://bugzilla.redhat.com/1224459 (NEW) Component: instack Last change: 2015-06-18 Summary: AttributeError: 'User' object has no attribute '_meta' [1192622 ] http://bugzilla.redhat.com/1192622 (NEW) Component: instack Last change: 2015-06-04 Summary: RDO Instack FAQ has serious doc bug [1201372 ] http://bugzilla.redhat.com/1201372 
(NEW) Component: instack Last change: 2015-06-04 Summary: instack-update-overcloud fails because it tries to access non-existing files [1225590 ] http://bugzilla.redhat.com/1225590 (NEW) Component: instack Last change: 2015-06-04 Summary: When supplying Satellite registration fails do to Curl SSL error but i see now curl code ### instack-undercloud (26 bugs) [1266451 ] http://bugzilla.redhat.com/1266451 (NEW) Component: instack-undercloud Last change: 2015-09-25 Summary: instack-undercloud fails to setup seed vm, parse error while creating ssh key [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: wget is missing from qcow2 image fails instack-build- images script [1229720 ] http://bugzilla.redhat.com/1229720 (NEW) Component: instack-undercloud Last change: 2015-06-09 Summary: overcloud deploy fails due to timeout [1216243 ] http://bugzilla.redhat.com/1216243 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-18 Summary: Undercloud install leaves services enabled but not started [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2015-09-23 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-19 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1230870 ] http://bugzilla.redhat.com/1230870 (NEW) Component: instack-undercloud Last change: 2015-06-29 Summary: instack-undercloud: The documention is missing the instructions for installing the epel repos prior to running "sudo yum install -y python-rdomanager- oscplugin'. [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2015-07-14 Summary: Installing instack undercloud on Fedora20 VM fails [1215178 ] http://bugzilla.redhat.com/1215178 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: RDO-instack-undercloud: instack-install-undercloud exists with error "ImportError: No module named six." [1210685 ] http://bugzilla.redhat.com/1210685 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Could not retrieve facts for localhost.localhost: no address for localhost.localhost (corrupted /etc/resolv.conf) [1234652 ] http://bugzilla.redhat.com/1234652 (NEW) Component: instack-undercloud Last change: 2015-06-25 Summary: Instack has hard coded values for specific config files [1214545 ] http://bugzilla.redhat.com/1214545 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: undercloud nova.conf needs reserved_host_memory_mb=0 [1221812 ] http://bugzilla.redhat.com/1221812 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud install fails w/ rdo-kilo on rhel-7.1 due to rpm gpg key import [1232083 ] http://bugzilla.redhat.com/1232083 (NEW) Component: instack-undercloud Last change: 2015-06-16 Summary: instack-ironic-deployment --register-nodes swallows error output [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1225688 ] http://bugzilla.redhat.com/1225688 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-undercloud: running instack-build-imsages exists with "Not enough RAM to use tmpfs for build. 
(4048492 < 4G)" [1266101 ] http://bugzilla.redhat.com/1266101 (NEW) Component: instack-undercloud Last change: 2015-09-29 Summary: instack-virt-setup fails on CentOS7 [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . [1176569 ] http://bugzilla.redhat.com/1176569 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: 404 not found when instack-virt-setup tries to download the rhel-6.5 guest image [1232029 ] http://bugzilla.redhat.com/1232029 (NEW) Component: instack-undercloud Last change: 2015-06-22 Summary: instack-undercloud: "openstack undercloud install" fails with "RuntimeError: ('%s failed. See log for details.', 'os-refresh-config')" [1230937 ] http://bugzilla.redhat.com/1230937 (NEW) Component: instack-undercloud Last change: 2015-06-11 Summary: instack-undercloud: multiple "openstack No user with a name or ID of" errors during overcloud deployment. [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2015-06-15 Summary: instack-build-images does not stop on certain errors [1223977 ] http://bugzilla.redhat.com/1223977 (ASSIGNED) Component: instack-undercloud Last change: 2015-08-27 Summary: instack-undercloud: Running "openstack undercloud install" exits with error due to a missing python- flask-babel package: "Error: Package: openstack- tuskar-2013.2-dev1.el7.centos.noarch (delorean-rdo- management) Requires: python-flask-babel" [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1187966 ] http://bugzilla.redhat.com/1187966 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: missing dependency on which [1221818 ] http://bugzilla.redhat.com/1221818 (NEW) Component: instack-undercloud Last change: 2015-06-04 Summary: rdo-manager documentation required for RHEL7 + rdo kilo (only) setup and install ### iproute (1 bug) [1173435 ] http://bugzilla.redhat.com/1173435 (NEW) Component: iproute Last change: 2015-08-20 Summary: deleting netns ends in Device or resource busy and blocks further namespace usage ### openstack-ceilometer (11 bugs) [1265708 ] http://bugzilla.redhat.com/1265708 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: Ceilometer requires pymongo>=3.0.2 [1214928 ] http://bugzilla.redhat.com/1214928 (NEW) Component: openstack-ceilometer Last change: 2015-04-23 Summary: package ceilometermiddleware missing [1219372 ] http://bugzilla.redhat.com/1219372 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Info about 'severity' field changes is not displayed via alarm-history call [1265721 ] http://bugzilla.redhat.com/1265721 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: FIle /etc/ceilometer/meters.yaml missing [1263839 ] http://bugzilla.redhat.com/1263839 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-25 Summary: openstack-ceilometer should requires python-oslo-policy in kilo [1265746 ] http://bugzilla.redhat.com/1265746 (NEW) Component: openstack-ceilometer Last change: 2015-09-23 Summary: Options 'disable_non_metric_meters' and 'meter_definitions_cfg_file' are missing from ceilometer.conf [1194230 ] http://bugzilla.redhat.com/1194230 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-02-26 Summary: The 
/etc/sudoers.d/ceilometer have incorrect permissions [1265818 ] http://bugzilla.redhat.com/1265818 (ASSIGNED) Component: openstack-ceilometer Last change: 2015-09-28 Summary: ceilometer polling agent does not start [1231326 ] http://bugzilla.redhat.com/1231326 (NEW) Component: openstack-ceilometer Last change: 2015-06-12 Summary: kafka publisher requires kafka-python library [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2015-09-25 Summary: python-redis is not installed with packstack allinone [1219376 ] http://bugzilla.redhat.com/1219376 (NEW) Component: openstack-ceilometer Last change: 2015-05-07 Summary: Wrong alarms order on 'severity' field ### openstack-cinder (13 bugs) [1157939 ] http://bugzilla.redhat.com/1157939 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-27 Summary: Default binary for iscsi_helper (lioadm) does not exist in the repos [1167156 ] http://bugzilla.redhat.com/1167156 (NEW) Component: openstack-cinder Last change: 2014-11-24 Summary: cinder-api[14407]: segfault at 7fc84636f7e0 ip 00007fc84636f7e0 sp 00007fff3110a468 error 15 in multiarray.so[7fc846369000+d000] [1178648 ] http://bugzilla.redhat.com/1178648 (NEW) Component: openstack-cinder Last change: 2015-01-05 Summary: vmware: "Not authenticated error occurred " on delete volume [1049380 ] http://bugzilla.redhat.com/1049380 (NEW) Component: openstack-cinder Last change: 2015-03-23 Summary: openstack-cinder: cinder fails to copy an image a volume with GlusterFS backend [1028688 ] http://bugzilla.redhat.com/1028688 (ASSIGNED) Component: openstack-cinder Last change: 2015-03-20 Summary: should use new names in cinder-dist.conf [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2015-04-14 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage [1206864 ] http://bugzilla.redhat.com/1206864 (NEW) Component: openstack-cinder Last change: 2015-03-31 Summary: cannot attach local cinder volume [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2015-07-23 Summary: Configuration file in share forces ignore of auth_uri [1229551 ] http://bugzilla.redhat.com/1229551 (ASSIGNED) Component: openstack-cinder Last change: 2015-06-14 Summary: Nova resize fails with iSCSI logon failure when booting from volume [1049511 ] http://bugzilla.redhat.com/1049511 (NEW) Component: openstack-cinder Last change: 2015-03-30 Summary: EMC: fails to boot instances from volumes with "TypeError: Unsupported parameter type" [1231311 ] http://bugzilla.redhat.com/1231311 (NEW) Component: openstack-cinder Last change: 2015-06-12 Summary: Cinder missing dep: fasteners against liberty packstack install [1167945 ] http://bugzilla.redhat.com/1167945 (NEW) Component: openstack-cinder Last change: 2014-11-25 Summary: Random characters in instacne name break volume attaching [1212899 ] http://bugzilla.redhat.com/1212899 (ASSIGNED) Component: openstack-cinder Last change: 2015-04-17 Summary: [packaging] missing dependencies for openstack-cinder ### openstack-foreman-installer (3 bugs) [1082728 ] http://bugzilla.redhat.com/1082728 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [1203292 ] http://bugzilla.redhat.com/1203292 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: [RFE] Openstack Installer should install and configure SPICE to work with Nova and 
Horizon [1205782 ] http://bugzilla.redhat.com/1205782 (NEW) Component: openstack-foreman-installer Last change: 2015-06-04 Summary: support the ldap user_enabled_invert parameter ### openstack-glance (2 bugs) [1208798 ] http://bugzilla.redhat.com/1208798 (NEW) Component: openstack-glance Last change: 2015-04-20 Summary: Split glance-api and glance-registry [1213545 ] http://bugzilla.redhat.com/1213545 (NEW) Component: openstack-glance Last change: 2015-04-21 Summary: [packaging] missing dependencies for openstack-glance- common: python-glance ### openstack-heat (3 bugs) [1216917 ] http://bugzilla.redhat.com/1216917 (NEW) Component: openstack-heat Last change: 2015-07-08 Summary: Clearing non-existing hooks yields no error message [1228324 ] http://bugzilla.redhat.com/1228324 (NEW) Component: openstack-heat Last change: 2015-07-20 Summary: When deleting the stack, a bare metal node goes to ERROR state and is not deleted [1235472 ] http://bugzilla.redhat.com/1235472 (NEW) Component: openstack-heat Last change: 2015-08-19 Summary: SoftwareDeployment resource attributes are null ### openstack-horizon (1 bug) [1248634 ] http://bugzilla.redhat.com/1248634 (NEW) Component: openstack-horizon Last change: 2015-09-02 Summary: Horizon Create volume from Image not mountable ### openstack-ironic (1 bug) [1221472 ] http://bugzilla.redhat.com/1221472 (NEW) Component: openstack-ironic Last change: 2015-05-14 Summary: Error message is not clear: Node can not be updated while a state transition is in progress. (HTTP 409) ### openstack-ironic-discoverd (2 bugs) [1209110 ] http://bugzilla.redhat.com/1209110 (NEW) Component: openstack-ironic-discoverd Last change: 2015-04-09 Summary: Introspection times out after more than an hour [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2015-08-10 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (7 bugs) [1208934 ] http://bugzilla.redhat.com/1208934 (NEW) Component: openstack-keystone Last change: 2015-04-05 Summary: Need to include SSO callback form in the openstack- keystone RPM [1220489 ] http://bugzilla.redhat.com/1220489 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: wrong log directories in /usr/share/keystone/wsgi- keystone.conf [1008865 ] http://bugzilla.redhat.com/1008865 (NEW) Component: openstack-keystone Last change: 2015-08-25 Summary: keystone-all process reaches 100% CPU consumption [1212126 ] http://bugzilla.redhat.com/1212126 (NEW) Component: openstack-keystone Last change: 2015-06-01 Summary: keystone: add token flush cronjob script to keystone package [1218644 ] http://bugzilla.redhat.com/1218644 (ASSIGNED) Component: openstack-keystone Last change: 2015-06-04 Summary: CVE-2015-3646 openstack-keystone: cache backend password leak in log (OSSA 2015-008) [openstack-rdo] [1217663 ] http://bugzilla.redhat.com/1217663 (NEW) Component: openstack-keystone Last change: 2015-06-04 Summary: Overridden default for Token Provider points to non- existent class [1167528 ] http://bugzilla.redhat.com/1167528 (NEW) Component: openstack-keystone Last change: 2015-07-23 Summary: assignment table migration fails for keystone-manage db_sync if duplicate entry exists ### openstack-neutron (6 bugs) [1180201 ] http://bugzilla.redhat.com/1180201 (NEW) Component: openstack-neutron Last change: 2015-01-08 Summary: neutron-netns-cleanup.service needs RemainAfterExit=yes and PrivateTmp=false [1254275 ] http://bugzilla.redhat.com/1254275 (NEW) 
Component: openstack-neutron Last change: 2015-08-17 Summary: neutron-dhcp-agent.service is not enabled after packstack deploy [1164230 ] http://bugzilla.redhat.com/1164230 (NEW) Component: openstack-neutron Last change: 2014-12-16 Summary: In openstack-neutron-sriov-nic-agent package is missing the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini config files [1259351 ] http://bugzilla.redhat.com/1259351 (NEW) Component: openstack-neutron Last change: 2015-09-02 Summary: Neutron API behind SSL terminating haproxy returns http version URL's instead of https [1226006 ] http://bugzilla.redhat.com/1226006 (NEW) Component: openstack-neutron Last change: 2015-05-28 Summary: Option "username" from group "keystone_authtoken" is deprecated. Use option "username" from group "keystone_authtoken". [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2014-09-27 Summary: Use neutron-sanity-check in CI checks ### openstack-nova (17 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2015-06-14 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1180129 ] http://bugzilla.redhat.com/1180129 (NEW) Component: openstack-nova Last change: 2015-01-08 Summary: Installation of openstack-nova-compute fails on PowerKVM [1157690 ] http://bugzilla.redhat.com/1157690 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: v4-fixed-ip= not working with juno nova networking [1200701 ] http://bugzilla.redhat.com/1200701 (NEW) Component: openstack-nova Last change: 2015-05-06 Summary: openstack-nova-novncproxy.service in failed state - need upgraded websockify version [1229301 ] http://bugzilla.redhat.com/1229301 (NEW) Component: openstack-nova Last change: 2015-06-08 Summary: used_now is really used_max, and used_max is really used_now in "nova host-describe" [1234837 ] http://bugzilla.redhat.com/1234837 (NEW) Component: openstack-nova Last change: 2015-06-23 Summary: Kilo assigning ipv6 address, even though its disabled. 
[1161915 ] http://bugzilla.redhat.com/1161915 (NEW) Component: openstack-nova Last change: 2015-04-10 Summary: horizon console uses http when horizon is set to use ssl [1213547 ] http://bugzilla.redhat.com/1213547 (NEW) Component: openstack-nova Last change: 2015-05-22 Summary: launching 20 VMs at once via a heat resource group causes nova to not record some IPs correctly [1154152 ] http://bugzilla.redhat.com/1154152 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova] hw:numa_nodes=0 causes divide by zero [1161920 ] http://bugzilla.redhat.com/1161920 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: novnc init script doesnt write to log [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2015-09-11 Summary: logrotate should copytruncate to avoid oepnstack logging to deleted files [1154201 ] http://bugzilla.redhat.com/1154201 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: [nova][PCI-Passthrough] TypeError: pop() takes at most 1 argument (2 given) [1190815 ] http://bugzilla.redhat.com/1190815 (NEW) Component: openstack-nova Last change: 2015-02-09 Summary: Nova - db connection string present on compute nodes [1149682 ] http://bugzilla.redhat.com/1149682 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova object store allow get object after date exires [1148526 ] http://bugzilla.redhat.com/1148526 (NEW) Component: openstack-nova Last change: 2015-06-04 Summary: nova: fail to edit project quota with DataError from nova [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2015-06-04 Summary: Ensure translations are installed correctly and picked up at runtime [1189931 ] http://bugzilla.redhat.com/1189931 (NEW) Component: openstack-nova Last change: 2015-02-05 Summary: Nova AVC messages ### openstack-packstack (50 bugs) [1225312 ] http://bugzilla.redhat.com/1225312 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack Installation error - Invalid parameter create_mysql_resource on Class[Galera::Server] [1203444 ] http://bugzilla.redhat.com/1203444 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: "private" network created by packstack is not owned by any tenant [1171811 ] http://bugzilla.redhat.com/1171811 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: misleading exit message on fail [1207248 ] http://bugzilla.redhat.com/1207248 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: auto enablement of the extras channel [1148468 ] http://bugzilla.redhat.com/1148468 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: proposal to use the Red Hat tempest rpm to configure a demo environment and configure tempest [1176833 ] http://bugzilla.redhat.com/1176833 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails when starting neutron server [1169742 ] http://bugzilla.redhat.com/1169742 (NEW) Component: openstack-packstack Last change: 2015-06-25 Summary: Error: service-update is not currently supported by the keystone sql driver [1176433 ] http://bugzilla.redhat.com/1176433 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to configure horizon - juno/rhel7 (vm) [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-24 Summary: [RFE] Include Fedora cloud images in some nice way [1061753 ] 
http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [1160885 ] http://bugzilla.redhat.com/1160885 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: rabbitmq wont start if ssl is required [1202958 ] http://bugzilla.redhat.com/1202958 (NEW) Component: openstack-packstack Last change: 2015-07-14 Summary: Packstack generates invalid /etc/sysconfig/network- scripts/ifcfg-br-ex [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] SPICE support in packstack [1244407 ] http://bugzilla.redhat.com/1244407 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Deploying ironic kilo with packstack fails [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2015-09-09 Summary: swift: Admin user does not have permissions to see containers created by glance service [1100142 ] http://bugzilla.redhat.com/1100142 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack missing ML2 Mellanox Mechanism Driver [953586 ] http://bugzilla.redhat.com/953586 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Openstack Installer: packstack should install and configure SPICE to work with Nova and Horizon [1206742 ] http://bugzilla.redhat.com/1206742 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Installed epel-release prior to running packstack, packstack disables it on invocation [1257352 ] http://bugzilla.redhat.com/1257352 (NEW) Component: openstack-packstack Last change: 2015-09-22 Summary: nss.load missing from packstack, httpd unable to start. 
[1232455 ] http://bugzilla.redhat.com/1232455 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Errors install kilo on fedora21 [1187572 ] http://bugzilla.redhat.com/1187572 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: allow to set certfile for /etc/rabbitmq/rabbitmq.config [1239286 ] http://bugzilla.redhat.com/1239286 (NEW) Component: openstack-packstack Last change: 2015-07-05 Summary: ERROR: cliff.app 'super' object has no attribute 'load_commands' [1259354 ] http://bugzilla.redhat.com/1259354 (NEW) Component: openstack-packstack Last change: 2015-09-02 Summary: When pre-creating a vg of cinder-volumes packstack fails with an error [1226393 ] http://bugzilla.redhat.com/1226393 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_PROVISION_DEMO=n causes packstack to fail [1187609 ] http://bugzilla.redhat.com/1187609 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: CONFIG_AMQP_ENABLE_SSL=y does not really set ssl on [1232496 ] http://bugzilla.redhat.com/1232496 (NEW) Component: openstack-packstack Last change: 2015-06-16 Summary: Error during puppet run causes install to fail, says rabbitmq.com cannot be reached when it can [1208812 ] http://bugzilla.redhat.com/1208812 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: add DiskFilter to scheduler_default_filters [1247816 ] http://bugzilla.redhat.com/1247816 (NEW) Component: openstack-packstack Last change: 2015-07-29 Summary: rdo liberty trunk; nova compute fails to start [1266028 ] http://bugzilla.redhat.com/1266028 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack should use pymysql database driver since Liberty [1167121 ] http://bugzilla.redhat.com/1167121 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: centos7 fails to install glance [1107908 ] http://bugzilla.redhat.com/1107908 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1266196 ] http://bugzilla.redhat.com/1266196 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Packstack Fails on prescript.pp with "undefined method 'unsafe_load_file' for Psych:Module" [1155722 ] http://bugzilla.redhat.com/1155722 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: [delorean] ArgumentError: Invalid resource type database_user at /var/tmp/packstack//manifests/17 2.16.32.71_mariadb.pp:28 on node [1213149 ] http://bugzilla.redhat.com/1213149 (NEW) Component: openstack-packstack Last change: 2015-07-08 Summary: openstack-keystone service is in " failed " status when CONFIG_KEYSTONE_SERVICE_NAME=httpd [1176797 ] http://bugzilla.redhat.com/1176797 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone on CentOS 7 VM fails at cinder puppet manifest [1235948 ] http://bugzilla.redhat.com/1235948 (NEW) Component: openstack-packstack Last change: 2015-07-18 Summary: Error occurred at during setup Ironic via packstack. 
Invalid parameter rabbit_user [1209206 ] http://bugzilla.redhat.com/1209206 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails - CentOS7 ; fresh install : Error: /Stage[main]/Apache::Service/Service[httpd] [1254447 ] http://bugzilla.redhat.com/1254447 (NEW) Component: openstack-packstack Last change: 2015-08-18 Summary: Packstack --allinone fails while starting HTTPD service [1207371 ] http://bugzilla.redhat.com/1207371 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack --allinone fails during _keystone.pp [1235139 ] http://bugzilla.redhat.com/1235139 (NEW) Component: openstack-packstack Last change: 2015-07-01 Summary: [F22-Packstack-Kilo] Error: Could not find dependency Package[openstack-swift] for File[/srv/node] at /var/tm p/packstack/b77f37620d9f4794b6f38730442962b6/manifests/ xxx.xxx.xxx.xxx_swift.pp:90 [1158015 ] http://bugzilla.redhat.com/1158015 (NEW) Component: openstack-packstack Last change: 2015-04-14 Summary: Post installation, Cinder fails with an error: Volume group "cinder-volumes" not found [1206358 ] http://bugzilla.redhat.com/1206358 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: provision_glance does not honour proxy setting when getting image [1185627 ] http://bugzilla.redhat.com/1185627 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: glance provision disregards keystone region setting [1214922 ] http://bugzilla.redhat.com/1214922 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Cannot use ipv6 address for cinder nfs backend. [1249169 ] http://bugzilla.redhat.com/1249169 (NEW) Component: openstack-packstack Last change: 2015-08-05 Summary: FWaaS does not work because DB was not synced [1265816 ] http://bugzilla.redhat.com/1265816 (NEW) Component: openstack-packstack Last change: 2015-09-24 Summary: Manila Puppet Module Expects Glance Endpoint to Be Available for Upload of Service Image [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2015-06-04 Summary: API services has all admin permission instead of service [1207098 ] http://bugzilla.redhat.com/1207098 (NEW) Component: openstack-packstack Last change: 2015-08-04 Summary: [RDO] packstack installation failed with "Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Redirecting to /bin/systemctl start httpd.service" [1264843 ] http://bugzilla.redhat.com/1264843 (NEW) Component: openstack-packstack Last change: 2015-09-25 Summary: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list iptables-ipv6' returned 1: Error: No matching Packages to list [1203131 ] http://bugzilla.redhat.com/1203131 (NEW) Component: openstack-packstack Last change: 2015-06-04 Summary: Using packstack deploy openstack,when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br- eno50:eno50,encounters an error?ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp ?. 
### openstack-puppet-modules (11 bugs) [1236775 ] http://bugzilla.redhat.com/1236775 (NEW) Component: openstack-puppet-modules Last change: 2015-06-30 Summary: rdo kilo mongo fails to start [1150678 ] http://bugzilla.redhat.com/1150678 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Permissions issue prevents CSS from rendering [1192539 ] http://bugzilla.redhat.com/1192539 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-tripleo and puppet-gnocchi to opm [1157500 ] http://bugzilla.redhat.com/1157500 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: ERROR: Network commands are not supported when using the Neutron API. [1222326 ] http://bugzilla.redhat.com/1222326 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: trove conf files require update when neutron disabled [1259411 ] http://bugzilla.redhat.com/1259411 (NEW) Component: openstack-puppet-modules Last change: 2015-09-03 Summary: Backport: nova-network needs authentication [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2015-07-07 Summary: trove guestagent config mods for integration testing [1155663 ] http://bugzilla.redhat.com/1155663 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Increase the rpc_thread_pool_size [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Offset Swift ports to 6200 [1174454 ] http://bugzilla.redhat.com/1174454 (ASSIGNED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Add puppet-openstack_extras to opm ### openstack-selinux (12 bugs) [1261465 ] http://bugzilla.redhat.com/1261465 (NEW) Component: openstack-selinux Last change: 2015-09-09 Summary: OpenStack Keystone is not functional [1158394 ] http://bugzilla.redhat.com/1158394 (NEW) Component: openstack-selinux Last change: 2014-11-23 Summary: keystone-all proccess raised avc denied [1202944 ] http://bugzilla.redhat.com/1202944 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: "glance image-list" fails on F21, causing packstack install to fail [1219406 ] http://bugzilla.redhat.com/1219406 (NEW) Component: openstack-selinux Last change: 2015-07-23 Summary: Glance over nfs fails due to selinux [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2015-02-24 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1252675 ] http://bugzilla.redhat.com/1252675 (NEW) Component: openstack-selinux Last change: 2015-08-12 Summary: neutron-server cannot connect to port 5000 due to SELinux [1189929 ] http://bugzilla.redhat.com/1189929 (NEW) Component: openstack-selinux Last change: 2015-02-06 Summary: Glance AVC messages [1170238 ] http://bugzilla.redhat.com/1170238 (NEW) Component: openstack-selinux Last change: 2014-12-18 Summary: Keepalived fail to start for HA router because of SELinux issues [1255559 ] http://bugzilla.redhat.com/1255559 (NEW) Component: openstack-selinux Last change: 2015-08-21 Summary: nova api can't be started in WSGI under httpd, blocked by selinux [1206740 ] http://bugzilla.redhat.com/1206740 (NEW) Component: openstack-selinux Last change: 2015-04-09 Summary: On CentOS7.1 
packstack --allinone fails to start Apache because of binding error on port 5000 [1203910 ] http://bugzilla.redhat.com/1203910 (NEW) Component: openstack-selinux Last change: 2015-03-19 Summary: Keystone requires keystone_t self:process signal; [1202941 ] http://bugzilla.redhat.com/1202941 (NEW) Component: openstack-selinux Last change: 2015-03-18 Summary: Glance fails to start on CentOS 7 because of selinux AVC ### openstack-swift (2 bugs) [1169215 ] http://bugzilla.redhat.com/1169215 (NEW) Component: openstack-swift Last change: 2014-12-12 Summary: swift-init does not interoperate with systemd swift service files [1179931 ] http://bugzilla.redhat.com/1179931 (NEW) Component: openstack-swift Last change: 2015-01-07 Summary: Variable of init script gets overwritten preventing the startup of swift services when using multiple server configurations ### openstack-tripleo (24 bugs) [1221731 ] http://bugzilla.redhat.com/1221731 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Overcloud missing ceilometer keystone user and endpoints [1225390 ] http://bugzilla.redhat.com/1225390 (NEW) Component: openstack-tripleo Last change: 2015-06-29 Summary: The role names from "openstack management role list" don't match those for "openstack overcloud scale stack" [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1218340 ] http://bugzilla.redhat.com/1218340 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RFE: add "scheduler_default_weighers = CapacityWeigher" explicitly to cinder.conf [1205645 ] http://bugzilla.redhat.com/1205645 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Dependency issue: python-oslo-versionedobjects is required by heat and not in the delorean repos [1225022 ] http://bugzilla.redhat.com/1225022 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When adding nodes to the cloud the update hangs and takes forever [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1223667 ] http://bugzilla.redhat.com/1223667 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: When using 'tripleo wait_for' with the command 'nova hypervisor-stats' it hangs forever [1224604 ] http://bugzilla.redhat.com/1224604 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Lots of dracut-related error messages during instack- build-images [1229174 ] http://bugzilla.redhat.com/1229174 (NEW) Component: openstack-tripleo Last change: 2015-06-08 Summary: Nova computes can't resolve each other because the hostnames in /etc/hosts don't include the ".novalocal" suffix [1223443 ] http://bugzilla.redhat.com/1223443 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: You can still check introspection status for ironic nodes that have been deleted [1187352 ] http://bugzilla.redhat.com/1187352 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: /usr/bin/instack-prepare-for-overcloud glance using incorrect parameter [1223672 ] http://bugzilla.redhat.com/1223672 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Node registration fails silently if instackenv.json is badly formatted [1221610 ] http://bugzilla.redhat.com/1221610 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: RDO-manager beta fails to install: Deployment exited 
with non-zero status code: 6 [1223471 ] http://bugzilla.redhat.com/1223471 (NEW) Component: openstack-tripleo Last change: 2015-06-22 Summary: Discovery errors out even when it is successful [1223424 ] http://bugzilla.redhat.com/1223424 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud should not rely on instackenv.json, but should use ironic instead [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1226653 ] http://bugzilla.redhat.com/1226653 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: The usage message for "heat resource-show" is confusing and incorrect [1218168 ] http://bugzilla.redhat.com/1218168 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: ceph.service should only be running on the ceph nodes, not on the controller and compute nodes [1211560 ] http://bugzilla.redhat.com/1211560 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: instack-deploy-overcloud times out after ~3 minutes, no plan or stack is created [1226867 ] http://bugzilla.redhat.com/1226867 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: Timeout in API [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: User can not login into the overcloud horizon using the proper credentials [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2015-06-04 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI ### openstack-tripleo-heat-templates (5 bugs) [1232015 ] http://bugzilla.redhat.com/1232015 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: instack-undercloud: one controller deployment: running "pcs status" - Error: cluster is not currently running on this node [1236760 ] http://bugzilla.redhat.com/1236760 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-29 Summary: Drop 'without-mergepy' from main overcloud template [1204479 ] http://bugzilla.redhat.com/1204479 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-06-04 Summary: The ExtraConfig and controllerExtraConfig parameters are ignored in the controller-puppet template [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2015-09-24 Summary: TripleO should use pymysql database driver since Liberty [1230250 ] http://bugzilla.redhat.com/1230250 (ASSIGNED) Component: openstack-tripleo-heat-templates Last change: 2015-06-16 Summary: [Unified CLI] Deployment using Tuskar has failed - Deployment exited with non-zero status code: 1 ### openstack-tripleo-image-elements (2 bugs) [1187354 ] http://bugzilla.redhat.com/1187354 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: possible incorrect selinux check in 97-mysql-selinux [1187965 ] http://bugzilla.redhat.com/1187965 (NEW) Component: openstack-tripleo-image-elements Last change: 2015-06-04 Summary: mariadb my.cnf socket path does not exist ### openstack-trove (1 bug) [1219069 ] http://bugzilla.redhat.com/1219069 (ASSIGNED) Component: openstack-trove Last change: 2015-08-27 Summary: trove-guestagent systemd unit file 
uses incorrect path for guest_info ### openstack-tuskar (3 bugs) [1210223 ] http://bugzilla.redhat.com/1210223 (ASSIGNED) Component: openstack-tuskar Last change: 2015-06-23 Summary: Updating the controller count to 3 fails [1229493 ] http://bugzilla.redhat.com/1229493 (ASSIGNED) Component: openstack-tuskar Last change: 2015-07-27 Summary: Difficult to synchronise tuskar stored files with /usr/share/openstack-tripleo-heat-templates [1229401 ] http://bugzilla.redhat.com/1229401 (NEW) Component: openstack-tuskar Last change: 2015-06-26 Summary: stack is stuck in DELETE_FAILED state ### openstack-utils (3 bugs) [1211989 ] http://bugzilla.redhat.com/1211989 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status shows 'disabled on boot' for the mysqld service [1161501 ] http://bugzilla.redhat.com/1161501 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: Can't enable OpenStack service after openstack-service disable [1201340 ] http://bugzilla.redhat.com/1201340 (NEW) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-service tries to restart neutron-ovs- cleanup.service ### openvswitch (1 bug) [1209003 ] http://bugzilla.redhat.com/1209003 (ASSIGNED) Component: openvswitch Last change: 2015-08-18 Summary: ovs-vswitchd segfault on boot leaving server with no network connectivity ### Package Review (1 bug) [1243550 ] http://bugzilla.redhat.com/1243550 (ASSIGNED) Component: Package Review Last change: 2015-09-22 Summary: Review Request: openstack-aodh - OpenStack Telemetry Alarming ### python-glanceclient (2 bugs) [1244291 ] http://bugzilla.redhat.com/1244291 (ASSIGNED) Component: python-glanceclient Last change: 2015-09-17 Summary: python-glanceclient-0.17.0-2.el7.noarch.rpm packaged with buggy glanceclient/common/https.py [1164349 ] http://bugzilla.redhat.com/1164349 (ASSIGNED) Component: python-glanceclient Last change: 2014-11-17 Summary: rdo juno glance client needs python-requests >= 2.2.0 ### python-keystonemiddleware (1 bug) [1195977 ] http://bugzilla.redhat.com/1195977 (NEW) Component: python-keystonemiddleware Last change: 2015-06-04 Summary: Rebase python-keystonemiddleware to version 1.3 ### python-neutronclient (2 bugs) [1221063 ] http://bugzilla.redhat.com/1221063 (ASSIGNED) Component: python-neutronclient Last change: 2015-08-20 Summary: --router:external=True syntax is invalid - not backward compatibility [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2015-03-30 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2015-06-04 Summary: Missing versioned dependency on python-six ### python-openstackclient (5 bugs) [1212439 ] http://bugzilla.redhat.com/1212439 (NEW) Component: python-openstackclient Last change: 2015-04-16 Summary: Usage is not described accurately for 99% of openstack baremetal [1212091 ] http://bugzilla.redhat.com/1212091 (NEW) Component: python-openstackclient Last change: 2015-04-28 Summary: `openstack ip floating delete` fails if we specify IP address as input [1227543 ] http://bugzilla.redhat.com/1227543 (NEW) Component: python-openstackclient Last change: 2015-06-13 Summary: openstack undercloud install fails due to a missing make target for tripleo-selinux-keepalived.pp [1187310 ] http://bugzilla.redhat.com/1187310 (NEW) Component: python-openstackclient Last change: 2015-06-04 Summary: Add --user to project 
list command to filter projects by user [1239144 ] http://bugzilla.redhat.com/1239144 (NEW) Component: python-openstackclient Last change: 2015-07-10 Summary: appdirs requirement ### python-oslo-config (1 bug) [1258014 ] http://bugzilla.redhat.com/1258014 (NEW) Component: python-oslo-config Last change: 2015-08-28 Summary: oslo_config != oslo.config ### rdo-manager (23 bugs) [1234467 ] http://bugzilla.redhat.com/1234467 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot access instance vnc console on horizon after overcloud deployment [1218281 ] http://bugzilla.redhat.com/1218281 (NEW) Component: rdo-manager Last change: 2015-08-10 Summary: RFE: rdo-manager - update heat deployment-show to make puppet output readable [1264526 ] http://bugzilla.redhat.com/1264526 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Deployment of Undercloud [1213647 ] http://bugzilla.redhat.com/1213647 (NEW) Component: rdo-manager Last change: 2015-04-21 Summary: RFE: add deltarpm to all images built [1221663 ] http://bugzilla.redhat.com/1221663 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: [RFE][RDO-manager]: Alert when deploying a physical compute if the virtualization flag is disabled in BIOS. [1214343 ] http://bugzilla.redhat.com/1214343 (NEW) Component: rdo-manager Last change: 2015-04-24 Summary: [RFE] Command to create flavors based on real hardware and profiles [1223993 ] http://bugzilla.redhat.com/1223993 (ASSIGNED) Component: rdo-manager Last change: 2015-06-04 Summary: overcloud failure with "openstack Authorization Failed: Cannot authenticate without an auth_url" [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2015-08-28 Summary: No way to increase yum timeouts when building images [1234475 ] http://bugzilla.redhat.com/1234475 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: Cannot login to Overcloud Horizon through Virtual IP (VIP) [1226969 ] http://bugzilla.redhat.com/1226969 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: Tempest failed when running after overcloud deployment [1229343 ] http://bugzilla.redhat.com/1229343 (NEW) Component: rdo-manager Last change: 2015-06-08 Summary: instack-virt-setup missing package dependency device- mapper* [1212520 ] http://bugzilla.redhat.com/1212520 (NEW) Component: rdo-manager Last change: 2015-04-16 Summary: [RFE] [CI] Add ability to generate and store overcloud images provided by latest-passed-ci [1221986 ] http://bugzilla.redhat.com/1221986 (ASSIGNED) Component: rdo-manager Last change: 2015-06-03 Summary: openstack-nova-novncproxy fails to start [1227035 ] http://bugzilla.redhat.com/1227035 (ASSIGNED) Component: rdo-manager Last change: 2015-06-02 Summary: RDO-Manager Undercloud install fails while trying to insert data into keystone [1214349 ] http://bugzilla.redhat.com/1214349 (NEW) Component: rdo-manager Last change: 2015-04-22 Summary: [RFE] Use Ironic API instead of discoverd one for discovery/introspection [1233410 ] http://bugzilla.redhat.com/1233410 (NEW) Component: rdo-manager Last change: 2015-06-19 Summary: overcloud deployment fails w/ "Message: No valid host was found. 
There are not enough hosts available., Code: 500" [1227042 ] http://bugzilla.redhat.com/1227042 (NEW) Component: rdo-manager Last change: 2015-06-01 Summary: rfe: support Keystone HTTPD [1223328 ] http://bugzilla.redhat.com/1223328 (NEW) Component: rdo-manager Last change: 2015-09-18 Summary: Read bit set for others for Openstack services directories in /etc [1232813 ] http://bugzilla.redhat.com/1232813 (NEW) Component: rdo-manager Last change: 2015-06-17 Summary: PXE boot fails: Unrecognized option "--autofree" [1234484 ] http://bugzilla.redhat.com/1234484 (NEW) Component: rdo-manager Last change: 2015-06-22 Summary: cannot view cinder volumes in overcloud controller horizon [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2015-06-11 Summary: there is a newer image that can be used to deploy openstack [1221718 ] http://bugzilla.redhat.com/1221718 (NEW) Component: rdo-manager Last change: 2015-05-14 Summary: rdo-manager: unable to delete the failed overcloud deployment. [1226389 ] http://bugzilla.redhat.com/1226389 (NEW) Component: rdo-manager Last change: 2015-05-29 Summary: RDO-Manager Undercloud install failure ### rdo-manager-cli (6 bugs) [1212467 ] http://bugzilla.redhat.com/1212467 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-03 Summary: [RFE] [RDO-Manager] [CLI] Add an ability to create an overcloud image associated with kernel/ramdisk images in one CLI step [1230170 ] http://bugzilla.redhat.com/1230170 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-11 Summary: the ouptut of openstack management plan show --long command is not readable [1226855 ] http://bugzilla.redhat.com/1226855 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-10 Summary: Role was added to a template with empty flavor value [1228769 ] http://bugzilla.redhat.com/1228769 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-07-13 Summary: Missing dependencies on sysbench and fio (RHEL) [1212390 ] http://bugzilla.redhat.com/1212390 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-08-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to show matched profiles via CLI command [1212371 ] http://bugzilla.redhat.com/1212371 (ASSIGNED) Component: rdo-manager-cli Last change: 2015-06-18 Summary: Validate node power credentials after enrolling ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (3 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (NEW) Component: RFEs Last change: 2015-02-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2015-08-27 Summary: [RFE] Provide easy to use upgrade tool [1217505 ] http://bugzilla.redhat.com/1217505 (NEW) Component: RFEs Last change: 2015-04-30 Summary: IPMI driver for Ironic should support RAID for operating system/root parition ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo- manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is fixed state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(176 bugs) ### diskimage-builder (1 bug) [1228761 ] http://bugzilla.redhat.com/1228761 (MODIFIED) Component: diskimage-builder Last change: 2015-09-23 Summary: DIB_YUM_REPO_CONF points to two files and that breaks imagebuilding ### distribution (6 bugs) [1218398 ] http://bugzilla.redhat.com/1218398 (ON_QA) Component: distribution Last change: 2015-06-04 Summary: rdo kilo testing repository missing openstack- neutron-*aas [1265690 ] http://bugzilla.redhat.com/1265690 (ON_QA) Component: distribution Last change: 2015-09-28 Summary: Update python-networkx to 1.10 [1108188 ] http://bugzilla.redhat.com/1108188 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: update el6 icehouse kombu packages for improved performance [1218723 ] http://bugzilla.redhat.com/1218723 (MODIFIED) Component: distribution Last change: 2015-06-04 Summary: Trove configuration files set different control_exchange for taskmanager/conductor and api [1151589 ] http://bugzilla.redhat.com/1151589 (MODIFIED) Component: distribution Last change: 2015-03-18 Summary: trove does not install dependency python-pbr [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2015-06-04 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (2 bugs) [1212862 ] http://bugzilla.redhat.com/1212862 (MODIFIED) Component: instack-undercloud Last change: 2015-06-04 Summary: instack-install-undercloud fails with "ImportError: No module named six" [1232162 ] http://bugzilla.redhat.com/1232162 (MODIFIED) Component: instack-undercloud Last change: 2015-06-16 Summary: the overcloud dns server should not be enforced to 192.168.122.1 when undefined ### openstack-ceilometer (1 bug) [1038162 ] http://bugzilla.redhat.com/1038162 (MODIFIED) Component: openstack-ceilometer Last change: 2014-02-04 Summary: openstack-ceilometer-common missing python-babel dependency ### openstack-cinder (5 bugs) [1234038 ] http://bugzilla.redhat.com/1234038 (POST) Component: openstack-cinder Last change: 2015-06-22 Summary: Packstack Error: cinder type-create iscsi returned 1 instead of one of [0] [1081022 ] http://bugzilla.redhat.com/1081022 (MODIFIED) Component: openstack-cinder Last change: 2014-05-07 Summary: Non-admin user can not attach cinder volume to their instance (LIO) [994370 ] http://bugzilla.redhat.com/994370 (MODIFIED) Component: openstack-cinder Last change: 2014-06-24 Summary: CVE-2013-4183 openstack-cinder: OpenStack: Cinder LVM volume driver does not support secure deletion [openstack-rdo] [1084046 ] http://bugzilla.redhat.com/1084046 (POST) Component: openstack-cinder Last change: 2014-09-26 Summary: cinder: can't delete a volume (raise exception.ISCSITargetNotFoundForVolume) [1212900 ] http://bugzilla.redhat.com/1212900 (ON_QA) Component: openstack-cinder Last change: 2015-05-05 Summary: [packaging] /etc/cinder/cinder.conf missing in openstack-cinder ### openstack-glance (3 bugs) [1008818 ] http://bugzilla.redhat.com/1008818 (MODIFIED) Component: openstack-glance Last change: 2015-01-07 Summary: glance api hangs with low (1) workers on multiple parallel image creation requests [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2014-06-24 Summary: Glance api ssl issue [1023614 ] http://bugzilla.redhat.com/1023614 (POST) Component: openstack-glance Last change: 2014-04-25 Summary: No logging to files ### openstack-heat (3 bugs) [1229477 ] http://bugzilla.redhat.com/1229477 (MODIFIED) Component: openstack-heat Last change: 2015-06-17 Summary: 
missing dependency in Heat delorean build [1213476 ] http://bugzilla.redhat.com/1213476 (MODIFIED) Component: openstack-heat Last change: 2015-06-10 Summary: [packaging] /etc/heat/heat.conf missing in openstack- heat [1021989 ] http://bugzilla.redhat.com/1021989 (MODIFIED) Component: openstack-heat Last change: 2015-02-01 Summary: heat sometimes keeps listenings stacks with status DELETE_COMPLETE ### openstack-horizon (1 bug) [1219221 ] http://bugzilla.redhat.com/1219221 (ON_QA) Component: openstack-horizon Last change: 2015-05-08 Summary: region selector missing ### openstack-ironic-discoverd (1 bug) [1204218 ] http://bugzilla.redhat.com/1204218 (ON_QA) Component: openstack-ironic-discoverd Last change: 2015-03-31 Summary: ironic-discoverd should allow dropping all ports except for one detected on discovery ### openstack-keystone (1 bug) [1123542 ] http://bugzilla.redhat.com/1123542 (ON_QA) Component: openstack-keystone Last change: 2015-03-19 Summary: file templated catalogs do not work in protocol v3 ### openstack-neutron (13 bugs) [1081203 ] http://bugzilla.redhat.com/1081203 (MODIFIED) Component: openstack-neutron Last change: 2014-04-17 Summary: No DHCP agents are associated with network [1058995 ] http://bugzilla.redhat.com/1058995 (ON_QA) Component: openstack-neutron Last change: 2014-04-08 Summary: neutron-plugin-nicira should be renamed to neutron- plugin-vmware [1050842 ] http://bugzilla.redhat.com/1050842 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: neutron should not specify signing_dir in neutron- dist.conf [1157599 ] http://bugzilla.redhat.com/1157599 (ON_QA) Component: openstack-neutron Last change: 2014-11-25 Summary: fresh neutron install fails due unknown database column 'id' [1109824 ] http://bugzilla.redhat.com/1109824 (MODIFIED) Component: openstack-neutron Last change: 2014-09-27 Summary: Embrane plugin should be split from python-neutron [1098601 ] http://bugzilla.redhat.com/1098601 (MODIFIED) Component: openstack-neutron Last change: 2014-05-16 Summary: neutron-vpn-agent does not use the /etc/neutron/fwaas_driver.ini [1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2014-01-13 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials [1061349 ] http://bugzilla.redhat.com/1061349 (ON_QA) Component: openstack-neutron Last change: 2014-02-04 Summary: neutron-dhcp-agent won't start due to a missing import of module named stevedore [1100136 ] http://bugzilla.redhat.com/1100136 (ON_QA) Component: openstack-neutron Last change: 2014-07-17 Summary: Missing configuration file for ML2 Mellanox Mechanism Driver ml2_conf_mlnx.ini [1088537 ] http://bugzilla.redhat.com/1088537 (ON_QA) Component: openstack-neutron Last change: 2014-06-11 Summary: rhel 6.5 icehouse stage.. 
neutron-db-manage trying to import systemd [1057822 ] http://bugzilla.redhat.com/1057822 (MODIFIED) Component: openstack-neutron Last change: 2014-04-16 Summary: neutron-ml2 package requires python-pyudev [1019487 ] http://bugzilla.redhat.com/1019487 (MODIFIED) Component: openstack-neutron Last change: 2014-07-17 Summary: neutron-dhcp-agent fails to start without openstack- neutron-openvswitch installed [1209932 ] http://bugzilla.redhat.com/1209932 (MODIFIED) Component: openstack-neutron Last change: 2015-04-10 Summary: Packstack installation failed with Neutron-server Could not start Service ### openstack-nova (5 bugs) [1045084 ] http://bugzilla.redhat.com/1045084 (ON_QA) Component: openstack-nova Last change: 2014-06-03 Summary: Trying to boot an instance with a flavor that has nonzero ephemeral disk will fail [1189347 ] http://bugzilla.redhat.com/1189347 (POST) Component: openstack-nova Last change: 2015-05-04 Summary: openstack-nova-* systemd unit files need NotifyAccess=all [1217721 ] http://bugzilla.redhat.com/1217721 (ON_QA) Component: openstack-nova Last change: 2015-05-05 Summary: [packaging] /etc/nova/nova.conf changes due to deprecated options [1211587 ] http://bugzilla.redhat.com/1211587 (MODIFIED) Component: openstack-nova Last change: 2015-04-14 Summary: openstack-nova-compute fails to start because python- psutil is missing after installing with packstack [958411 ] http://bugzilla.redhat.com/958411 (ON_QA) Component: openstack-nova Last change: 2015-01-07 Summary: Nova: 'nova instance-action-list' table is not sorted by the order of action occurrence. ### openstack-packstack (59 bugs) [1001470 ] http://bugzilla.redhat.com/1001470 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-dashboard django dependency conflict stops packstack execution [1007497 ] http://bugzilla.redhat.com/1007497 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Openstack Installer: packstack does not create tables in Heat db. 
[1006353 ] http://bugzilla.redhat.com/1006353 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack w/ CONFIG_CEILOMETER_INSTALL=y has an error [1234042 ] http://bugzilla.redhat.com/1234042 (MODIFIED) Component: openstack-packstack Last change: 2015-08-05 Summary: ERROR : Error appeared during Puppet run: 192.168.122.82_api_nova.pp Error: Use of reserved word: type, must be quoted if intended to be a String value at /var/tmp/packstack/811663aa10824d21b860729732c16c3a/ manifests/192.168.122.82_api_nova.pp:41:3 [976394 ] http://bugzilla.redhat.com/976394 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: [RFE] Put the keystonerc_admin file in the current working directory for --all-in-one installs (or where client machine is same as local) [1116403 ] http://bugzilla.redhat.com/1116403 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack prescript fails if NetworkManager is disabled, but still installed [1020048 ] http://bugzilla.redhat.com/1020048 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack neutron plugin does not check if Nova is disabled [964005 ] http://bugzilla.redhat.com/964005 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: keystonerc_admin stored in /root requiring running OpenStack software as root user [1063980 ] http://bugzilla.redhat.com/1063980 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Change packstack to use openstack-puppet-modules [1153128 ] http://bugzilla.redhat.com/1153128 (POST) Component: openstack-packstack Last change: 2015-07-29 Summary: Cannot start nova-network on juno - Centos7 [1003959 ] http://bugzilla.redhat.com/1003959 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Make "Nothing to do" error from yum in Puppet installs a little easier to decipher [1205912 ] http://bugzilla.redhat.com/1205912 (POST) Component: openstack-packstack Last change: 2015-07-27 Summary: allow to specify admin name and email [1093828 ] http://bugzilla.redhat.com/1093828 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack package should depend on yum-utils [1087529 ] http://bugzilla.redhat.com/1087529 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Configure neutron correctly to be able to notify nova about port changes [1088964 ] http://bugzilla.redhat.com/1088964 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Havana Fedora 19, packstack fails w/ mysql error [958587 ] http://bugzilla.redhat.com/958587 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack install succeeds even when puppet completely fails [1101665 ] http://bugzilla.redhat.com/1101665 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: el7 Icehouse: Nagios installation fails [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. 
[1061689 ] http://bugzilla.redhat.com/1061689 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Horizon SSL is disabled by Nagios configuration via packstack [1036192 ] http://bugzilla.redhat.com/1036192 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rerunning packstack with the generated allione answerfile will fail with qpidd user logged in [1175726 ] http://bugzilla.redhat.com/1175726 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Disabling glance deployment does not work if you don't disable demo provisioning [979041 ] http://bugzilla.redhat.com/979041 (ON_QA) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora19 no longer has /etc/sysconfig/modules/kvm.modules [1151892 ] http://bugzilla.redhat.com/1151892 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack icehouse doesn't install anything because of repo [1175428 ] http://bugzilla.redhat.com/1175428 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack doesn't configure rabbitmq to allow non- localhost connections to 'guest' user [1111318 ] http://bugzilla.redhat.com/1111318 (MODIFIED) Component: openstack-packstack Last change: 2014-08-18 Summary: pakcstack: mysql fails to restart on CentOS6.5 [957006 ] http://bugzilla.redhat.com/957006 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack reinstall fails trying to start nagios [995570 ] http://bugzilla.redhat.com/995570 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: RFE: support setting up apache to serve keystone requests [1052948 ] http://bugzilla.redhat.com/1052948 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Could not start Service[libvirt]: Execution of '/etc/init.d/libvirtd start' returned 1 [990642 ] http://bugzilla.redhat.com/990642 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: rdo release RPM not installed on all fedora hosts [1018922 ] http://bugzilla.redhat.com/1018922 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack configures nova/neutron for qpid username/password when none is required [991801 ] http://bugzilla.redhat.com/991801 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Warning message for installing RDO kernel needs to be adjusted [1249482 ] http://bugzilla.redhat.com/1249482 (POST) Component: openstack-packstack Last change: 2015-08-05 Summary: Packstack (AIO) failure on F22 due to patch "Run neutron db sync also for each neutron module"? 
[1006534 ] http://bugzilla.redhat.com/1006534 (MODIFIED) Component: openstack-packstack Last change: 2014-04-08 Summary: Packstack ignores neutron physical network configuration if CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre [1049861 ] http://bugzilla.redhat.com/1049861 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: fail to create snapshot on an "in-use" GlusterFS volume using --force true (el7) [1028591 ] http://bugzilla.redhat.com/1028591 (MODIFIED) Component: openstack-packstack Last change: 2014-02-05 Summary: packstack generates invalid configuration when using GRE tunnels [1011628 ] http://bugzilla.redhat.com/1011628 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack reports installation completed successfully but nothing installed [1098821 ] http://bugzilla.redhat.com/1098821 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack allinone installation fails due to failure to start rabbitmq-server during amqp.pp on CentOS 6.5 [1172876 ] http://bugzilla.redhat.com/1172876 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails on centos6 with missing systemctl [1022421 ] http://bugzilla.redhat.com/1022421 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Error appeared during Puppet run: IPADDRESS_keystone.pp [1108742 ] http://bugzilla.redhat.com/1108742 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Allow specifying of a global --password option in packstack to set all keys/secrets/passwords to that value [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack requires 2 runs to install ceilometer [1039694 ] http://bugzilla.redhat.com/1039694 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails if iptables.service is not available [1018900 ] http://bugzilla.redhat.com/1018900 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1080348 ] http://bugzilla.redhat.com/1080348 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Fedora20: packstack gives traceback when SElinux permissive [1014774 ] http://bugzilla.redhat.com/1014774 (MODIFIED) Component: openstack-packstack Last change: 2014-04-23 Summary: packstack configures br-ex to use gateway ip [1006476 ] http://bugzilla.redhat.com/1006476 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: ERROR : Error during puppet run : Error: /Stage[main]/N ova::Network/Sysctl::Value[net.ipv4.ip_forward]/Sysctl[ net.ipv4.ip_forward]: Could not evaluate: Field 'val' is required [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1082729 ] http://bugzilla.redhat.com/1082729 (POST) Component: openstack-packstack Last change: 2015-02-27 Summary: [RFE] allow for Keystone/LDAP configuration at deployment time [956939 ] http://bugzilla.redhat.com/956939 (ON_QA) Component: openstack-packstack Last change: 2015-01-07 Summary: packstack install fails if ntp server does not respond [1018911 ] http://bugzilla.redhat.com/1018911 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Packstack creates duplicate cirros images in glance [1265661 ] 
http://bugzilla.redhat.com/1265661 (POST) Component: openstack-packstack Last change: 2015-09-23 Summary: Packstack does not install Sahara services (RDO Liberty) [1119920 ] http://bugzilla.redhat.com/1119920 (MODIFIED) Component: openstack-packstack Last change: 2015-07-21 Summary: http://ip/dashboard 404 from all-in-one rdo install on rhel7 [974971 ] http://bugzilla.redhat.com/974971 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: please give greater control over use of EPEL [1185921 ] http://bugzilla.redhat.com/1185921 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: RabbitMQ fails to start if configured with ssl [1008863 ] http://bugzilla.redhat.com/1008863 (MODIFIED) Component: openstack-packstack Last change: 2013-10-23 Summary: Allow overlapping ips by default [1050205 ] http://bugzilla.redhat.com/1050205 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: Dashboard port firewall rule is not permanent [1057938 ] http://bugzilla.redhat.com/1057938 (MODIFIED) Component: openstack-packstack Last change: 2014-06-17 Summary: Errors when setting CONFIG_NEUTRON_OVS_TUNNEL_IF to a VLAN interface [1022312 ] http://bugzilla.redhat.com/1022312 (MODIFIED) Component: openstack-packstack Last change: 2015-06-04 Summary: qpid should enable SSL [1175450 ] http://bugzilla.redhat.com/1175450 (POST) Component: openstack-packstack Last change: 2015-06-04 Summary: packstack fails to start Nova on Rawhide: Error: comparison of String with 18 failed at [...]ceilometer/manifests/params.pp:32 ### openstack-puppet-modules (18 bugs) [1006816 ] http://bugzilla.redhat.com/1006816 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: cinder modules require glance installed [1085452 ] http://bugzilla.redhat.com/1085452 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-02 Summary: prescript puppet - missing dependency package iptables- services [1133345 ] http://bugzilla.redhat.com/1133345 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-09-05 Summary: Packstack execution fails with "Could not set 'present' on ensure" [1185960 ] http://bugzilla.redhat.com/1185960 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-03-19 Summary: problems with puppet-keystone LDAP support [1006401 ] http://bugzilla.redhat.com/1006401 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: explicit check for pymongo is incorrect [1021183 ] http://bugzilla.redhat.com/1021183 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: horizon log errors [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Horizon help url in RDO points to the RHOS documentation [1214358 ] http://bugzilla.redhat.com/1214358 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-07-02 Summary: SSHD configuration breaks GSSAPI [1122968 ] http://bugzilla.redhat.com/1122968 (MODIFIED) Component: openstack-puppet-modules Last change: 2014-08-01 Summary: neutron/manifests/agents/ovs.pp creates /etc/sysconfig /network-scripts/ifcfg-br-{int,tun} [1219447 ] http://bugzilla.redhat.com/1219447 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: The private network created by packstack for demo tenant is wrongly marked as external [1038255 ] http://bugzilla.redhat.com/1038255 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: 
prescript.pp does not ensure iptables-services package installation [1115398 ] http://bugzilla.redhat.com/1115398 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: swift.pp: Could not find command 'restorecon' [1171352 ] http://bugzilla.redhat.com/1171352 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: add aviator [1182837 ] http://bugzilla.redhat.com/1182837 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: packstack chokes on ironic - centos7 + juno [1037635 ] http://bugzilla.redhat.com/1037635 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: prescript.pp fails with '/sbin/service iptables start' returning 6 [1022580 ] http://bugzilla.redhat.com/1022580 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: netns.py syntax error [1207701 ] http://bugzilla.redhat.com/1207701 (ON_QA) Component: openstack-puppet-modules Last change: 2015-06-04 Summary: Unable to attach cinder volume to instance [1258576 ] http://bugzilla.redhat.com/1258576 (MODIFIED) Component: openstack-puppet-modules Last change: 2015-09-01 Summary: RDO liberty packstack --allinone fails on demo provision of glance ### openstack-selinux (12 bugs) [1144539 ] http://bugzilla.redhat.com/1144539 (POST) Component: openstack-selinux Last change: 2014-10-29 Summary: selinux preventing Horizon access (IceHouse, CentOS 7) [1234665 ] http://bugzilla.redhat.com/1234665 (ON_QA) Component: openstack-selinux Last change: 2015-06-23 Summary: tempest.scenario.test_server_basic_ops.TestServerBasicO ps fails to launch instance w/ selinux enforcing [1105357 ] http://bugzilla.redhat.com/1105357 (MODIFIED) Component: openstack-selinux Last change: 2015-01-22 Summary: Keystone cannot send notifications [1093385 ] http://bugzilla.redhat.com/1093385 (MODIFIED) Component: openstack-selinux Last change: 2014-05-15 Summary: neutron L3 agent RPC errors [1099042 ] http://bugzilla.redhat.com/1099042 (MODIFIED) Component: openstack-selinux Last change: 2014-06-27 Summary: Neutron is unable to create directory in /tmp [1083566 ] http://bugzilla.redhat.com/1083566 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: Selinux blocks Nova services on RHEL7, can't boot or delete instances, [1049091 ] http://bugzilla.redhat.com/1049091 (MODIFIED) Component: openstack-selinux Last change: 2014-06-24 Summary: openstack-selinux blocks communication from dashboard to identity service [1135510 ] http://bugzilla.redhat.com/1135510 (MODIFIED) Component: openstack-selinux Last change: 2015-04-06 Summary: RHEL7 icehouse cluster with ceph/ssl SELinux errors [1049503 ] http://bugzilla.redhat.com/1049503 (MODIFIED) Component: openstack-selinux Last change: 2015-03-10 Summary: rdo-icehouse selinux issues with rootwrap "sudo: unknown uid 162: who are you?" 
[1024330 ] http://bugzilla.redhat.com/1024330 (MODIFIED) Component: openstack-selinux Last change: 2014-04-18 Summary: Wrong SELinux policies set for neutron-dhcp-agent [1154866 ] http://bugzilla.redhat.com/1154866 (ON_QA) Component: openstack-selinux Last change: 2015-01-11 Summary: latest yum update for RHEL6.5 installs selinux-policy package which conflicts openstack-selinux installed later [1134617 ] http://bugzilla.redhat.com/1134617 (MODIFIED) Component: openstack-selinux Last change: 2014-10-08 Summary: nova-api service denied tmpfs access ### openstack-swift (1 bug) [997983 ] http://bugzilla.redhat.com/997983 (MODIFIED) Component: openstack-swift Last change: 2015-01-07 Summary: swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages ### openstack-tripleo-heat-templates (1 bug) [1235508 ] http://bugzilla.redhat.com/1235508 (POST) Component: openstack-tripleo-heat-templates Last change: 2015-09-29 Summary: Package update does not take puppet managed packages into account ### openstack-trove (1 bug) [1219064 ] http://bugzilla.redhat.com/1219064 (ON_QA) Component: openstack-trove Last change: 2015-08-19 Summary: Trove has missing dependencies ### openstack-tuskar (1 bug) [1222718 ] http://bugzilla.redhat.com/1222718 (ON_QA) Component: openstack-tuskar Last change: 2015-07-06 Summary: MySQL Column is Too Small for Heat Template ### openstack-tuskar-ui (3 bugs) [1175121 ] http://bugzilla.redhat.com/1175121 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: Registering nodes with the IPMI driver always fails [1203859 ] http://bugzilla.redhat.com/1203859 (POST) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: openstack-tuskar-ui: Failed to connect RDO manager tuskar-ui over missing apostrophes for STATIC_ROOT= in local_settings.py [1176596 ] http://bugzilla.redhat.com/1176596 (MODIFIED) Component: openstack-tuskar-ui Last change: 2015-06-04 Summary: The displayed horizon url after deployment has a redundant colon in it and a wrong path ### openstack-utils (2 bugs) [1214044 ] http://bugzilla.redhat.com/1214044 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: update openstack-status for rdo-manager [1213150 ] http://bugzilla.redhat.com/1213150 (POST) Component: openstack-utils Last change: 2015-06-04 Summary: openstack-status as admin falsely shows zero instances ### python-cinderclient (1 bug) [1048326 ] http://bugzilla.redhat.com/1048326 (MODIFIED) Component: python-cinderclient Last change: 2014-01-13 Summary: the command cinder type-key lvm set volume_backend_name=LVM_iSCSI fails to run ### python-django-horizon (3 bugs) [1219006 ] http://bugzilla.redhat.com/1219006 (ON_QA) Component: python-django-horizon Last change: 2015-05-08 Summary: Wrong permissions for directory /usr/share/openstack- dashboard/static/dashboard/ [1211552 ] http://bugzilla.redhat.com/1211552 (MODIFIED) Component: python-django-horizon Last change: 2015-04-14 Summary: Need to add alias in openstack-dashboard.conf to show CSS content [1218627 ] http://bugzilla.redhat.com/1218627 (ON_QA) Component: python-django-horizon Last change: 2015-06-24 Summary: Tree icon looks wrong - a square instead of a regular expand/collpase one ### python-glanceclient (2 bugs) [1206551 ] http://bugzilla.redhat.com/1206551 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 Summary: Missing requires of python-warlock [1206544 ] http://bugzilla.redhat.com/1206544 (ON_QA) Component: python-glanceclient Last change: 2015-04-03 
Summary: Missing requires of python-jsonpatch ### python-heatclient (3 bugs) [1028726 ] http://bugzilla.redhat.com/1028726 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient needs a dependency on python-pbr [1087089 ] http://bugzilla.redhat.com/1087089 (POST) Component: python-heatclient Last change: 2015-02-01 Summary: python-heatclient 0.2.9 requires packaging in RDO [1140842 ] http://bugzilla.redhat.com/1140842 (MODIFIED) Component: python-heatclient Last change: 2015-02-01 Summary: heat.bash_completion not installed ### python-keystoneclient (3 bugs) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2015-06-04 Summary: user-get fails when using IDs which are not UUIDs [1024581 ] http://bugzilla.redhat.com/1024581 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: keystone missing tab completion [971746 ] http://bugzilla.redhat.com/971746 (MODIFIED) Component: python-keystoneclient Last change: 2015-06-04 Summary: CVE-2013-2013 OpenStack keystone: password disclosure on command line [RDO] ### python-neutronclient (3 bugs) [1052311 ] http://bugzilla.redhat.com/1052311 (MODIFIED) Component: python-neutronclient Last change: 2014-02-12 Summary: [RFE] python-neutronclient new version request [1067237 ] http://bugzilla.redhat.com/1067237 (ON_QA) Component: python-neutronclient Last change: 2014-03-26 Summary: neutronclient with pre-determined auth token fails when doing Client.get_auth_info() [1025509 ] http://bugzilla.redhat.com/1025509 (MODIFIED) Component: python-neutronclient Last change: 2014-06-24 Summary: Neutronclient should not obsolete quantumclient ### python-novaclient (1 bug) [947535 ] http://bugzilla.redhat.com/947535 (MODIFIED) Component: python-novaclient Last change: 2015-06-04 Summary: nova commands fail with gnomekeyring IOError ### python-openstackclient (1 bug) [1171191 ] http://bugzilla.redhat.com/1171191 (POST) Component: python-openstackclient Last change: 2015-03-02 Summary: Rebase python-openstackclient to version 1.0.0 ### python-oslo-config (1 bug) [1110164 ] http://bugzilla.redhat.com/1110164 (ON_QA) Component: python-oslo-config Last change: 2015-06-04 Summary: oslo.config >=1.2.1 is required for trove-manage ### python-pecan (1 bug) [1265365 ] http://bugzilla.redhat.com/1265365 (MODIFIED) Component: python-pecan Last change: 2015-09-25 Summary: Neutron missing pecan dependency ### python-swiftclient (1 bug) [1126942 ] http://bugzilla.redhat.com/1126942 (MODIFIED) Component: python-swiftclient Last change: 2014-09-16 Summary: Swift pseudo-folder cannot be interacted with after creation ### python-tuskarclient (2 bugs) [1209395 ] http://bugzilla.redhat.com/1209395 (POST) Component: python-tuskarclient Last change: 2015-06-04 Summary: `tuskar help` is missing a description next to plan- templates [1209431 ] http://bugzilla.redhat.com/1209431 (POST) Component: python-tuskarclient Last change: 2015-06-18 Summary: creating a tuskar plan with the exact name gives the user a traceback ### rdo-manager (5 bugs) [1212351 ] http://bugzilla.redhat.com/1212351 (POST) Component: rdo-manager Last change: 2015-06-18 Summary: [RFE] [RDO-Manager] [CLI] Add ability to poll for discovery state via CLI command [1210023 ] http://bugzilla.redhat.com/1210023 (MODIFIED) Component: rdo-manager Last change: 2015-04-15 Summary: instack-ironic-deployment --nodes-json instackenv.json --register-nodes fails [1224584 ] http://bugzilla.redhat.com/1224584 (MODIFIED) Component: 
rdo-manager Last change: 2015-05-25 Summary: CentOS-7 undercloud install fails w/ "RHOS" undefined variable [1251267 ] http://bugzilla.redhat.com/1251267 (POST) Component: rdo-manager Last change: 2015-08-12 Summary: Overcloud deployment fails for unspecified reason [1222124 ] http://bugzilla.redhat.com/1222124 (MODIFIED) Component: rdo-manager Last change: 2015-05-29 Summary: rdo-manager: fail to discover nodes with "instack- ironic-deployment --discover-nodes": ERROR: Data pre- processing failed ### rdo-manager-cli (8 bugs) [1212367 ] http://bugzilla.redhat.com/1212367 (POST) Component: rdo-manager-cli Last change: 2015-06-16 Summary: Ensure proper nodes states after enroll and before deployment [1233429 ] http://bugzilla.redhat.com/1233429 (POST) Component: rdo-manager-cli Last change: 2015-06-20 Summary: Lack of consistency in specifying plan argument for openstack overcloud commands [1233259 ] http://bugzilla.redhat.com/1233259 (MODIFIED) Component: rdo-manager-cli Last change: 2015-08-03 Summary: Node show of unified CLI has bad formatting [1232838 ] http://bugzilla.redhat.com/1232838 (POST) Component: rdo-manager-cli Last change: 2015-09-04 Summary: OSC plugin isn't saving plan configuration values [1229912 ] http://bugzilla.redhat.com/1229912 (POST) Component: rdo-manager-cli Last change: 2015-06-10 Summary: [rdo-manager-cli][unified-cli]: The command 'openstack baremetal configure boot' fails over - AttributeError (when glance images were uploaded more than once) . [1219053 ] http://bugzilla.redhat.com/1219053 (POST) Component: rdo-manager-cli Last change: 2015-06-18 Summary: "list" command doesn't display nodes in some cases [1211190 ] http://bugzilla.redhat.com/1211190 (POST) Component: rdo-manager-cli Last change: 2015-06-04 Summary: Unable to replace nodes registration instack script due to missing post config action in unified CLI [1230265 ] http://bugzilla.redhat.com/1230265 (POST) Component: rdo-manager-cli Last change: 2015-06-26 Summary: [rdo-manager-cli][unified-cli]: openstack unified-cli commands display - Warning Module novaclient.v1_1 is deprecated. ### rdopkg (1 bug) [1220832 ] http://bugzilla.redhat.com/1220832 (ON_QA) Component: rdopkg Last change: 2015-08-06 Summary: python-manilaclient is missing from kilo RDO repository Thanks, Chandan Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Wed Sep 30 16:18:46 2015 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 30 Sep 2015 12:18:46 -0400 (EDT) Subject: [Rdo-list] [meeting] RDO packaging meeting (2015-09-30) In-Reply-To: <524952458.60533412.1443629886016.JavaMail.zimbra@redhat.com> Message-ID: <1911591239.60533720.1443629926431.JavaMail.zimbra@redhat.com> ======================================== #rdo: RDO Packaging Meeting (2015-09-30) ======================================== Meeting started by jpena at 15:01:50 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-09-30/rdo.2015-09-30-15.01.log.html . 
Meeting summary --------------- * Roll Call (jpena, 15:02:04) * stable/liberty Delorean (jpena, 15:07:38) * LINK: https://trello.com/c/VPTFAP4o/72-delorean-stable-liberty (jpena, 15:08:02) * LINK: https://review.gerrithub.io/248584 (jpena, 15:09:14) * ACTION: dmsimard to ensure khaleesi/jenkins runs off centos7-libert (dmsimard, 15:19:01) * ACTION: apevec/jpena switch centos7-liberty keeping Trunk current-passed-ci (apevec, 15:19:19) * ACTION: derekh to review https://review.gerrithub.io/248726 (apevec, 15:19:58) * ACTION: dmsimard to create the mitaka jenkins CI (dmsimard, 15:22:56) * RC1 in Rawhide/CBS cloud7-liberty TODAY Sep 30 (jpena, 15:27:58) * LINK: https://trello.com/c/GPqDlVLs/63-liberty-rc1-rpms (jpena, 15:28:06) * LINK: http://annawrites.com/blog/wp-content/uploads/2013/08/trouble-with-tribbles.jpeg (eggmaster, 15:32:46) * LINK: https://launchpad.net/sahara/liberty/liberty-rc1/+download/sahara-3.0.0.0rc1.tar.gz (elmiko, 15:42:17) * ACTION: trown send PR to take over maintenance of Ironic (trown, 15:45:55) * ACTION: apevec to ping missing mainters for RC1 rebases (apevec, 15:47:18) * Package needs version bump (jpena, 15:53:35) * New Package python-reno (jpena, 16:04:25) * chair rotation for next meeting (jpena, 16:07:46) * ACTION: trown to chair next meeting (jpena, 16:08:10) * open floor (jpena, 16:08:27) * ACTION: elmiko follow up with xaeth (Greg Swift) about the scm req for openstack-barbican (apevec, 16:16:06) Meeting ended at 16:16:39 UTC. Action Items ------------ * dmsimard to ensure khaleesi/jenkins runs off centos7-libert * apevec/jpena switch centos7-liberty keeping Trunk current-passed-ci * derekh to review https://review.gerrithub.io/248726 * dmsimard to create the mitaka jenkins CI * trown send PR to take over maintenance of Ironic * apevec to ping missing mainters for RC1 rebases * trown to chair next meeting * elmiko follow up with xaeth (Greg Swift) about the scm req for openstack-barbican Action Items, by person ----------------------- * apevec * apevec/jpena switch centos7-liberty keeping Trunk current-passed-ci * apevec to ping missing mainters for RC1 rebases * derekh * derekh to review https://review.gerrithub.io/248726 * dmsimard * dmsimard to ensure khaleesi/jenkins runs off centos7-libert * dmsimard to create the mitaka jenkins CI * elmiko * elmiko follow up with xaeth (Greg Swift) about the scm req for openstack-barbican * jpena * apevec/jpena switch centos7-liberty keeping Trunk current-passed-ci * trown * trown send PR to take over maintenance of Ironic * trown to chair next meeting * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * apevec (148) * number80 (45) * jpena (36) * trown (24) * dmsimard (23) * elmiko (18) * jruzicka (16) * ihrachys (14) * zodbot (6) * vkmc (4) * EmilienM (3) * derekh (3) * eggmaster (2) * egafford (2) * social (2) * mburned (1) * zaneb (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot ? Thanks, Javier From whayutin at redhat.com Wed Sep 30 20:39:15 2015 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 30 Sep 2015 16:39:15 -0400 Subject: [Rdo-list] [ci] delorean ci is down Message-ID: FYI, The jenkins slave used in delorean package CI is experiencing some issues atm. The CI will be shutdown while these issues are resolved. Thank you -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ibravo at ltgfederal.com Wed Sep 30 21:38:09 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 30 Sep 2015 17:38:09 -0400 Subject: [Rdo-list] RDO Manager undercloud install failure Message-ID: All, Today I installed RDO-Manager following the upstream documentation and got an error when executing 'openstack undercloud install'. I presented the issue in IRC and John Trowbridge was very helpful trying to determine the root cause, and after a couple of different attempts, the error remained. I'll try to summarize the changes that we made: This was the error message: [2015-09-30 17:07:36,445] (os-refresh-config) [INFO] Starting phase configure dib-run-parts Wed Sep 30 17:07:36 EDT 2015 Running /usr/libexec/os-refresh-config/configure.d/00-apply-selinux-policy + set -o pipefail + '[' -x /usr/sbin/semanage ']' + semodule -i /opt/stack/selinux-policy/ipxe.pp dib-run-parts Wed Sep 30 17:07:53 EDT 2015 00-apply-selinux-policy completed dib-run-parts Wed Sep 30 17:07:53 EDT 2015 Running /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies + set -o pipefail ++ mktemp -d + TMPDIR=/tmp/tmp.GvcpM84Lsi + '[' -x /usr/sbin/semanage ']' + cd /tmp/tmp.GvcpM84Lsi ++ ls '/opt/stack/selinux-policy/*.te' ls: cannot access /opt/stack/selinux-policy/*.te: No such file or directory + semodule -i '/tmp/tmp.GvcpM84Lsi/*.pp' semodule: Failed on /tmp/tmp.GvcpM84Lsi/*.pp! [2015-09-30 17:07:53,136] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] [2015-09-30 17:07:53,136] (os-refresh-config) [ERROR] Aborting... Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 562, in install _run_orc(instack_env) File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 494, in _run_orc _run_live_command(args, instack_env, 'os-refresh-config') File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 325, in _run_live_command raise RuntimeError('%s failed. See log for details.' % name) RuntimeError: os-refresh-config failed. See log for details. Command 'instack-install-undercloud' returned non-zero exit status 1 Going a bit deeper, the error appears when executing https://github.com/rdo-management/tripleo-image-elements/blob/mgt-master/elements/selinux/os-refresh-config/configure.d/20-compile-and-install-selinux-policies John recommended explicitly forcing the use of the centos.json via 'export JSONFILE=/usr/share/instack-undercloud/json-files/centos-7-undercloud-packages.json', but that did not solve the issue. Then we tried adding '20-compile-and-install-selinux-policies' under the blacklist section of the file, and after running the installation again, it once more failed at the same step of the installation. Does anyone have any more ideas? Thanks, IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Wed Sep 30 22:18:02 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 30 Sep 2015 18:18:02 -0400 Subject: [Rdo-list] RDO Manager undercloud install failure In-Reply-To: References: Message-ID: Following: I had this error twice this week already and I believe you are the 3rd or 4th person this week with the same error.
On Wed, Sep 30, 2015 at 5:38 PM, Ignacio Bravo wrote: > [...] -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL:
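The trace above points at the root cause: /opt/stack/selinux-policy contains no .te sources, so the *.te and *.pp globs in 20-compile-and-install-selinux-policies never match anything, get passed through literally, and semodule ends up being handed the literal path /tmp/tmp.GvcpM84Lsi/*.pp. A minimal sketch of a guarded version of that configure.d step, assuming bash plus the standard checkmodule/semodule_package/semodule tools (an illustration only, not the actual tripleo-image-elements script):

#!/bin/bash
# Hypothetical guarded variant: bail out early when there are no local .te
# policy sources instead of letting unexpanded globs reach semodule.
set -eu -o pipefail

POLICY_DIR=/opt/stack/selinux-policy
TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT

if [ -x /usr/sbin/semanage ]; then
    shopt -s nullglob                    # unmatched globs expand to nothing
    te_files=("$POLICY_DIR"/*.te)
    if [ ${#te_files[@]} -eq 0 ]; then
        echo "No .te policy sources in $POLICY_DIR, nothing to compile."
        exit 0
    fi
    cd "$TMPDIR"
    for te in "${te_files[@]}"; do
        mod=$(basename "$te" .te)
        # Compile the type-enforcement source and package it as a module.
        checkmodule -M -m -o "$mod.mod" "$te"
        semodule_package -o "$mod.pp" -m "$mod.mod"
    done
    semodule -i "$TMPDIR"/*.pp           # unquoted so the glob really expands
fi

With the nullglob guard the step becomes a no-op on images that ship no extra policy sources, which is exactly the situation the trace shows.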
From ibravo at ltgfederal.com Wed Sep 30 22:50:18 2015 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Wed, 30 Sep 2015 18:50:18 -0400 Subject: [Rdo-list] RDO Manager undercloud install failure In-Reply-To: References: Message-ID: <72D2CA85-BA0A-426E-B622-7D4EE4B55A66@ltgfederal.com> I managed to get past the error by deleting the file /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies. I might not have mentioned that I first ran the installation without the delorean-current repo, which failed and pushed me to ask in IRC. Afterwards I installed the delorean-current repo and retried the installation, with the issues referenced in my note below. I wanted to note that the timestamps on all files in the directory above were in sync with the time I ran the latest install, with the exception of 20-compile-and-install-selinux-policies, which had an earlier datestamp, probably from the first run without the delorean-current repo or without the specification of centos or the centos-7 .json file. After deleting the 20-xxx file and issuing export NODE_DIST=centos7, the installation seems to have succeeded. IB __ Ignacio Bravo LTG Federal, Inc www.ltgfederal.com Office: (703) 951-7760 > On Sep 30, 2015, at 6:18 PM, Mohammed Arafa wrote: > [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
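For anyone who lands on the same failure, the workaround described above condenses to a few commands (a stopgap sketch of what Ignacio reports, not an upstream fix; removing the os-refresh-config step simply skips the SELinux policy compile phase):

# Stopgap from this thread: drop the failing configure.d step, force the
# CentOS 7 element set, and re-run the undercloud installation.
sudo rm /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies
export NODE_DIST=centos7
openstack undercloud install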
>> >> I?ll try to summarize the changes that we made: >> >> This was the error message: >> [2015-09-30 17:07:36,445] (os-refresh-config) [INFO] Starting phase >> configure >> dib-run-parts Wed Sep 30 17:07:36 EDT 2015 Running >> /usr/libexec/os-refresh-config/configure.d/00-apply-selinux-policy >> + set -o pipefail >> + '[' -x /usr/sbin/semanage ']' >> + semodule -i /opt/stack/selinux-policy/ipxe.pp >> dib-run-parts Wed Sep 30 17:07:53 EDT 2015 00-apply-selinux-policy >> completed >> dib-run-parts Wed Sep 30 17:07:53 EDT 2015 Running >> /usr/libexec/os-refresh-config/configure.d/20-compile-and-install-selinux-policies >> + set -o pipefail >> ++ mktemp -d >> + TMPDIR=/tmp/tmp.GvcpM84Lsi >> + '[' -x /usr/sbin/semanage ']' >> + cd /tmp/tmp.GvcpM84Lsi >> ++ ls '/opt/stack/selinux-policy/*.te' >> ls: cannot access /opt/stack/selinux-policy/*.te: No such file or >> directory >> + semodule -i '/tmp/tmp.GvcpM84Lsi/*.pp' >> semodule: Failed on /tmp/tmp.GvcpM84Lsi/*.pp! >> [2015-09-30 17:07:53,136] (os-refresh-config) [ERROR] during configure >> phase. [Command '['dib-run-parts', >> '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit >> status 1] >> >> [2015-09-30 17:07:53,136] (os-refresh-config) [ERROR] Aborting... >> Traceback (most recent call last): >> File "", line 1, in >> File >> "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line >> 562, in install >> _run_orc(instack_env) >> File >> "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line >> 494, in _run_orc >> _run_live_command(args, instack_env, 'os-refresh-config') >> File >> "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line >> 325, in _run_live_command >> raise RuntimeError('%s failed. See log for details.' % name) >> RuntimeError: os-refresh-config failed. See log for details. >> Command 'instack-install-undercloud' returned non-zero exit status 1 >> >> Going a bit deeper, the error appears when executing >> https://github.com/rdo-management/tripleo-image-elements/blob/mgt-master/elements/selinux/os-refresh-config/configure.d/20-compile-and-install-selinux-policies >> >> John recommended to explicitly force the use of the the centos.json via >> ?export >> JSONFILE=/usr/share/insack-undercloud/json-files/centos-7-undercloud-packages.json? >> but that did not solve the issue. >> >> Then we tried adding ?20-comiple-and-install-selinux-policies? under the >> blacklist section of the file, and after running the installation again, it >> once more failed at the same step on the installation. >> >> Does anyone has any more ideas? >> >> Thanks, >> IB >> >> >> >> __ >> Ignacio Bravo >> LTG Federal, Inc >> www.ltgfederal.com >> Office: (703) 951-7760 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > > > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: